This lecture describes the concept of Markov perfect equilibrium. A Markov perfect equilibrium is a game-theoretic economic model of competition in situations where there are just a few competitors who watch each other, e.g. two firms in an industry. If all firms are identical, the result is the unique such equilibrium.

Substituting the inverse demand curve into firm $ i $'s payoff gives

$$
\pi_i(q_i, q_{-i}, \hat q_i) = a_0 q_i - a_1 q_i^2 - a_1 q_i q_{-i} - \gamma (\hat q_i - q_i)^2 \tag{12}
$$

As in Markov perfect equilibrium, a key insight here is that equations (6) and (8) are linear in $ F_{1t} $ and $ F_{2t} $. After these equations have been solved, we can also deduce associated sequences of worst-case shocks.

To dig a little beneath the forces driving these outcomes, we want to plot $ q_{1t} $ and $ q_{2t} $ under the equilibrium objects presented. We see from the above graph that under robustness concerns, firm 1 produces less than firm 2, reflecting its greater fear of misspecification.
Here $ p = p_t $ is the price of the good, $ q_i = q_{it} $ is the output of firm $ i=1,2 $ at time $ t $ and $ a_0 > 0, a_1 >0 $. Two firms are the only producers of a good the demand for which is governed by a linear inverse demand function.

Markov perfect equilibrium is a key notion for analyzing economic problems involving dynamic strategic interaction, and a cornerstone of applied game theory. For convenience, we’ll start with a finite horizon formulation, where $ t_0 $ is the initial date and $ t_1 $ is the common terminal date. In practice, we usually fix $ t_1 $ and compute the equilibrium of an infinite horizon game by driving $ t_0 \rightarrow - \infty $. Consequently, a Markov perfect equilibrium of a dynamic stochastic game must satisfy the conditions for Nash equilibrium of a certain family of reduced one-shot games, each akin to a normal form game.

We call such equilibria common information based Markov perfect equilibria of the game, which can be viewed as a refinement of Nash equilibrium in games with asymmetric information. Unfortunately, existence cannot be guaranteed under the conditions in Ericson and Pakes (1995).

To map these problems into our framework, we again define the state and controls as in the duopoly model, while thinking that the state evolves according to

$$
x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it} \tag{2}
$$

The matrix $ P_{2t} $ solves the Riccati difference equation

$$
P_{2t} =
\Pi_{2t} - (\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t})' (Q_2 + \beta B_2' {\mathcal D}_2 ( P_{2t+1}) B_2)^{-1}
(\beta B_2' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) + \beta \Lambda_{2t}' {\mathcal D}_2 ( P_{2t+1}) \Lambda_{2t} \tag{9}
$$

The following code prepares graphs that compare market-wide output $ q_{1t} + q_{2t} $ and the price of the good in the Markov perfect equilibrium without robust firms, and prints the baseline robust transition matrix $ A^o $ together with the linear transition rules for the state vector.

*Linear Markov Perfect Equilibria with Robust Agents* is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.
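To make this mapping concrete, here is a minimal sketch (a hypothetical illustration: the values $ a_0 = 10 $, $ a_1 = 2 $, $ \gamma = 12 $ and the exact matrix layout are assumptions of this sketch, not taken from the text) of encoding the duopoly as an LQ game with state $ x_t = (1, q_{1t}, q_{2t})' $ and controls $ u_{it} = q_{i,t+1} - q_{it} $:

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the text)
a0, a1, gamma = 10.0, 2.0, 12.0

# State x_t = (1, q_1t, q_2t)'; control u_it = q_{i,t+1} - q_it
A = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])

# Firm i's one-period payoff, excluding adjustment costs, written as the
# quadratic form -x' R_i x; adjustment costs enter through Q_i = gamma
R1 = np.array([[0.0,     -a0 / 2, 0.0],
               [-a0 / 2,  a1,     a1 / 2],
               [0.0,      a1 / 2, 0.0]])
R2 = np.array([[0.0,      0.0,    -a0 / 2],
               [0.0,      0.0,     a1 / 2],
               [-a0 / 2,  a1 / 2,  a1]])
Q1 = Q2 = np.array([[gamma]])

# Sanity check: -x' R1 x reproduces a0 q1 - a1 q1^2 - a1 q1 q2
q1, q2 = 1.5, 2.0
x = np.array([1.0, q1, q2])
payoff_quadratic = -(x @ R1 @ x)
payoff_direct = a0 * q1 - a1 * q1**2 - a1 * q1 * q2
print(np.isclose(payoff_quadratic, payoff_direct))  # prints True
```

The adjustment-cost term $ \gamma(\hat q_i - q_i)^2 $ is then exactly $ u_{it}' Q_i u_{it} $ under this choice of control.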
$$
\mathcal D_1(P) := P + PC (\theta_1 I - C' P C)^{-1} C' P \tag{5}
$$

A short way of saying this is that misspecification fears are all ‘just in the minds’ of the firms (by ex-post we mean after extremization of each firm’s intertemporal objective). This means that the robust rules are the unique optimal rules (or best responses) to the indicated worst-case transition dynamics. Firm $ 1 $ fears misspecification more than firm $ 2 $: larger concerns about misspecification induce firm 1 to be more cautious than firm 2 in predicting market price and the output of the other firm. These ideas apply to a range of examples, including stochastic games with endogenous shocks and a stochastic dynamic oligopoly model.

The strategies have the Markov property of memorylessness, meaning that each player's mixed strategy can be conditioned only on the state of the game. Player $ i $ employs linear decision rules $ u_{it} = - F_{it} x_t $, where $ F_{it} $ is a $ k_i \times n $ matrix. The term $ \theta_i v_{it}' v_{it} $ is a time $ t $ contribution to an entropy penalty that an (imaginary) loss-maximizing agent inside agent $ i $’s mind charges for distorting the law of motion in a way that harms agent $ i $.

Our routine computes policies for firms 1 and 2 as the limit of a Nash linear quadratic dynamic game in which player $ i $ minimizes a discounted sum of one-period losses of the form $ x_t' r_i x_t + u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it} $, subject to $ x_{t+1} = A x_t + b_1 u_{1t} + b_2 u_{2t} + C w_{it+1} $ and a perceived control law $ u_j(t) = - f_j x_t $ for the other player.

This completes our review of the duopoly model without concerns for robustness.
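A direct NumPy transcription of the operator in (5) can be sketched as follows (the test matrices are made-up values for illustration):

```python
import numpy as np

def D(P, C, theta):
    """Operator from (5): D(P) = P + P C (theta*I - C'PC)^{-1} C' P."""
    h = C.shape[1]
    M = theta * np.eye(h) - C.T @ P @ C
    return P + P @ C @ np.linalg.solve(M, C.T @ P)

# Made-up positive definite P and volatility matrix C for illustration
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
C = np.array([[0.1],
              [0.2]])

# As theta -> +infinity (full trust in the baseline model), D(P) -> P
print(np.allclose(D(P, C, 1e12), P, atol=1e-8))  # prints True
```

For finite $ \theta $ the correction term $ PC(\theta I - C'PC)^{-1}C'P $ is positive semidefinite whenever $ \theta I - C'PC $ is positive definite, so robustness concerns only increase the continuation value matrix.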
$ x_t $ is an $ n \times 1 $ state vector, $ u_{it} $ is a $ k_i \times 1 $ vector of controls for player $ i $, and $ v_{it} $ is an $ h \times 1 $ vector of distortions to the state dynamics that concern player $ i $. $ \theta_i \in [\underline \theta_i, +\infty] $ is a scalar multiplier parameter of player $ i $. The imaginary loss-maximizing agent helps the loss-minimizing agent by helping him construct bounds on the behavior of his decision rule over a range of alternative models of the state transition dynamics.

A robust Markov perfect equilibrium is a pair of sequences $ \{F_{1t}, F_{2t}\} $ and a pair of sequences $ \{K_{1t}, K_{2t}\} $ over $ t = t_0, \ldots, t_1 - 1 $ that satisfy: $ \{F_{1t}, K_{1t}\} $ solves player 1’s robust decision problem, taking $ \{F_{2t}\} $ as given, and $ \{F_{2t}, K_{2t}\} $ solves player 2’s robust decision problem, taking $ \{F_{1t}\} $ as given.

If we substitute $ u_{2t} = - F_{2t} x_t $ into (1) and (2), then player 1’s problem becomes minimization-maximization of

$$
\sum_{t=t_0}^{t_1 - 1}
\beta^{t - t_0}
\left\{
x_t' \Pi_{1t} x_t + u_{1t}' Q_1 u_{1t} + 2 u_{1t}' \Gamma_{1t} x_t - \theta_1 v_{1t}' v_{1t}
\right\}
$$

Here in all cases $ t = t_0, \ldots, t_1 - 1 $ and the terminal conditions are $ P_{it_1} = 0 $. In what follows, $ \Pi_{it} := R_i + F_{-it}' S_i F_{-it} $, the time subscript is suppressed when possible to simplify notation, and $ \hat x $ denotes a next period value of variable $ x $.

Markov perfect equilibrium has been used in analyses of industrial organization, macroeconomics, and political economy.
Player $ i $ employs linear decision rules $ u_{it} = - F_{it} x_t $, where $ F_{it} $ is a $ k_i \times n $ matrix. Without concerns for robustness, the model is identical to the duopoly model from the Markov perfect equilibrium lecture. ([HS08a] discuss how this property of robust decision rules is connected to the concept of admissibility in Bayesian statistical decision theory.) Player $ i $’s malevolent alter ego employs decision rules $ v_{it} = K_{it} x_t $ where $ K_{it} $ is an $ h \times n $ matrix. This refers to a (subgame) perfect equilibrium of the dynamic game where players’ strategies depend only on the current state.
But now one or more agents doubt that the baseline model is correctly specified. Moreover, existence by itself is not enough for two reasons: heterogeneity evolves endogenously in response to random occurrences, for example, in the investment process, and nonexistence of a stationary Markov perfect equilibrium cannot be ruled out.

A robust Markov perfect equilibrium is characterized in part by a pair of equations that express linear decision rules for worst-case shocks for each agent as functions of that agent’s continuation value function as well as parameters of preferences and state transition matrices. These rules involve the operator $ {\mathcal D}_i $ from the Robustness lecture, namely (5).

The matrix $ F_{1t} $ in the policy rule $ u_{1t} = - F_{1t} x_t $ that solves agent 1’s problem satisfies

$$
F_{1t} = (Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1( P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) \tag{6}
$$
Substituting the inverse demand curve (10) into (11) lets us express the one-period payoff as in (12).

The law of motion for the state $ x_t $ is $ x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} $.

A robust Markov perfect equilibrium is also characterized by a pair of equations that express linear decision rules for each agent as functions of that agent’s continuation value function as well as parameters of preferences and state transition matrices.
where $ P_{1t} $ solves the matrix Riccati difference equation

$$
P_{1t} =
\Pi_{1t} - (\beta B_1' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} + \Gamma_{1t})' (Q_1 + \beta B_1' {\mathcal D}_1( P_{1t+1}) B_1)^{-1}
(\beta B_1' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} + \Gamma_{1t}) + \beta \Lambda_{1t}' {\mathcal D}_1(P_{1t+1}) \Lambda_{1t} \tag{7}
$$

We compute equilibria with the function nnash_robust, built by extending the function qe.nnash so that each player $ i $ also has concerns about model misspecification. The solution computed in this routine is the $ F_i $ and $ P_i $ of the associated double optimal linear regulator. The main inputs correspond to the MPE equations: each $ R_i $ should be of size $ (n, n) $ and, as above, $ C $ is of size $ (n, c) $, where $ c $ is the size of $ w $. Optional arguments are beta (default 1.0), tol (default 1e-8, the tolerance level for convergence), and max_iter (default 1000, the maximum number of iterations allowed). The outputs are F1, of shape $ (k_1, n) $; F2, of shape $ (k_2, n) $; and P1 and P2, each of shape $ (n, n) $, the steady-state solutions to the associated discrete matrix Riccati equations. Internally, the routine unloads parameters and makes sure everything is a matrix, multiplies $ A $, $ B_1 $, $ B_2 $ by $ \sqrt{\beta} $ to enforce discounting, and then iterates; the required inverses may fail to be computed if the relevant matrices are singular.

Because the two firms’ worst-case shocks differ, worst-case forecasts of industry output $ q_{1t} + q_{2t} $ and price $ p_t $ also differ between the two firms. We therefore compute total output under the RMPE from player 1’s belief and from player 2’s belief, and extract and plot industry output $ q_t=q_{1t}+q_{2t} $ and price $ p_t = a_0 - a_1 q_t $.
We want to compare $ p_t $ under equilibrium decision rules $ F_i, i = 1, 2 $ from an ordinary Markov perfect equilibrium with its path under the decision rules from a Markov perfect equilibrium with robust firms, with multiplier parameters $ \theta_i, i = 1,2 $ set as described above.

To map a robust version of the duopoly model into coupled robust linear-quadratic dynamic programming problems, the backward induction is transformed into a robustness version by adding the maximization operator $ \mathcal D(P) $, and we need to solve these $ k_1 + k_2 $ equations simultaneously. The agents share a common baseline model for the transition dynamics of the state vector. Such an equilibrium can be calculated from the fixed points of a finite sequence of low-dimensional contraction mappings.

The estimation approach recovers a Markov perfect equilibrium model from observations on partial trajectories, and we discuss estimation of the impacts of firm conduct on consumers and rival firms. The second step estimator is a simple simulated minimum distance estimator.

Computing Equilibrium: we formulate a linear robust Markov perfect equilibrium as follows. Generally, Markov perfect equilibria in games with alternating moves are different than those in games with simultaneous moves.

A Markov perfect equilibrium is an equilibrium concept in game theory. It is used to study settings where multiple decision makers interact non-cooperatively over time, each seeking to pursue its own objective. It is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be identified. Alternatively, using the earlier terminology of the differential (or difference) game literature, the equilibrium is a closed-loop equilibrium.

As described in Markov perfect equilibrium, when decision-makers have no concerns about the robustness of their decision rules to misspecifications of the state dynamics, a Markov perfect equilibrium can be computed via backward recursion on two sets of equations.

Similarly, the policy that solves player 2’s problem is

$$
F_{2t} = (Q_2 + \beta B_2' {\mathcal D}_2( P_{2t+1} ) B_2)^{-1}
(\beta B_2' {\mathcal D}_2( P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}) \tag{8}
$$

which can be solved by working backward.

We use the function nnash_robust to compute a Markov perfect equilibrium of the infinite horizon linear quadratic dynamic game with robust firms. We then compare forecasts under the worst-case model within the robust MPE with those under the baseline model, in both cases under the robust decision rules within the robust MPE; the two firms’ worst-case beliefs differ even though they share the same baseline model and information. From $ \{x_t\} $ paths generated by each of these transition laws, we pull off the associated price and total output sequences.
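The mechanics of this last step can be sketched as follows; the decision rules below are hypothetical placeholder numbers (not equilibrium values), chosen only so that the closed-loop system is stable:

```python
import numpy as np

# Hypothetical decision rules for illustration only (not equilibrium values)
F1 = np.array([[-0.7, 0.3, 0.1]])
F2 = np.array([[-0.7, 0.1, 0.3]])
a0, a1 = 10.0, 2.0               # illustrative demand parameters

A = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])

# Closed-loop transition law x_{t+1} = (A - B1 F1 - B2 F2) x_t
AF = A - B1 @ F1 - B2 @ F2

T = 50
x = np.array([1.0, 0.5, 0.5])    # x_t = (1, q_1t, q_2t)'
q_path, p_path = [], []
for _ in range(T):
    q = x[1] + x[2]              # total output q_t = q_1t + q_2t
    q_path.append(q)
    p_path.append(a0 - a1 * q)   # price from inverse demand p_t = a0 - a1 q_t
    x = AF @ x

print(round(q_path[-1], 4), round(p_path[-1], 4))  # prints 3.5 3.0
```

With these placeholder rules the system converges to a steady state with each firm producing 1.75, so total output is 3.5 and the price is 3.0.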
Notice how $ j $’s control law $ F_{jt} $ is a function of $ \{F_{is}, s \geq t, i \neq j \} $. We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $ F_{it} $ settle down to be time-invariant as $ t_1 \rightarrow +\infty $.

The one-period payoff function of firm $ i $ is price times quantity minus adjustment costs, as in (11). Firm $ i $ chooses a decision rule that sets next period quantity $ \hat q_i $ as a function $ f_i $ of the current state $ (q_i, q_{-i}) $.

Later we will describe the (erroneous) beliefs of the two firms that justify their robust decisions as best responses to transition laws that are distorted relative to the baseline model. In equilibrium, firms’ concerns about misspecification of the baseline model do not materialize. Firm 1 thinks that total output will be higher and price lower than does firm 2; this leads firm 1 to produce less than firm 2. Firm 2’s output path is virtually the same as it would be in an ordinary Markov perfect equilibrium with no robust firms.

To find these worst-case beliefs, we compute the following three “closed-loop” transition matrices.

In this paper, we present a method for the characterization of Markov perfect Nash equilibria being Pareto efficient in non-linear differential games. For that purpose, we use a new method for computing Nash equilibria with Markov strategies by means of a system of quasilinear partial differential equations.

© Copyright 2020, Thomas J. Sargent and John Stachurski.
If at most two heterogeneous firms serve the industry, it is the unique “natural” equilibrium.

The one-period payoff function of firm $ i $ is

$$
\pi_i = p q_i - \gamma (\hat q_i - q_i)^2, \quad \gamma > 0 \tag{11}
$$

After these equations have been solved, we can take $ F_{it} $ and solve for $ P_{it} $ in (7) and (9). Because strategies depend only on payoff-relevant state variables, our equilibrium is Markov-perfect Nash in investment strategies in the sense of Maskin and Tirole (1987, 1988a, 1988b). Strategies that depend only on the state are called Markovian, and a subgame perfect equilibrium in Markov strategies is called a Markov perfect equilibrium (MPE). However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent’s choice.

We first conduct a comparison test to check if nnash_robust agrees with qe.nnash in the non-robustness case in which each $ \theta_i \approx +\infty $.
Decisions of two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents. As we saw in Markov perfect equilibrium, the study of Markov perfect equilibria in dynamic games with two players leads us to an interrelated pair of Bellman equations. We will focus on settings with backward recursion on two sets of equations.

These specifications simplify calculations and allow us to give a simple example that illustrates basic forces. Demand is governed by the linear inverse demand function

$$
p = a_0 - a_1 (q_1 + q_2) \tag{10}
$$

Estimation proceeds under the assumption that behavior is consistent with Markov perfect equilibrium; our nested fixed point procedure extends Rust’s (1987) approach.

A robust decision rule of firm $ i $ will take the form $ u_{it} = - F_i x_t $, inducing the following closed-loop system for the evolution of $ x $ in the Markov perfect equilibrium:

$$
x_{t+1} = (A - B_1 F_1 - B_2 F_2 ) x_t \tag{13}
$$

(Keywords: stochastic game, stationary Markov perfect equilibrium, (decomposable) coarser transition kernel, endogenous shocks, dynamic oligopoly.)

For example, Bhaskar and Vega-Redondo (2002) show that any subgame perfect equilibrium of the alternating move game in which players’ memory is bounded and their payoffs reflect the costs of strategic complexity must coincide with an MPE.

We want to compare the dynamics of price and output under the baseline specification and under the robust decision rules. Here we set the robustness and volatility matrix parameters as follows:

$$
C = \begin{pmatrix} 0 \\ 0.01 \\ 0.01 \end{pmatrix}
$$

Because we have set $ \theta_1 < \theta_2 < + \infty $ we know that firm 1 fears misspecification more than firm 2.
A Markov perfect equilibrium with robust agents will be characterized by a pair of Bellman equations, one for each agent.
In game theory, a subgame perfect equilibrium (or subgame perfect Nash equilibrium) is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game.

We develop an algorithm for computing a symmetric Markov-perfect equilibrium quickly by finding the fixed points of a finite sequence of low-dimensional contraction mappings.

Now we activate robustness concerns of both firms. This lecture is based on ideas described in chapter 15 of [HS08a] and in Markov perfect equilibrium and Robustness. Then we recover the one-period payoffs (11) for the two firms in the duopoly model.

The solution procedure is to use equations (6), (7), (8), and (9), and “work backwards” from time $ t_1 - 1 $. Since we’re working backwards, $ P_{1t+1} $ and $ P_{2t+1} $ are taken as given at each stage.

This lecture shows how a similar equilibrium concept and similar computational procedures apply when we impute concerns about robustness to both decision-makers.
The objective of the firm is to maximize $ \sum_{t=0}^\infty \beta^t \pi_{it} $.

Below, we’ll construct a robust firms version of the classic duopoly model with adjustment costs analyzed in Markov perfect equilibrium. In the general formulation, firm $ i $’s one-period loss can also include the cross-product term $ 2 u_{-it}' M_i u_{it} $. Bajari et al. (2007) apply the Hotz and Miller (1993) inversion to estimate models of this kind.

For agent $ i $ the maximizing or worst-case shock $ v_{it} $ is

$$
v_{it} = K_{it} x_t ,
\qquad
K_{it} = (\theta_i I - C' P_{it+1} C)^{-1} C' P_{it+1} (A - B_1 F_{1t} - B_2 F_{2t})
$$
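A numerical sketch of this rule (the value matrix, volatility matrix, and closed-loop law below are made-up illustrative values, and the undiscounted form of the formula is an assumption of the sketch):

```python
import numpy as np

def worst_case_K(P, C, A_cl, theta):
    """Worst-case shock rule v_t = K x_t implied by a continuation value
    matrix P and a closed-loop law x_{t+1} = A_cl x_t + C v_t:
    K = (theta*I - C'PC)^{-1} C' P A_cl  (undiscounted sketch)."""
    h = C.shape[1]
    M = theta * np.eye(h) - C.T @ P @ C
    # theta must exceed the 'breakdown point', i.e. M must be positive definite
    assert np.all(np.linalg.eigvalsh(M) > 0), "theta is below the breakdown point"
    return np.linalg.solve(M, C.T @ P @ A_cl)

# Made-up illustrative inputs
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
C = np.array([[0.0],
              [0.1]])
A_cl = np.array([[0.9, 0.0],
                 [0.2, 0.8]])

K = worst_case_K(P, C, A_cl, theta=5.0)
print(K.shape)  # prints (1, 2)
```

Setting $ \theta $ very large drives $ K $ toward zero, recovering the baseline model with no distortion.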
Recall that we have set $ \theta_1 = .02 $ and $ \theta_2 = .04 $, so that firm 1 fears misspecification more than firm 2. This is an LQ robust dynamic programming problem of the type studied in the Robustness lecture. Each firm recognizes that its output affects total output and therefore the market price. We restrict attention to equilibria in strategies of this sort, i.e., to Markov perfect equilibria (MPE).

These equilibrium conditions can be used to derive a nonlinear system of equations, $ f(\sigma) = 0 $, that must be satisfied by any Markov perfect equilibrium $ \sigma $; we say that the equilibrium $ \sigma $ is regular if the Jacobian matrix $ \partial f / \partial \sigma \,(\sigma) $ has full rank. If the players’ cost functions are quadratic, then we show that under certain conditions a unique common information based Markov perfect equilibrium exists.

In extensive form games, and specifically in stochastic games, a Markov perfect equilibrium is a set of mixed strategies for each of the players with the Markov property of memorylessness: each player’s mixed strategy can be conditioned only on the state of the game.