Economics of Risk and Uncertainty: Implications for Political Economy
Political and economic choices are made under conditions of risk and uncertainty. People do not know their future, which is subject to random events such as the weather. Furthermore, individuals and groups differ from one another, and this heterogeneity matters. Both randomness and heterogeneity require that policies and regulations be contingent on the state of nature as well as on the individuals affected. Therefore, decision theory and economics have emphasized choices made under uncertainty and risk. We need to understand some of the basic theories and their implications, and then discuss how they apply to political economy considerations.
The evolution of the economics of risk
Evolution of statistics
Analysis of risk has evolved over time. This evolution was driven by the basic realization that risk exists, by conceptualizing it, and by developing the basic tools to analyze it. The two main tools for analyzing risk are probability and statistics. Probability introduces the notions of random variables, distributions, and moments. For instance, we have the Bernoulli distribution, the normal distribution, and others, and each was a major advance in mathematics occurring over the past few hundred years. People like Bernoulli and Gauss were geniuses who changed science forever. Statistics is a set of techniques that relies on the basic concepts of probability to estimate key parameters. The main objects of statistics are abstract, and we can only estimate them. The main activities of statistics are estimation of statistical quantities (parameters of statistical distributions, like the mean and variance, and explanatory coefficients in regressions), computing confidence intervals, and prediction.
While probability is based on mathematical analysis, statistics is based on processing data. While in academia we collect data rigorously, every individual is an informal statistician, collecting and analyzing data throughout their life. People observe others’ behavior. And if you are a political candidate, voters observe your behavior and estimate whether you are ‘good’ or ‘bad’. People may not adhere to a rigorous statistical procedure, but they are estimating all the time, and they behave (e.g. vote) according to their estimates. There are two major approaches to statistics. The first, classical statistics, is the one we use most often in school. We develop hypotheses based on theory or intuition, and then collect data and test the hypotheses. The second is Bayesian statistics, which assumes that people have prior beliefs about different states of the world. These beliefs encounter data, and the combination of the prior beliefs and the observed data results in posterior beliefs. Bayesian analysis is very important in decision theory and is a realistic way to look at decision making.
Evolution of economics
Economics, for quite some time, assumed that people had the right data and performed calculations correctly (i.e. people know mathematics and statistics). This is what we mean when we say rational agents. Agents know prices, have clear mathematical objective functions, and make decisions that result in supply, demand, etc. In the face of uncertainty, agents know the distribution of outcomes, and calculations are performed correctly. Of course, economists realized that in reality people don’t operate this way, but even so, economists assumed that people behaved ‘as if’ they calculated everything in the right way. So, economic models were supposed to predict average behavior with little systematic bias (i.e. error). Behavioral economists (it would be great to read Richard Thaler’s entertaining book, Misbehaving) identified systematic biases, which led to new theories of behavior under risk.
Traditional economic models assume that people are homogeneous. But, over time, it has become clear that there are differences and that heterogeneity matters. When it comes to decision making, some people have a better capacity to perform the right calculations, while others are less capable. This may result in different choices with respect to risk. Similarly, some people have more tolerance for risk than others. Finally, people’s histories may affect their choices.
Now we will proceed to some of the evolution of the economics of risk and its implications for political economy.
Risk and Interest Rates
The first major incorporation of risk in economic analysis was done by Keynes, who in the 1930s developed the notion of a risk premium. To explain this notion, assume you take a one-year loan of $100 today at 10% interest. At the end of the year, you will owe $110. Suppose there is a 10% chance that you won’t be able to pay back the loan and interest. The lender will expect to receive
- X with probability 90%
- 0 with probability 10%
Assuming that the lender is risk neutral, his expected payout, 0.9·X, must equal the $110 he would receive without default risk:
- 0.9·X = $110, so X = $110/0.9 = $122.22
This means that he will charge an interest rate of 22.22%. The difference between this rate and the 10% risk-free interest rate is what Keynes referred to as a risk premium. In this case, the lender is risk neutral, but if the lender wants compensation for the anguish associated with the risk, he may ask for a higher rate, say 25%. This additional difference is the payment for the extra cost of the potential loss due to non-payment. It is important to note that the notion of risk premium developed by Keynes is different from the risk premium we will define later on.
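The arithmetic above can be sketched in a few lines of Python; the function name and structure are my own illustration of the text’s calculation.

```python
# Sketch of the Keynes-style risk premium from the example above:
# a $100 loan at a 10% risk-free rate with a 10% default probability.
def break_even_repayment(principal, risk_free_rate, default_prob):
    """Repayment X a risk-neutral lender must charge so that the expected
    payout equals the repayment on a default-free loan."""
    sure_repayment = principal * (1 + risk_free_rate)
    # Lender receives X with probability (1 - p) and 0 with probability p,
    # so (1 - p) * X = sure_repayment  =>  X = sure_repayment / (1 - p).
    return sure_repayment / (1 - default_prob)

x = break_even_repayment(100, 0.10, 0.10)
rate = x / 100 - 1            # implied interest rate, ~22.22%
premium = rate - 0.10         # risk premium over the risk-free rate
print(round(x, 2), round(rate, 4), round(premium, 4))  # 122.22 0.2222 0.1222
```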
In reality, differences in interest rates reflect differences in the loan’s purpose and the reliability of the borrower. Countries are major borrowers, from banks as well as the World Bank, and differences in ‘country risk’ are reflected in the interest rates they are asked to pay. Rating agencies rank national bonds, and this ranking is reflected in the interest rate charged by lenders. Countries with a higher probability of bankruptcy pay higher interest rates. Countries with political instability are likely to be assessed a higher risk premium. For example, during the recent financial crisis, countries with worse credit situations, like Greece or Spain, faced higher interest rates than countries with strong credit, like Germany. In the 1980s and 90s, several Latin American countries were in tough financial shape and were not able to repay their debts in full, which led to rescheduling of loans and country- and bank-level bankruptcies. One major role of the International Monetary Fund is to help countries deal with financial crises and recovery. Generally, the fund may help countries pay loans, but in return they need to modify their economic policy, reduce spending, and engage in other activities to improve their financial situation.
Distinction between Risk, Ambiguity, and Uncertainty
Frank Knight, in the 1920s, distinguished between risk and uncertainty. Risk occurs when you have a random outcome with known probabilities. Uncertainty occurs when you don’t know the probability of each outcome. More recently, a third concept, ambiguity, has been introduced for situations where you know only the order of magnitude of the probability of each outcome.
One way to address uncertainty and reduce ambiguity is to develop sets of decision rules that determine what to do under different circumstances. The legal system, to a large extent, establishes such rules. For example, what is the penalty if a driver runs a red light, or if the driver runs over a pedestrian? Much of the political system is devoted to developing such rules, and different groups may strive to establish rules that fit them better. For example, doctors may use their economic muscle and political influence to reduce their malpractice liability, while consumer groups will operate in the opposite direction. Establishing liability rules that determine the assignment of responsibility and penalties/awards in cases of mishap is a major legislative activity and is affected by political economy considerations. If, for example, the regulators of car safety are ‘captured’ by car manufacturers, penalties for malfunctioning products will be lighter than if consumers have larger influence.
Probably one of the most important developments in the economics of risk was game theory, developed by von Neumann and Morgenstern (1944). Expected utility became the workhorse of decision making under risk. Let’s assume that we have a random variable, X (income), with I outcomes, where the indicator i takes values from 1 to I. The probability of the ith outcome, x_i, is p_i. The expected outcome is E[X] = Σ p_i·x_i, summing over i from 1 to I.
Friedman and Savage suggested that when it comes to small stakes, people are risk loving: they will take a small gamble even though it loses money on average. This may explain high participation in lotteries. On the other hand, when stakes are high, people are risk averse and will pay to lower their exposure to risk. Most of the literature concentrates on the risk averse case.
We assume that the decision maker has a utility function U(X). We assume that utility is increasing in X but that marginal utility declines (U'(X) > 0, U''(X) < 0). The expected utility is EU = Σ p_i·U(x_i). Many scholars use expected utility to develop important concepts. One is the notion of risk premium that is in common use today. To illustrate this concept, Table 1 presents a random variable with 3 values, $1, $5, and $9, each with a probability of 1/3. The mean is 5. We assume that the utility function is U(X) = √X. The expected utility is then (√1 + √5 + √9)/3 ≈ 2.08, measured in utils. The value of the expected utility in dollar terms is called the certainty equivalent, and it is equal to $4.32 because U(4.32) = √4.32 ≈ 2.08.
The difference between receiving the mean outcome and the certainty equivalent ($5 − $4.32 = $0.68) is the risk premium, and this is the amount that a risk averse individual will be ready to pay to avoid the risk associated with the random outcome and instead receive the mean outcome.
One of the first applications of expected utility is in insurance. The risk premium is the amount that someone will pay to reduce exposure to risk. If you know that your income is the random variable X above, you will be ready to pay a premium of up to $0.68 in order to get the mean value, $5, every period instead of being exposed to risk (i.e. having an equal chance of receiving $1, $5, or $9). People have different degrees of risk aversion. Generally, people who are closer to risk neutral serve as insurers to people who are more risk averse. A larger organization, like the government, tends to be able to absorb more risk, especially when the risks are uncorrelated, and therefore can provide insurance to individuals who are more vulnerable to risk and are risk averse.
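The certainty equivalent and risk premium in this example can be verified numerically. This is a sketch of the calculation above, assuming the square-root utility function used there.

```python
import math

# Income is $1, $5, or $9 with equal probability; utility is U(x) = sqrt(x).
outcomes = [1, 5, 9]
probs = [1 / 3, 1 / 3, 1 / 3]

mean = sum(p * x for p, x in zip(probs, outcomes))           # 5.0
eu = sum(p * math.sqrt(x) for p, x in zip(probs, outcomes))  # ~2.08 utils
ce = eu ** 2      # invert U: the sure income that yields the same utility
rp = mean - ce    # risk premium: what the individual pays to avoid risk

print(round(eu, 2), round(ce, 2), round(rp, 2))  # 2.08 4.32 0.68
```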
One measure of risk is variance. It is not a perfect measure of risk, but it is easy to calculate and gives a good approximation under various conditions. The mean-variance approach assigns a price, r/2, per unit of variance, and thus the value of a risky asset is approximated as E[X] − (r/2)·Var(X).
The parameter r is a measure of risk aversion. Much of the economic literature about decision-making under uncertainty aims to characterize decision-makers by their risk aversion and alternative choices by their riskiness. There are several measures of risk aversion. The measure of absolute risk aversion indicates how much certain income decision-makers will forego to reduce one unit of risk. The measure of relative risk aversion indicates what percentage of their assets an individual will give up to reduce the risk they face by one percent. It is widely assumed that absolute risk aversion declines with wealth, and therefore those who are more well off will be more willing to take specific risks. This has significant implications, because if richer people are more likely to take on projects with higher expected profit and more risk, then, on average, they will become richer in the long run. Because the government is generally the biggest entity in most economies, it has much lower coefficients of risk aversion and therefore may take on risky projects at lower cost.
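The mean-variance valuation can illustrate why a low-r entity takes on risks that a high-r individual avoids. The numbers below are hypothetical, chosen only to make the comparison visible.

```python
# Mean-variance approximation: value ≈ E[X] - (r/2) * Var(X),
# where r is the coefficient of risk aversion.
def risky_asset_value(mean, variance, r):
    return mean - 0.5 * r * variance

# The same project (mean 100, variance 400) valued by a large,
# nearly risk-neutral entity versus a risk-averse individual.
print(risky_asset_value(100, 400, 0.01))  # government-like r: 98.0
print(risky_asset_value(100, 400, 0.20))  # individual r:      60.0
```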
In many cases decision makers must choose among several random variables. For example, a farmer must allocate a given amount of land among crops, or a portfolio manager must determine the share of different assets in a portfolio. In these cases, the decision maker maximizes the expected utility of the portfolio. Let’s consider the case of a farmer who must decide between two crops. He has L̄ acres and allocates L1 to crop 1 and L2 to crop 2. We know that L1 + L2 = L̄. Now let’s assume that the profit per acre of crop 1, denoted by π1, is distributed normally with mean μ1 and standard deviation σ1. The variance, σ1², is the standard deviation squared. Similarly, the profit per acre of crop 2, denoted by π2, is distributed normally with mean μ2 and standard deviation σ2 (and variance σ2²). The interesting point is that the profits of the two crops may be correlated. The correlation coefficient is ρ, and it is equal to the covariance of the two profits divided by the product of the two standard deviations: ρ = cov(π1, π2)/(σ1·σ2).
If a farmer aims to allocate land to maximize expected profit minus the cost of risk, the problem in our case is

max over L1, L2 of μ1·L1 + μ2·L2 − (r/2)·Var(π1·L1 + π2·L2)

subject to L1 + L2 ≤ L̄ and L1, L2 ≥ 0.
By the definition of variance, Var(π1·L1 + π2·L2) = L1²σ1² + L2²σ2² + 2ρσ1σ2·L1·L2.
Using this definition, one can develop a rule to allocate land among crops (Just and Zilberman 1983). If all the land is utilized, L2 = L̄ − L1, and the first-order condition yields

L1 = [(μ1 − μ2)/r + L̄·(σ2² − ρσ1σ2)] / (σ1² + σ2² − 2ρσ1σ2).
Suppose that crop 1 is more profitable and riskier than crop 2. Then μ1 > μ2 and σ1 > σ2. The area of the risky crop increases with the difference in expected profit per acre, but declines as the measure of risk aversion (r) increases. It also declines as the variances of the two crops increase, and increases as the correlation falls. This means that risk aversion reduces the acreage of risky crops, but when the profits of the two crops are less correlated, there is more scope to absorb risk.
We can translate these results to understand what explains the land share of crop 1, L1/L̄. It is higher the greater crop 1’s mean profit, the lower its variance, and the smaller the correlation of profits between the two crops. Since risk measured by variance grows with acreage squared, there is a limit to how much acreage can be allocated before the cost of variance overcomes the profit. However, when there are two crops and their profits are negatively correlated, their risks partially cancel one another, which allows expansion of production. Therefore, some farmers may produce a less profitable crop alongside a more profitable one if their profits are negatively correlated, in order to address risk considerations. Note that individuals who are risk neutral will not diversify their portfolio to address risk considerations.
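The allocation rule can be explored numerically. This is a sketch, not the exact formulation in Just and Zilberman (1983); the closed form below comes from the first-order condition with full land use, and all parameter values are hypothetical.

```python
# Two-crop mean-variance allocation. With L2 = Lbar - L1, the farmer
# maximizes mu1*L1 + mu2*L2 - (r/2)*Var(pi1*L1 + pi2*L2); the first-order
# condition gives a closed form for the acreage of the risky crop.
def optimal_acreage(mu1, mu2, s1, s2, rho, r, lbar):
    denom = s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2
    l1 = ((mu1 - mu2) / r + lbar * (s2 ** 2 - rho * s1 * s2)) / denom
    return max(0.0, min(lbar, l1))  # keep 0 <= L1 <= Lbar

# Crop 1 is more profitable (mu1 > mu2) but riskier (s1 > s2).
base = dict(mu1=300.0, mu2=295.0, s1=3.0, s2=2.0, r=0.05, lbar=100.0)
print(round(optimal_acreage(rho=0.5, **base), 1))   # positively correlated profits
print(round(optimal_acreage(rho=-0.5, **base), 1))  # negatively correlated: more of crop 1
```

Lowering the correlation raises the acreage of the risky crop, matching the diversification logic above.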
Expected utility has also been used in wealth management. This approach assumes that utility is derived from the overall wealth of the individual, and risk aversion is associated with the variance of wealth. This means that all activities are considered, when it comes to risk impacts, in an integrated manner, not one at a time. For example, if an individual starts with a very safe portfolio and considers a new risky choice, she is more likely to take it than an individual who starts with a riskier portfolio, because the net effect of the new choice on risk is much more significant for the individual with the initially riskier portfolio. Therefore risk managers like to balance an overall portfolio across asset classes with various risk profiles (e.g. stocks, bonds, cash).
The expected utility approach has been used to explain the behavior of various types of economic agents. It suggests that when risk averse agents face risky choices they will engage in these choices less often than risk neutral agents. For example, a risk averse farmer facing uncertainty will apply less fertilizer than a risk neutral farmer. The reason is that more fertilizer increases costs but because of uncertainty of yield, the expected utility of benefit from yield is lower. Some inputs, like pesticides, are risk reducing – so a risk averse farmer may apply more pesticide than a risk neutral farmer.
Politicians may be risk averse, like anyone else, and therefore their choices may be affected both by the expected benefit of a certain action and by the risk associated with it. More risk averse politicians may be less likely to engage in risky initiatives that may on average improve economic well-being but may also backfire. Similarly, voters may be risk averse and may prefer to vote for candidates who are known entities with a low degree of uncertainty over candidates who may be more promising but are more unknown.
Prospect Theory and the paradoxes
Recently, scientists have realized that many of the predictions of expected utility don’t fit reality. The expected utility approach is more normative, telling you what a rational person should do, than behavioral. It doesn’t address, for example, fear. An alternative approach is called prospect theory (Kahneman and Tversky 1979), and it has three elements. The first is framing, where risky choices are simplified and considered one at a time. This means that, while it may not be rational to do so, many people will take travel insurance or car insurance (even when it’s not required) rather than overall portfolio insurance. The second element is that instead of the utility function, we have a value function, V(X), where X is the gain associated with an activity (not wealth). The marginal value is positive and decreasing for positive gains. However, when it comes to losses, there is an initial big reduction in value around 0 (when we transition from a gain to a loss), but then the marginal reduction in value is smaller. The basic point is that people really don’t like to lose, but once they lose, the magnitude of the loss matters less. This idea is presented in Figure 1.
A third element of prospect theory is the probability weighting function. Individuals tend to overweight small probabilities and underweight big ones. There is a weighting function, let’s call it w(P), where w(P) > P when P is close to 0 and w(P) < P when P is close to 1. Furthermore, when outcomes are uncertain, the sum of the weights is smaller than the sum of the true probabilities: Σ w(P_i) < Σ P_i = 1. This weighting function suggests that consumers give extra weight to certainty, and once we have risky outcomes, there is a loss equal to 1 − Σ w(P_i). The weighting function is presented in Figure 2.
Once we are familiar with this notation, prospect theory suggests a formula to value risky prospects, which is V = Σ w(p_i)·v(x_i), summing over the outcomes i = 1, …, I.
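This valuation formula can be sketched in code. The functional forms below follow Tversky and Kahneman’s later (1992) parameterization of the value and weighting functions; the specific parameter values (0.88, 2.25, 0.61) are their estimates and should be treated here as illustrative assumptions.

```python
# Prospect-theory valuation V = sum_i w(p_i) * v(x_i), where x_i are
# gains/losses relative to a reference point (not total wealth).
def v(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, steeper for losses (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes_probs):
    return sum(w(p) * v(x) for x, p in outcomes_probs)

print(w(0.01) > 0.01, w(0.99) < 0.99)  # True True
# Certainty effect: a sure $5 is valued above a 50/50 gamble on $10 or $0,
# even though both have the same expected value.
print(prospect_value([(5, 1.0)]) > prospect_value([(10, 0.5), (0, 0.5)]))  # True
```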
In his book Thinking, Fast and Slow, Kahneman (2011) distinguished between elaborate, slow decision making, which is used to address complex issues and requires much energy, and instinctive, immediate decisions, which are made for most day-to-day choices. In a way, individuals take into account the mental cost of their efforts, and when it comes to risky choices that are not complex, people will use simple algorithms to determine their choice. For example, if an individual must choose between two similar alternatives, they may pick the one with the highest expected reward. But if the alternatives are dissimilar, they will use more complex algorithms, closer to expected utility. The reliance on simple decision rules for similar alternatives and more complex rules for dissimilar ones may result in choices that seem logically inconsistent.
Prospect theory is a cornerstone of behavioral economics, which assumes that people are not fully rational and that their behavior is affected by cognitive limitations. Behavioral economics suggests that the behavior predicted by standard economic models reflects idealized agents, ‘econs’, rather than humans. There aren’t many applications of behavioral economics to political economy, but obviously politicians are as human as anyone else, and their choices are affected by their cognitive limitations, as well as those of the public. Thus the negotiation between different parties reflects the limitations of policy makers and interest groups in assessing the impact of policy outcomes and their likelihood. For example, loss aversion may lead to a preference for policies that prevent loss rather than those that may produce gains. The instinct to preserve is sometimes stronger than the instinct to adapt and change. There is much evidence that people change policies and institutions after a crisis occurs, rather than beforehand, and that crises are the biggest trigger for reform. For example, the introduction of water trading and water use reform most often occurs during periods of drought rather than in anticipation of drought. Infrastructure investments are frequently delayed because they require expenditures that are viewed as ‘losses’ by the groups that pay for them, such as taxpayers, but are introduced after a disaster occurs. Effective politicians may get things done by being alarmist. For example, the US government overestimated the strength of the Soviet Union in order to garner more resources for defense. Another element of prospect theory that may operate in the other direction is the tendency to overweight small probabilities. Proponents of technologies or projects that have a low probability of long-run success may be able to push their projects, especially if they provide a solution to a major problem that would otherwise result in big losses.
For example, most countries have been much more generous in financing research on cancer treatments with a low probability of success than in financing more modest projects with higher expected value for society.
Another example from prospect theory concerns the importance of framing. Policymakers would like to reduce choices that may be quite complex to their core essentials. And policymakers who are able to present choices in a starker framework are more likely to be successful than ones who present a nuanced view. The basic point is that there is much potential in applying prospect theory to political economy challenges.
Prospect theory imports concepts from psychology into economics. Another important area of psychology that is starting to affect economics is the notion of perceived self-efficacy. It is based on the recognition that individuals are different and that their ability to perform tasks is partly based on belief in their own abilities, and this belief depends both on their intrinsic abilities and on social norms, legal systems, others’ expectations, etc.
Prospect theory raises the very important consideration that the probabilities perceived by decision-makers may not be the same as the “true” probabilities. Still, it is based on experiments where people are given probabilities, and their behavior suggests that they incorporate them in a biased manner. Knowing what probabilities people actually use, and how these change over time, is a different line of research. We can hardly know the true probabilities of real-life events, but statistics can obtain reliable estimates (in principle, the best possible estimates), and these can be referred to as objective probabilities. Individual decision makers, however, have their own subjective probabilities. Sometimes these are identical to objective probabilities, and sometimes they are not. One of the most important elements of political research is to understand the evolution of subjective probabilities and what can affect them. Campaigning, for example, is based on influencing subjective probability. Since candidates are elected to address unknown or uncertain future phenomena, the subjective probability that voters hold about candidates’ ability to deal with challenges is a major influence on their vote. Campaigning provides the opportunity to educate voters about candidates, and candidates’ behavior provides evidence that allows voters to update their opinions.
Let’s start with a basic theory of learning that mathematicians developed and look at how it evolved. The key to this theory is the Bayesian equation. A decision maker has a probability distribution over a key variable essential for a decision. Denote this variable as A. For example, whether a politician is crooked or not. When A denotes honesty, A = 1 when the politician is honest and A = 0 if dishonest. Every decision maker has a prior distribution over this variable. So let P(A) be the prior distribution of honesty. Another variable is B, which provides some evidence relevant to A. For example, B can be whether the politician is rich or poor. We will say that the politician is rich when B = 1 and poor when B = 0. The third element is the conditional distribution of B given A, denoted P(B|A), which in our case is the probability that the politician is rich or poor given that he is honest or dishonest.
Now, let’s suppose that the voter has a prior belief that the politician is honest with probability 0.8, which means P(A=1) = 0.8 and P(A=0) = 0.2. Now suppose that the voter learns that the politician is rich. This new information will result in updating the belief about the politician. The updated belief is called the posterior distribution, and in our case it is P(A=1|B=1). The Bayesian formula computes the posterior distribution from the prior distribution P(A) and the conditional distribution of the evidence given the belief, P(B|A). In our case,
- the probability that the politician is rich given that he is honest is P(B=1|A=1) = 0.5
- the probability that the politician is poor given he is honest is P(B=0|A=1) = 0.5
- the probability that the politician is rich given he is dishonest is P(B=1|A=0) = 0.7
- the probability that the politician is poor given he is dishonest is P(B=0|A=0) = 0.3
The Bayes formula in the abstract is P(A|B) = P(B|A)·P(A) / P(B).
To calculate the posterior probability that the politician is honest given that he is rich, we apply the formula, which basically states that the posterior probability that the individual is honest given that he is rich is the probability that he is honest and rich, which is the numerator, P(B=1|A=1)·P(A=1), divided by the probability that he is rich, P(B=1). In particular, we need to calculate P(A=1|B=1) = P(B=1|A=1)·P(A=1) / P(B=1). To compute the numerator, we know that P(B=1|A=1) = 0.5 and P(A=1) = 0.8, which means that the probability the candidate is honest and rich is 0.5 × 0.8 = 0.4. Now we must compute the denominator, P(B=1), the probability the candidate is rich. The probability that the candidate is rich is equal to the probability that he is rich and honest plus the probability that he is rich and dishonest. The first term (rich and honest) is the numerator, equal to 0.4. The second term (rich and dishonest) is P(B=1|A=0)·P(A=0) = 0.7 × 0.2 = 0.14. Thus, P(B=1) = 0.4 + 0.14 = 0.54. And therefore, P(A=1|B=1) = 0.4/0.54 ≈ 0.74.
So this new evidence changes the belief from 0.8 to 0.74. Now, let’s suppose instead that the probability that the politician is rich given he is honest is 0.1, and the probability that he is rich given he is dishonest is 0.8. In this case, P(A=1|B=1) = (0.1 × 0.8)/(0.1 × 0.8 + 0.8 × 0.2) = 0.08/0.24 ≈ 0.33.
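The two updates above can be checked with a few lines of code; the function name is my own.

```python
# Bayesian updating of the belief that a politician is honest, given
# the evidence that he is rich, using the numbers from the text.
def posterior_honest(prior, p_rich_if_honest, p_rich_if_dishonest):
    """P(honest | rich) via Bayes' rule."""
    numerator = p_rich_if_honest * prior                    # P(rich and honest)
    p_rich = numerator + p_rich_if_dishonest * (1 - prior)  # P(rich)
    return numerator / p_rich

# First case: prior 0.8, P(rich|honest)=0.5, P(rich|dishonest)=0.7
print(round(posterior_honest(0.8, 0.5, 0.7), 2))  # 0.74
# Second case: being rich is strong evidence of dishonesty
print(round(posterior_honest(0.8, 0.1, 0.8), 2))  # 0.33
```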
This suggests that evidence can totally change your perception of a candidate. Obviously, Bayes’ theorem is based on the assumption that decision makers have priors, know the conditional probabilities, and then compute the posteriors. But the basic philosophy that evidence changes perception is key for campaigning. Campaigning provides either positive or negative information about candidates, and every piece of new information may change people’s perceptions. A candidate may start with a good reputation, but the opposition may destroy it through negative news.
The Bayesian formula may represent an ideal way to change perspectives given solid conditional distributions, but in reality things are different. People may not have sound conditional distributions, and people may give different weight to different evidence. There is a large literature on behavior and judgment that includes how evidence is incorporated into judgment (Einhorn and Hogarth 1981). One interesting phenomenon is called recency, where new evidence is weighted more heavily, especially when the recent evidence is internally consistent. For example, if a candidate that you considered good has a streak of bad performances, you change your mind. Primacy occurs when you discount recent evidence. It is more likely to occur when the recent evidence conflicts with itself and you revert to past beliefs. The Bayesian model assumes that posterior beliefs are proportional to the product of priors and evidence. The Bayesian model has been generalized so that the weights of the priors and the evidence may differ.
Subjective and objective probability – Human capital and Self-efficacy
The prior probabilities held by individuals and the mechanisms through which they update them may vary by individual. Some may have a stronger quantitative sense and update probabilities in a way close to the Bayesian model, while others operate using different procedures. So subjective probability varies among individuals, and there are significant differences both in people’s beliefs and in how these beliefs are influenced by evidence.
In this regard, understanding the role and structure of the media really matters. A strong national media, in which everyone views the same evidence, leads to different outcomes than a polarized media environment, where people can select the information they are exposed to. One of the big issues that has emerged in the US is the notion of alternative facts, where people allow their priors to determine what types of media they view, and this determines their posteriors. To some extent, the ability to update information and change priors is then much more limited, and changing the perspective of people from opposing groups becomes a big challenge. In many countries, the government aims to control the media in order to present new information in a way that is favorable to the government and results in more favorable updating of priors.
When different groups of the public receive information from segregated sources, with less common information, the situation can become volatile. On one hand, differences in beliefs and knowledge may result in contradictory policy ideas, reduced capacity for compromise, political stalemate, or general discontent with policies created by others. At the same time, if reality changes and contradicts the beliefs of different groups, a crisis may emerge. For example, the fall of the Soviet Union and the reform in China were, to a large extent, results of people responding to a reality that contradicted the official information and propaganda. More drastically, losses in war that shatter many dreams can result in drastic regime changes.
While much of economics assumes homogeneity and full information, there is great recognition that heterogeneity is a key factor that explains phenomena like technology adoption and differences in performance and income distribution. There is much work to be done to understand subjective probability – what type of information people know – but the literature on human capital is a good start. Schultz (1975) argues that a key component of human capital is allocative ability – the ability to make decisions in the context of markets – which relates directly to the ability to obtain and process information. He suggests that education is a key factor that enhances allocative ability, but there are obviously differences between individuals, which result in different outcomes. The literature on human capital views it as a dynamic stock that grows through education and knowledge accumulation and depreciates with time. Part of this build-up of knowledge is the accumulation of information, which individuals process to generate their beliefs. There is growing recognition that people seek different types of information for decision making. For example, Wolf et al. (2001) find that actors (be they farmers, insurance salespeople, producers, etc.) rely both on informal sources (gossip, conversations, etc.) and formal sources (publications, media, etc.) to make choices. More educated individuals, as well as larger decision making units, may use primary data and analyze it to obtain their own predictions. In other situations, decision makers rely fully on predictions from consultants, or even hire consultants to make decisions. The bottom line is that different sources of information contribute to forming subjective probabilities, and this is one area we need to understand better.
One key element of subjective probability is self-perception. There is a growing literature in marketing showing that a key consideration in adoption of a product is whether it fits the individual. This idea of fit also applies to estimation of outcomes. If you know that you are a slow typist, then you will take official predictions on the effectiveness of new computers with a grain of salt, and may thus stick to your pencil. So the notion of subjective probability should include how people adjust estimates to self-perception. Psychologists introduced the notion of perceived self-efficacy as a major determinant of decision making and action. The popular notion of self-esteem, and the recognition that socio-economic background affects aspirations and risk-taking, originated with the notion of self-efficacy. There is evidence that background and other factors that shape perceived self-efficacy affect technology adoption (Wüpper and Lybbert 2017). It may also be useful to view perceived self-efficacy as a key element of human capital that is updated by experience, as well as by policies and new information. The main channel through which this self-perception affects choices is subjective probability. For example, the emergence of role models or mentors who build others' self-confidence can change people's subjective probability, namely their belief in their ability to accomplish tasks and goals.
The notion of perceived self-efficacy has important political implications. As mentioned before, it underlies programs that provide additional support to enhance the performance of disadvantaged or marginalized groups. While these programs help such groups make better decisions, they may also make them more active politically. From a political economy perspective, it is clear that political parties unlikely to be supported by the empowered group will not be enthusiastic about such programs.
Asymmetric information and Principal-agent problems
One of the most important developments in the economics of risk was the notion of asymmetric information, introduced as the importance of heterogeneity in economics became recognized. Asymmetric information may arise because some parties have more information than others, or because some items differ in quality from others. Many of the concepts were introduced by Kenneth Arrow in his work on medical insurance, and insurance in general. The basic insight is that in many transactions the parties do not have the same information. The party with better information may take advantage of that knowledge, while the party that knows it lacks information may modify its behavior so as not to be taken advantage of.
One case is adverse selection, where a buyer is unsure about the quality of the product being acquired. The classic example is the used car market (the 'lemons' problem): because buyers cannot distinguish good cars from bad ones, they are suspicious and offer only the price of a car of average quality, which drives the better cars out of the market. To address these problems, people have developed several mechanisms, including screening (e.g. pre-testing of cars or candidates) and signaling of quality (e.g. degrees from universities). Obviously, adverse selection is important in politics. Political debates are one form of screening, while name recognition and reputation are important signals that help candidates get elected.
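The unraveling logic of the lemons market can be made concrete with a small simulation. The numbers and behavioral rules are illustrative assumptions: car values are known to sellers only, sellers withdraw cars worth more than the going offer, and risk-neutral buyers offer the average value of the cars still on the market.

```python
# A minimal sketch of Akerlof-style market unraveling under adverse selection.
# Assumptions (illustrative): sellers know their car's value and withdraw it
# if the offer is below that value; buyers, unable to tell quality apart,
# offer the average value of the cars remaining on the market.

def unravel(qualities):
    history = []
    while qualities:
        price = sum(qualities) / len(qualities)  # buyers pay expected quality
        if history and price == history[-1]:
            break  # price stable: market has settled
        history.append(price)
        # sellers whose cars are worth more than the offer exit the market
        qualities = [q for q in qualities if q <= price]
    return history

# ten cars worth 100, 200, ..., 1000: the price ratchets down each round
history = unravel([100 * i for i in range(1, 11)])
print(history)
```

Each round of suspicion pushes the best remaining cars out, so the price falls step by step until only the worst car trades – the market failure that screening and signaling mechanisms try to prevent.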
Another case of asymmetric information is moral hazard, for example in insurance, where people who take out insurance policies may take more risk because they are protected. The insurer is aware of this possible behavior and therefore adjusts rates accordingly, or develops mechanisms like deductibles and rate increases after claims to discourage risky behavior.
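The role of the deductible can be illustrated with a toy calculation. All numbers (effort cost, accident probabilities, deductible size) are my illustrative assumptions, not from the text.

```python
# An illustrative sketch (assumptions mine) of how a deductible mitigates
# moral hazard: with full coverage the insured has no reason to take costly
# care, but a deductible leaves some accident cost private, restoring the
# incentive to be careful.

def best_effort(deductible, efforts=(0.0, 0.5, 1.0)):
    def expected_cost(e):
        p_accident = 0.4 - 0.3 * e               # care reduces accident risk
        return e * 1.0 + p_accident * deductible  # effort cost + own exposure
    return min(efforts, key=expected_cost)

print(best_effort(0.0))  # full coverage: zero care is cheapest
print(best_effort(5.0))  # with a deductible, full care is cheapest
```

With a zero deductible the insured bears no accident cost and chooses zero effort; once part of the loss is private, taking care becomes worthwhile.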
Once politicians are elected, many of their activities are hidden from the public. They can engage in activities that serve themselves but are not ideal from the public's perspective; to some extent, their actions are subject to moral hazard. The principal-agent problem describes a situation where an agent (e.g. a government) makes decisions on behalf of its principals (e.g. the public), and the key point is that many of the agent's decisions are not observed by the principals. For example, the agent, say the ruler of a developing country, can grant an oil company the right to drill for oil in the country and in return divert a large share of the payment to an offshore account. In the principal-agent framework, the principals develop tools that aim to maximize social welfare and control abuse by agents.
Mathematically, this problem is solved by backward induction in two stages, one nested in the other. We assume that the principals determine the rules of the game, and the agent then takes the rules as given and makes choices. For example, the US has a Constitution and separation of powers, in addition to requirements for transparency and even freedom of information and of the press, which cumulatively aim to reduce abuse by agents. Of course, this is a dynamic problem and we simplify it here, but the idea of establishing rules and letting the government operate within them is the important point.
The principals know that the agent solves an expected utility (EU) maximization problem, maximizing expected utility from social well-being (S) and personal gain to the agent (G). Both S and G are determined by the agent's policy tools (X and Y); one may be a decision on tariffs, another on taxation. We use the term expected utility because random variables affect the outcome and provide some cover to the agent. The agent's decision making can be constrained by rules, denoted R, that are established in advance to protect the principals. Thus, the decision problem of the agent is

max_{X,Y} EU(S(X,Y), G(X,Y))  subject to  (X,Y) ∈ R,
where R restricts the set of policies available given the legal and/or constitutional environment. For example, regulations that constrain the ability of politicians to transfer money to other countries reduce their ability to game the system. The principals in turn maximize their own expected utility, EV(S, R). Note that the utility of the principals differs from that of the agent: it does not take into account the gains of the agent, and it may be reduced by the cost of regulation. So the principals' decision problem is

max_R EV(S(X(R), Y(R)), R),

where X(R) and Y(R) denote the choices the agent makes under rules R.
Since the agent determines X and Y at every level of regulation, and the principals can calculate X and Y by solving the agent's problem, the principals choose the R that maximizes their own welfare. The technical solutions to principal-agent problems are quite complex, especially when there are issues of uncertainty and heterogeneity. It is clear that improved information on the behavior of, and constraints facing, agents is likely to result in outcomes that reduce abuse. It is also clear that in reality constitutions and rules of the game are always changing: agents may loosen the constraints they face, and the public may demand tighter constraints after periods of abuse.
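The two-stage backward induction can be sketched with a brute-force grid search. All functional forms below (S = 2X − X² − Y, agent utility 0.5S + G, a rule R that caps diversion Y at 1 − R, and a quadratic enforcement cost) are my illustrative assumptions, and uncertainty is suppressed so expected utilities reduce to utilities.

```python
# A stylized sketch of the nested principal-agent problem described above.
# Functional forms are illustrative assumptions, not from the text.

def agent_choice(R, grid):
    # Stage 2: given rules R (here, a cap on diversion Y), the agent picks
    # the policy X and diversion Y maximizing utility from social welfare S
    # and personal gain G.
    best = None
    for X in grid:
        for Y in grid:
            if Y > 1.0 - R + 1e-9:   # rule R caps Y (tolerance for float grid)
                continue
            S = 2 * X - X**2 - Y     # social welfare: concave in X, hurt by Y
            G = Y                    # agent's private gain from diversion
            U = 0.5 * S + G          # agent values both S and G
            if best is None or U > best[0]:
                best = (U, X, Y, S)
    return best

def principal_choice(grid):
    # Stage 1: the principal anticipates the agent's response to each rule R
    # and picks the R that maximizes her payoff EV(S, R) = S - cost of R.
    best = None
    for R in grid:
        _, X, Y, S = agent_choice(R, grid)
        EV = S - 0.8 * R**2          # regulation is costly to enforce
        if best is None or EV > best[0]:
            best = (EV, R, X, Y)
    return best

grid = [i / 40 for i in range(41)]   # 0.000, 0.025, ..., 1.000
EV, R, X, Y = principal_choice(grid)
print(f"R*={R:.3f}  X*={X:.3f}  Y*={Y:.3f}  principal payoff={EV:.3f}")
```

Under these assumptions the agent always diverts up to the cap, so the principal trades off tighter rules (less diversion) against enforcement costs and settles on an interior level of regulation – the qualitative lesson of the framework.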
In the early days, the power of the ruler was 'given' by God. But as the idea spread that the power of the ruler originates with the people, history witnessed many changes, revolutionary and otherwise, that increased the accountability of government and reduced its arbitrariness – for example, the move to constitutional monarchy, where power lies with elected officials whose terms are limited. After Watergate, the power of the US presidency was curtailed. Understanding asymmetric information, and the principal-agent problem that is part of it, gave rise to an emphasis on systems based on establishing limits to power and on monitoring and enforcing those limits.
The principal-agent problem emphasizes the challenge of designing government structures. On one hand, it is desirable to have governors with power and flexibility; on the other hand, checks and balances are needed. Unconstrained power may lead to transformational change: Napoleon changed Europe and Frederick the Great built Prussia, but Napoleon's rule also resulted in enormous calamity. So politics is always a game between rulers' freedoms and mechanisms that restrain rulers and control their actions. Political economy suggests that because of the complexity of society, constraints on government must recognize the multiple dimensions of its activities: policies may benefit some regions or groups, but there must be limits on the ability to harm others, especially the poor.
Eaton, Jonathan, Mark Gersovitz, and Joseph E. Stiglitz. "The pure theory of country risk." European Economic Review 30, no. 3 (1986): 481-513.
Einhorn, Hillel J., and Robin M. Hogarth. "Behavioral decision theory: Processes of judgement and choice." Annual Review of Psychology 32, no. 1 (1981): 53-88.
Friedman, Milton, and Leonard J. Savage. "The utility analysis of choices involving risk." Journal of Political Economy (1948): 279-304.
Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.
Kahneman, Daniel, and Amos Tversky. "Prospect theory: An analysis of decision under risk." Econometrica (1979): 263-291.
Von Neumann, John, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton: Princeton University Press, 1944.
Schultz, Theodore W. "The value of the ability to deal with disequilibria." Journal of Economic Literature 13, no. 3 (1975): 827-846.
Wolf, Steven, David Just, and David Zilberman. "Between data and decisions: the organization of agricultural economic information systems." Research Policy 30, no. 1 (2001): 121-141.
Wüpper, David, and Travis Lybbert. "Self-efficacy and economic behavior." Annual Review of Resource Economics 9, no. 1 (2017).