On November 22nd, Mr. Singh, India's ambassador to France, came to talk to students about the Indian economy, its challenges and its opportunities.
During his talk, the ambassador insisted on the importance of seeing how complex India is before even trying to understand this huge country. India has complex societies (23 official languages, thousands of unofficial ones), complex economies and complex politics. Democracy is an important challenge: the electorate numbers 730 million, and any election requires a million voting booths across the country. Votes are electronic and public information is widely available, creating very large expectations from the population around election times.

A current challenge is that recent economic growth (around 5%) is too low at the moment to draw people out of poverty. Even though India has a domestic-demand-driven economy, integration with the rest of the world turned out to be deeper than previously thought: credit availability has been squeezed, constraining production and growth. The currency has been extremely volatile recently because of high inflation, a high current account deficit (around 4% of GDP) and investors' short-term withdrawal of capital from emerging markets. But the rupee has recovered since the end of the summer and foreign direct investment (FDI) in the country remains high. The government of India notes that the fundamentals of the Indian economy remain strong in terms of demographics (a large working-age bracket, availability of skilled workers); international economic integration (flows of technology, significant entrepreneurship, globalized Indian firms); financial systems (investment is 35% of GDP, savings are 30%); and democracy (building consensus with society, walking in the same direction).

As ambassador to France, Mr. Singh went on to give a few facts about the relationship between India and France. For example, France is India's strongest partner in space (joint satellites); there are also strong partnerships in defense (production and supply) and nuclear energy. More than 700 French firms are present in India. One weak area is bilateral trade with France, where imbalances exist and levels are low compared to trade with other major European economies.

Mr. Singh told us that he was impressed by the heterogeneity of origins of the students in the room, as well as by the quality and pertinence of the questions asked. Questions concerned a wide range of topics: foreign companies in India, food inflation, FDI and retail, health and education, relations with China, the Green Revolution, bureaucracy, growth and redistribution, safety for women, the job market... We met Mr. Singh after his talk to ask him our own questions about India.

Could you give us further details on Indian economics?

The fundamentals of the Indian economy are very strong. Growth in any society is based on resources, capacity and opportunity. Looking at these aspects, India has the necessary tools to pursue it. From a demographic point of view, there is an adequate availability of technically trained manpower to provide resources for growth. India has a huge unexplored demand. Nowadays, there are people who do not have enough income; as incomes grow, this unexplored demand will grow as well. The capacity of Indian firms and management to engage with the world economy has increased. There have been investments from abroad, experience abroad, engagement with international trends and technology development. We have to remember also that FDI has increased by 30% from last year, and it is an indicator of how investors are looking at the fundamentals of the Indian economy.

What do you think about the future of the Indian economy?
We need to generate significant employment, and we cannot rely just on the services sector, even though it makes a good contribution in terms of GDP and export revenue. Also, we cannot rely only on agriculture. There is already a lot of population pressure in the agricultural sector, and the amount of land per person devoted to agriculture is very small. The scope for further productivity increases supports the assessment that we need to reduce the pressure we put on agriculture. To do this we have to pull people into the manufacturing sector, and we have to build the capacity of the manufacturing sector.

What is your personal opinion about the Sen vs. Bhagwati debate?

Growth and redistribution are equally important. We need to have conscious policies to ensure that the benefits of growth reach all segments of the population; otherwise, this does not happen at the necessary levels. For instance, we have the National Rural Employment Guarantee scheme: a certain number of days of employment are guaranteed to one person in every family. We also have the National Food Security Bill. Or consider the right to education: it leads to empowerment, which leads to opportunities. In India, children under 14 get free compulsory education.

What do you think about the National Food Security Bill?

It is important to bring in a policy like this to ensure that even at the bottom segment of society there is adequate nutrition. I don't doubt there could be some inefficiency in the mechanism. But I guess the challenge for the government is to address those inefficiencies and to make sure that the delivery mechanism works well.

What do you think about disaster management in India? There have been big gains, like the fact that a lot of people were saved during the last cyclone, Phailin, but on the other side there are also challenges; I am referring to the fact that 115 people were crushed to death during a stampede in Madhya Pradesh.

In India things are complex; however, over the last years we have built up an institutional framework to deal with these problems. We have, for example, the National Disaster Management Authority. Cyclone Phailin was anticipated and the government prepared for it adequately: the Indian system managed to evacuate more than 700,000 people, and all of that was done really well. On the other side, the preparation for the last stampede was not adequate. But take, for instance, the Kumbh festival: over about a month, 40 million people came together in one place and there were no incidents. So, to conclude, the Indian system has proven it has the capacity to manage large numbers when proper effort is put into the preparation.
Microeconomic Applications: Interview with Anne Perot, B. Medaglia and Christopher Sandmann (1/29/2014)

Introduction

Collusive behaviour, ranging from fixing market shares to abuses of dominant position, has long given rise to concern among economists. At the same time, research examining the feasibility of, and empirical evidence on, such behaviour ranks amongst the most exciting taking place in economics. What kinds of environments support collusive behaviour, and how can welfare losses be estimated? These questions are relevant not only to IO research but also to institutions such as France's Autorité de la concurrence. Consequently it was a great pleasure to welcome someone as distinguished as Anne Perot to the TSE Business Talks. She is not only a former vice president of the French competition authority, but also a former Paris 1 and ENSAE full professor, and now a partner at the MAPP consultancy.

Case presented during the TSE Business Talk on October 17th

In June 2012, Groupe Casino, previously owner of 50% of Monoprix SA, acquired the additional 50% of shares held by Galeries Lafayette. While the acquisition had long been consented to by both companies, competition issues were at the core of the debate within the competition authority, which had to decide on the takeover. This is where MAPP, a Paris- and Brussels-based consultancy focusing mainly on competition issues, came into play. By nature, the two retailing firms are direct competitors, and this proved especially true within the city of Paris. Hence Groupe Casino asked the consultancy to perform an economic analysis of whether the market power held by the new company was too big. How would MAPP answer this question in order to convince the competition authority? Clearly both Casino and Monoprix had a very high density of supermarkets within Paris. However, their ability to exert market power depended heavily on the competitive pressure imposed by hypermarkets outside of Paris. If customers were willing to consider rather distant hypermarkets when buying food, then the market power held by Casino and Monoprix would not be sufficient to reject the plan. To answer this question, MAPP used a wide range of data, including Ipsos surveys, the Nielsen database and price indices from Monoprix and Casino themselves. The consultants went on to define a utility function that would include age, family situation and location within Paris as parameters in order to derive demand functions. The behaviour of consumers would then be estimated by applying a logit model. The results were quite illuminating. Based on the analysis, Casino convinced the competition authority to consent to the takeover. At the same time the authority had a powerful tool at hand to assess in which areas the market power of the new company would exceed an acceptable level. Thus in the end Casino was required to sell 55 supermarkets, of which 53 were located in Paris.
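To make the methodology concrete, here is a minimal sketch of a store-choice logit of the kind described above, estimated by maximum likelihood. Everything in it is illustrative: the data are simulated, the coefficients are invented, and the specification is only loosely inspired by the case, not MAPP's actual model.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Simulated data: 500 consumers choosing among three store types
    # (0 = Casino supermarket, 1 = Monoprix, 2 = hypermarket outside Paris).
    n, J = 500, 3
    dist = rng.uniform(0.5, 10.0, (n, J))     # travel distance to each store (km)
    price = np.array([1.05, 1.10, 1.00])      # hypothetical store price indices
    age = rng.uniform(20, 80, n)              # consumer age

    true_beta = np.array([-0.4, -8.0, 0.02])  # distance, price, age x price

    def utility(beta):
        # Deterministic utility with distance, price and an age-price interaction.
        return (beta[0] * dist
                + beta[1] * price[None, :]
                + beta[2] * (age[:, None] * price[None, :]))

    def choice_prob(beta):
        v = utility(beta)
        v -= v.max(axis=1, keepdims=True)     # for numerical stability
        e = np.exp(v)
        return e / e.sum(axis=1, keepdims=True)

    # Simulate observed store choices from the "true" parameters.
    p = choice_prob(true_beta)
    choices = np.array([rng.choice(J, p=pi) for pi in p])

    def neg_loglik(beta):
        pr = choice_prob(beta)
        return -np.log(pr[np.arange(n), choices]).sum()

    res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
    print("estimated coefficients:", res.x)

With estimated demand in hand, one can simulate counterfactual prices or diversion between stores, which is the kind of evidence a competition authority can weigh when deciding which outlets must be divested.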
Interview

Uncovering secret collusion and abuse of market power - isn't that rather the plot of a good detective story than formal economic theory? How and why do these investigations usually begin?

Well, it is sure that detecting a cartel has something to do with a detective story. But you may know the policy implemented some ten years ago in France that we call the leniency program. The leniency program guarantees total immunity from sanctions to firms that are themselves members of the cartel and cooperate with the competition authority, in order to see who the other firms involved in the cartel are, how the cartel works and what the increase in prices was, for instance. This leniency program is one of the main tools used to detect secret cartels. When a firm comes to the competition authority to say "I am a member of a cartel and I will tell you the names of the other members involved", usually the competition authority will request further information on the form taken by this cartel. Very often the competition authority will let the cartel keep going until they have sufficiently identified the individuals participating in it; once they have identified the people involved, they will come out and stop the cartel, and then cooperation between the authority and the firm can begin. But you see there is still a little bit of a detective story.

Do you think that firms often adopt collusive behavior or other types of anticompetitive practices? Are there estimates of welfare losses?

Well, it is difficult to say, because of an important feature of the leniency program. The leniency program is a very efficient tool for discovering a cartel. However, if you are involved in a cartel then you benefit from it, because it allows you to have higher prices and higher profits. This trade-off only changes when it is less profitable to remain in the cartel than to stop it, and if this happens it is because the cartel is less profitable than it used to be. So right now in the economy there are probably a lot of cartels that work very well which we do not observe, because no one involved has an incentive to stop them.

Are sanctions credible? Why can't they be a sufficient incentive for companies not to collude or abuse market power?

There has been a lot of work in recent years among economists and academics trying to see whether the fines imposed by the European Commission are sufficient to discourage anti-competitive behavior. These studies are not totally consistent with one another. Some say that fines are not sufficient to give the appropriate incentive to behave in a competitive way; others find that fines are sufficient. What we observe is that a good way to tell whether fines are sufficiently high is to see whether the same firm is detected several times in a cartel. If that is the case, it means that the fine imposed the first time was clearly not enough to convince the firm to behave in a pro-competitive way. Here we would need a really rigorous analysis of which firms repeat anti-competitive practices and are punished two, three or four times, in order to get an idea of whether or not these fines are set at a sufficiently high level. Secondly, there is a discussion in Europe over whether we should favour a policy which would impose not monetary sanctions but jail sentences on the people who are responsible, that is, the commercial directors, for instance, who are in charge of prices. These people clearly have a huge responsibility. In the U.S. you have jail sentences for the people who are responsible in cartel cases, but not for abuse of dominant position. And this is clearly a very strong incentive. Thus as economists we have to ask ourselves whether we prefer to take value from the firm, which has to pay a fine, or to put a person in jail and possibly destroy them. This topic has been discussed in many panels.
One of those panels involved the competition authority in England. They argued that the standard of proof required to show that a person should go to jail was so high that in fact it was never applied. But if you have a policy which is impossible to implement, so that in the end nobody goes to jail, and at the same time you have no important financial sanctions, you have lost the two major sanctions that make an impact.

What industries do you find most often involved in anti-competitive behavior?

In the recent past, telecoms. For cartels in general, I would say base chemical products or raw materials; that is, all products that are not differentiated. Since the goods are homogeneous, the market is characterised exclusively by price competition, with no competition through product differentiation. This is very bad for profits and gives an incentive to cartelise the industry. Secondly, we see a lot of cartels in crisis periods, and of course crises do not affect all sectors equally. Some sectors, for instance, are protected by the fact that there is innovation. In fact, in a very innovative sector there is almost no collusion. For example, if you work on cancer products, then you do not compete with another firm specialised in, say, infectious diseases. This is why in the pharmaceutical industry you have a lot of abuse of dominant position, but firms do not seem to collude much, since they work on very differentiated products.

Is there a link between the number of firms involved in a cartel and its effectiveness? What is important for the success of a cartel?

Not too many firms! There is an academic paper in game theory whose title is "four are few and six are many." This paper shows that when there are four competitors in the market it is relatively easy to implement a cartel policy, to watch what the others are doing and to punish free-riding firms. Whereas when there are many firms, and six firms can be a lot, then it is very hard to make the cartel successful. If, on the other hand, you are two firms among, say, ten of equal size, your market share is simply too small to make the cartel work. All the others will price at a lower level, take your market share and force you to exit the market. This is the kind of trade-off you face. Thus the ideal situation for a cartel is the occurrence of not too many oligopolists, with all of the oligopolists involved in the cartel.

Do you feel that there are still prejudices against women in economics?

I cannot speak generally. I have had three different professional lives: first as an academic, second in the competition authority, now in a consultancy. What I can say is that academic life is very, very hard for women. Why? Between the ages of 30 and 40, in order to be taken seriously in the field of research you must be prepared to go to many conferences, and with small children this is not easy at all. But I think this should be a problem for men who have small children too. From what I observed at Paris 1, I can say that below the position of professor, that is, among assistant professors for instance, the gender ratio was almost half and half. But at the level of professor, there were 50 professors in the Economics department and we were four women, and I think this is really a small number. After that, the competition authority was a very egalitarian world, probably because in law there are many more women than in economics. In law it was very common to find women everywhere, at any position, including vice president.
Before me there were other female vice presidents, and alongside me there was another woman who was also vice-president. But for business lawyers and consultants, I think the problem for women is very hard to solve. I see it with the young women we have at MAPP. For instance, we have a senior consultant, a young woman who had a child nine months ago. Sometimes she is involved in a project where we have a deadline in two days. Then she can spend 48 hours, with very short nights, without seeing her child. This is not easy at all. But again this is true for any young parent, including men.

What kind of profile is your consultancy MAPP looking for?

In terms of professional skills, we want people who are used to IO arguments and are familiar with the basic literature, that is, the type of models you will find in Tirole or Massimo Motta. We do not necessarily hire perfect econometricians, but people who have the ability to switch from theory to the real world. People who are used to saying, "this is compatible with my microeconomic story and this is not, so what should I do with this piece of evidence that does not fit with the theory I had in mind at first?"

What is the best way to apply?

Just send us an application letter and a CV. We take a lot of interns, and in fact we do not want to hire anyone who has not completed an internship with us. Under labour market regulation we can only offer six-month internships within the same academic year, but we do not want anything shorter.

Further reading:

A simple model of imperfect competition, where 4 are few and 6 are many, International Journal of Game Theory, 1973, Volume 2, Issue 1, pp. 141-201. http://www.imw.uni-bielefeld.de/papers/files/imw-wp-8.pdf

Latest published anti-trust cases of the European Commission: http://ec.europa.eu/competition/elojade/isef/index.cfm?fuseaction=dsp_at_by_date

Ariel Pakes is the Thomas Professor of Economics in the Department of Economics at Harvard University. Professor Pakes was educated at the Hebrew University of Jerusalem before obtaining his Ph.D. at Harvard University. Subsequently, he taught at the Hebrew University before being appointed professor in 1984. Prestigious universities such as the University of Wisconsin, Yale, the University of Chicago and NYU have had the pleasure of hosting him in visiting or professorial positions over the years. Professor Pakes was named Distinguished Fellow of the Industrial Organization Society in 2007. The American Academy of Arts and Sciences elected him fellow in 2002. He was awarded the Frisch Medal by the Econometric Society in 1986, and was elected a fellow of that society in 1988. Professor Pakes' research has been in econometric theory, industrial organization (I.O.), and the economics of productivity and technological change. He has developed tools that allow the empirical analysis of I.O. models. Recently, he has investigated ways of simplifying estimation and inference in complex behavioral models through the use of weaker equilibrium concepts and inequality constraints. His recent empirical work includes an analysis of the impact of US health reforms on hospital choices, hospital prices and financial incentives to physicians, as well as of the impact of the break-up of AT&T on productivity in the telecommunications equipment industry.

Most of your research is focused on developing tools to answer difficult empirical questions, mainly simulation and semi-parametric models. Could you briefly explain how they are useful and in which contexts they are used?
I can give you examples. An individual's demand might be easy to estimate, but firm decision making depends on aggregate demand. To construct aggregate demand we have to sum up over individual demands, and simulation can do that. Suppose that I want both to allow each individual to have a different income and to allow the price coefficient to depend on income. If I can simulate different people, I can predict what each would do conditional on their income and then add up demand over the simulated individuals. So simulation is really just an integral, an easy way to do a summation. Before you could do it, there was no way of getting reasonable demand estimates for firms. Daniel McFadden cited this problem when he did logit demand systems (it was in the psychology literature before that). He called it the Independence of Irrelevant Alternatives. Take the car market. If you bought a Rolls Royce and I bought a minivan, and if there were no way to distinguish the characteristics I preferred from the characteristics you preferred, our second-choice car would be the same: the car with the biggest share. This does not make any sense. Allowing for differences among individuals that are associated with the price of the car they purchase allows the second choice of the person who chose the cheaper car to be cheaper (the sketch below illustrates the idea).

For the use of semi-parametrics, I will take an example from production function estimation. We often estimate production functions in order to use them to analyze productivity. Usually, we do it after there has been a major change in the industry: when there is a merger between the two biggest firms or, in my case, when AT&T was broken up. To analyze what happens after such major events we follow firms over time. However, some of the firms drop out. If you just keep the firms that stay the whole time, you can get a very biased view. Often the firms that exit are the firms that the change impacted negatively. So if you keep only the firms that stay, you get a positively biased view of the impact of the change. Of course, one has to be careful that the model used is appropriate for the markets studied. For example, in biotech there are a lot of little firms that attempt to develop new products. When they develop something really good, they sell it. Why do they sell it? Because they are not as good at producing and marketing the good as some of the bigger established firms. So these firms sometimes exit because they were successful, not because they were failures. You need a model of who drops out to guide the analysis. Those models are actually quite complicated, because you have to build in the pricing equilibrium and everything else that determines investment incentives. Once the overall model is specified, exit will be only a function of certain variables. It will be a complex function that depends on a lot of details of the model, but you can control for the drop-out probability by conditioning on the variables it depends upon in a non-parametric way. This, and some other details that use semi-parametrics to take account of the fact that productivity may be correlated with input choices, enables you to get estimates of the production function, which in turn allows you to go back and analyze productivity. Olley & Pakes (1996, Econometrica 64: 1263-1297) provide the needed details.
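To make the first of these ideas concrete, here is a minimal sketch of simulation-based aggregate demand, assuming a logit model in which the price coefficient depends on each simulated consumer's income. All numbers are invented for illustration; this is the general idea Pakes describes, not the implementation from any particular paper.

    import numpy as np

    rng = np.random.default_rng(1)
    prices = np.array([10.0, 25.0, 60.0])    # three products, from cheap to luxury
    quality = np.array([1.0, 2.0, 3.0])

    def aggregate_shares(prices, n_sim=100_000):
        # Draw heterogeneous consumers; richer people are less price-sensitive.
        income = rng.lognormal(mean=3.0, sigma=0.5, size=n_sim)
        alpha = 5.0 / income
        v = quality[None, :] - alpha[:, None] * prices[None, :]
        v = np.column_stack([np.zeros(n_sim), v])   # outside option with utility 0
        e = np.exp(v - v.max(axis=1, keepdims=True))
        probs = e / e.sum(axis=1, keepdims=True)
        # Aggregate demand is just the average: an integral done by simulation.
        return probs[:, 1:].mean(axis=0)

    print(aggregate_shares(prices))

Because price sensitivity varies across the simulated individuals, the model escapes the IIA problem: consumers who choose the cheap product tend to be price-sensitive, so their second choices are cheap too.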
As econometric tools and rich datasets become ever more available, more complex models can be used. How frequently are static BLP models and dynamic models used in antitrust and merger cases? What weight do the competition authorities put on such analysis?

It is changing slowly, and they are being used more. The problem is not that people do not want to do it; it is that you need to have an answer for the court when the court convenes. Merger and anti-trust analysis goes in two stages. There is a first preliminary stage, where the authorities decide whether they are going to investigate the case thoroughly or just let it happen. That preliminary stage has to be quicker than you can typically do BLP. There are some exceptions. They are doing things like BLP in some health care cases now, because they have been so interested in that industry that they have datasets up and running. It will probably become more and more like that, but the basic constraint is just time. BLP is now starting to be used by both the authorities and the private sector. I do not think the dynamics have been used nearly as much, except in research. The dynamic models are more complicated and require more assumptions. Of course, all of these models are approximations. Somebody is going to make a decision, and the only question for the authorities is: among those who can answer the question in time, who has the best approximation? The time constraint typically kills the dynamic models, but there is a movement now to use simpler notions of equilibria. These notions encompass the standard notion, but also admit equilibria which are easier to compute, and which might actually also approximate behavior better (the paper in the QJE by Chaim Fershtman and me is an example).

Given stringent data requirements and time restrictions, full merger simulations are difficult to undertake. One solution is the upward pricing pressure (UPP) index. What is your opinion of using such a tool?

The UPP depends on a particular institutional structure. If the merger takes place between upstream firms in a vertical industry, UPP does not make any sense. The upstream firms are selling to a downstream firm which is then re-marketing to consumers. The way prices are set between the agents is through a bargaining process. The problem of this sort I have thought most about is bargaining between hospitals and insurers. What happens? The hospitals have costs; the insurers sell insurance policies and get premiums for them. The contracts between the hospitals and the insurers split the profits between the costs and the premiums. The question is, who gets more of the profits? It depends on what the outside alternative is for each agent. If I have the only children's hospital in Boston, no insurer who wants to insure families can dare not to have me, because they will never get families. That means I will get a lot of the profits, because the outside option of the insurer is very small. He will try to keep me in at any cost, and I will know that. That is a different equilibrium concept from the one that is behind the UPP. The equilibrium concept implicitly being used in the UPP calculation is Nash in prices. Depending on the setting, it can be the right thing to do.
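For readers unfamiliar with the index, the standard first-order UPP screen fits in a few lines. The sketch below uses the textbook formula (the diversion ratio times the rival product's margin, net of an efficiency credit); the numbers are made up for illustration and do not refer to any actual case.

    def upp(diversion_12, price_2, cost_2, efficiency_1, cost_1):
        # Upward pricing pressure on product 1 after merging with product 2:
        # the margin recaptured on diverted sales, net of the efficiency credit.
        return diversion_12 * (price_2 - cost_2) - efficiency_1 * cost_1

    # 25% of product 1's lost sales divert to product 2, which earns a margin of 4;
    # the merger is credited with a 10% marginal-cost efficiency on product 1.
    print(upp(diversion_12=0.25, price_2=10.0, cost_2=6.0, efficiency_1=0.10, cost_1=7.0))
    # 0.3 > 0: the screen flags upward pricing pressure on product 1

As Pakes notes, the screen presumes Nash-in-prices competition between the merging products, which is why it is ill-suited to bargaining settings.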
Your papers on moment inequalities present a general framework that can be used to analyze single- and multiple-agent problems. Can you provide us with the intuition behind it through a policy-relevant application?

The intuition for moment inequalities is revealed preference. Take an example from Joy Ishii's work. She analyzes how a bank chooses a number of ATMs (automatic teller machines). There is a cost of installing and operating an ATM that we do not observe. We do know how much it costs to buy one, but we do not know how much it costs to keep it going, service it, and fix it when it goes wrong. We need to estimate that cost in order to analyze the impact of different legislation on welfare. Here is one way to get an estimate. Joy estimated a model which allows you to compute how much more revenue the firm would get if it invested in one more ATM. It must be the case that if the firm chose not to add one more ATM, the cost must have been greater than the expected profit gain. That gives me an inequality from one side: I have a lower bound on the cost. Also, I know that the firm did choose the last ATM it installed. So the profit it got from that one must be greater than the cost. That gives me an upper bound. Now I have a lower and an upper bound (sketched below). In the US there was a congressional committee deciding whether the fee for using an ATM should be the same for all ATMs, whether or not the customer belonged to the bank that owned it. The logic was that you should not have to walk ten miles to your bank's ATM when it is all the same network; you might as well go to the closer one owned by a competitor's bank. The banks said, "Well, if you did that, somehow we would have to pay for the ATMs." So we have to figure out what we would have to charge customers at every bank just to maintain the system. Joy Ishii ran the counterfactual of what would happen if everybody paid the same fees no matter where they went. The result was that consumers did not benefit very much from the proposed legislation. What happened was that there was a very big restructuring of the market: big banks got smaller, smaller banks got bigger, but actual consumer surplus stayed around the same. The cost of the ATMs to consumers went up. The big banks had been subsidizing ATMs and making you pay through lower interest rates on demand deposits.
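The bounding logic can be illustrated with a toy calculation (all numbers invented, not from Ishii's paper): the revenue gain from the machine the bank declined bounds the unobserved cost from below, and the gain from the last machine it installed bounds it from above.

    def revenue_gain(k):
        # Hypothetical incremental revenue from a bank's k-th ATM,
        # diminishing in k as machines start to cannibalise each other.
        return 120.0 / k

    k_chosen = 4   # suppose the bank stopped at 4 ATMs

    lower_bound = revenue_gain(k_chosen + 1)  # cost > gain from the declined 5th machine
    upper_bound = revenue_gain(k_chosen)      # cost < gain from the 4th machine it built
    print(f"per-ATM operating cost lies in ({lower_bound:.0f}, {upper_bound:.0f})")

Averaging such inequalities across many banks, with expectations standing in for realised gains, yields the moment inequalities that deliver set estimates of the cost.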
Even though there are prominent researchers in empirical industrial organization in the EU, there seem to be fewer in relative terms than in the U.S. Why do you think that is the case?

It's true. Much of empirical I.O. started in the U.S., whereas Europe was always strong in theory and theoretical econometrics, especially France. There was not much of an empirical tradition, at least that I am aware of. I think it is interesting that the part of the empirical work that Europeans are picking up on is the part that combines theory with empirical work. And I think the reason is that everybody around here knows some theory. That makes it easy for you to catch up. There is a long history of empirical work in the U.S., starting with the agricultural research stations, because they had good data. The United States had a lot of money to throw into that. After the Second World War, when all the empirical work started, Europe did not have as many resources as the U.S. had. I am surprised by how many empirical people are here now. It is really different from when I used to come before.

6) How important is it for your research to meet with people working in industry?

A lot, I think. Different industries have different institutions, and we cannot work with models that are general enough to adequately approximate behavior in all industries. As a result you have to have enough knowledge to tailor the work to the details of the industry studied. Since I am mostly a methodologist, when I work on an empirical problem I tend to work with somebody who knows a lot about the industry studied. My last paper was with Kate Ho, who knows about hospitals, health care and hospital choice; Steve Olley knew about telecommunications systems; Levinsohn studied autos. What is true about I.O. now is that there is too much for one person to know. So working in groups is good. And it is also fun! It is just much more fun working with another person.

7) What are the next big questions in I.O. and what are the most promising methodologies being used?

In empirical work, researchers are likely to start worrying about different equilibrium assumptions, not just Nash in prices or Nash in quantities. They are also likely to worry more about how firms form perceptions of what other firms are going to do, rather than just assuming an instantaneous Bayes-Nash equilibrium. Some of the decisions that firms make involve very complicated processes, and it is very hard to think that firms know how to compute what standard theory says they do. The complexity issues show up a lot in dynamics. We will start to figure out ways of simplifying the analysis. There are equilibria that do not require so much of the firm or the consumer: they do not require so much information or computation. I think the other set of research questions concerns markets where Nash in prices does not apply: vertical markets, partially regulated markets and platform markets. Estimating demand and costs will go into the subsequent analysis of these issues, but the focus will be on how to do empirical analysis in these and in dynamic market settings.

I joined the TSE in September 2013 as an Assistant Professor. Although my research interests are reasonably broad, I mainly work on theoretical industrial organisation. An area of special interest to me is consumer search. Imagine there is a particular product that you wish to buy: for example, a book or a grocery item. According to the canonical textbook model, you know precisely which firms sell that product and how much they charge. Armed with this information, you purchase it at the lowest price available. Firms price aggressively in order to persuade you to buy the product from them. Unfortunately, few real-world markets work like this. For example, most consumers are poorly informed about prices. Usually consumers can only learn about prices by visiting websites and wandering around stores, both of which are time-consuming and therefore costly. One might conjecture that if the cost of gathering additional prices is small - as might be the case online, for example - this imperfect consumer information would make little difference. Surprisingly, economic theory suggests that this conjecture is false. This is the conclusion of the well-known "Diamond Paradox", named after the Nobel laureate Peter Diamond. To gain some insight into how search costs affect equilibrium prices, consider the following simplified model. Suppose there are two symmetric firms which stock the same product. Consumers have heterogeneous valuations for this product, distributed on an interval [a, b]. In order to learn a retailer's price (and then buy its product), consumers must incur a search cost s > 0. How should consumers optimally behave in such an environment? To begin with, consumers must form a belief about how much each firm is charging. To simplify matters, let us suppose that both firms are expected to charge the same price pe.
Consumers whose valuation exceeds pe + s expect to earn positive surplus, and therefore search one randomly selected retailer; other consumers believe that search is not worthwhile, and stay at home. How should the firms in this market behave? These firms are free to choose any price they like - in particular, they are under no obligation to charge pe. There are two separate cases of interest. Firstly, when pe + s > b the expected price is so high that no consumer searches. Firms face zero demand, and are therefore indifferent about what price to charge. As a result any pe > b - s constitutes an equilibrium, in a trivial sense. Secondly, when pe + s ≤ b the expected price is sufficiently low that some consumers do search. The problem here is that once a consumer enters a store or visits a website, she reveals that her valuation exceeds pe + s. Moreover, her search cost is sunk - if she decides that she wants to buy from the other retailer, she must incur an additional s. Consequently, if consumers expect a price pe, each retailer can charge pe + s and still sell to every consumer who searches it. Equivalently, it is not rational for consumers to expect a price that satisfies pe + s ≤ b. The only possible outcome of the game is the first one, in which consumers expect high prices and therefore do not search. Hence we have a paradox. Perfect information leads to strong competition and low prices; small amounts of imperfect information lead to high prices and market breakdown [1].
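The hold-up logic can be checked numerically. Here is a toy sketch (my own illustration, not from the article): given any expected price pe at which some consumers would search, each firm's best response is to charge pe + s, so expectations unravel upward until nobody searches.

    # Valuations uniform on [a, b]; consumers search iff their valuation >= pe + s.
    a, b, s = 0.0, 1.0, 0.1

    def best_response(pe):
        # Any consumer who searched has a valuation of at least pe + s, so the
        # firm can charge pe + s without losing a single searcher.
        return pe + s

    pe = 0.5
    while pe + s <= b:
        print(f"expected price {pe:.1f} -> profitable deviation to {pe + s:.1f}")
        pe = best_response(pe)
    # Once pe + s > b, no consumer searches: the market has broken down.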
This paradox is an important and intriguing result. Several authors have suggested ways to weaken it. One class of models attacks the problem from the consumer side. For example, consumers might plausibly learn several prices during a single search (Burdett and Judd 1983). Alternatively, some consumers may enjoy shopping around and comparing prices (Varian 1980, Stahl 1989). In both cases, firms have an incentive to set relatively low prices, in an attempt to win business from the better-informed consumers. Another class of models attacks the Diamond paradox from the firm side. Essentially the paradox arises because firms cannot commit not to 'hold up' consumers ex post with a high price. Therefore if firms can inform consumers about their price via advertising, they can guarantee consumers some surplus (Wernerfelt 1994, Anderson and Renault 2006). However, an important point is that papers within this literature have traditionally made the (implicit) assumption that firms sell only one product.

My research on this topic seeks to relax the assumption of single-product retailers. From a practical point of view, most firms do sell a wide range of products. Moreover, consumers frequently buy several items in one shopping trip. From a theoretical point of view, allowing firms to stock multiple products can also overcome the above 'no search' problem. Intuitively, this is because in the single-product case only consumers with a high valuation decide to search, so retailers exploit this and charge a high price. However, in the multi-product case, somebody with a low valuation for one product may search because she has a high valuation for another. This weakens a firm's incentive to hold up consumers: when increasing one of its prices, it loses demand from consumers who like the product a little, but who are primarily shopping for something else. As such, it is possible to construct an equilibrium where consumers have correct expectations about each retailer's prices, and still find it optimal to search.

Once we have this equilibrium, we are potentially able to answer several other interesting questions. For example, when consumers have search costs, what are the advantages to a retailer of stocking a wider range of products? How do pricing incentives change when a firm sells more products? Some retailers send out adverts containing information about the prices of a small proportion of their total product range. How much can consumers learn from these adverts? For example, if a firm offers a good deal on one product, should consumers expect the firm to raise the prices of its other goods? Significant progress in answering these questions can be made using the following model. Suppose there are two firms, each of which sells the same n products. Consumers regard these products as independent, and would like to buy one unit of each. Valuations for each of the products are drawn independently from an identical distribution. As before, assume that consumers must incur a cost s > 0 in order to travel to a retailer and learn about its prices. In addition, suppose that some consumers are 'loyal' to a particular store (and will only shop there), whilst others are 'non-loyal' and are happy to shop wherever they think they can get the best value for money. The move order of the game is then as follows. In the first stage, the two firms simultaneously choose their prices. They also have the opportunity to pay an advertising cost and inform consumers about one of their prices. At the second stage, consumers observe adverts (if any) and form expectations about the prices being charged by each retailer. Consumers then choose between staying at home or searching one of the two firms. Non-loyal consumers then have the opportunity to search the other firm if they wish. Consumers observe the actual prices being charged by the firms they have searched, and then make their purchases.

As a benchmark, first suppose that neither retailer chooses to advertise (for example because the cost of doing so is prohibitively large). As discussed above, whenever the firms' product ranges are sufficiently broad, there exists an equilibrium in which consumers search. Moreover, this equilibrium is symmetric: firms charge the same prices, and therefore all consumers search at most once. We can also prove that when the firms stock more products, they charge lower prices on each individual product. Intuitively, a small retailer is searched by a relatively small group of consumers, who have high valuations on many of its products. A large retailer, on the other hand, offers many more products on which positive surplus can be earned. As a result, a larger retailer is searched by a larger number of consumers, who on average have a lower valuation for any individual product. Larger retailers should therefore charge lower prices, because they endogenously attract consumers who are more price-sensitive (the sketch below illustrates this composition effect).

Now suppose that a retailer sends out an advert containing the price of one of its products. Consider a thought experiment in which the firm exogenously varies the level of its advertised price. Notice that as the advertised price falls, some new consumers who like the advertised good decide to search. Since they were not previously searching, these additional consumers must have relatively low valuations. Anticipating this, the firm therefore finds it optimal to also reduce its unadvertised prices in order to sell more products to these new searchers. Therefore consumers (rationally) expect a positive relationship between a firm's advertised and unadvertised prices. This happens even when products are completely independent and unrelated.
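The composition effect in the benchmark can be verified with a short Monte Carlo. The sketch below is my own illustration under simple assumptions (uniform valuations, a common expected price pe, search cost s): as the number of products n grows, more consumers search, and the searchers' average valuation per product falls.

    import numpy as np
    rng = np.random.default_rng(2)

    def searcher_stats(n_products, pe=0.5, s=0.3, n_consumers=200_000):
        # Each consumer draws independent uniform valuations for n products and
        # searches iff expected surplus at the expected price covers the cost.
        v = rng.uniform(0, 1, (n_consumers, n_products))
        surplus = np.maximum(v - pe, 0).sum(axis=1)
        searchers = surplus >= s
        return searchers.mean(), v[searchers].mean()

    for n in [1, 2, 5, 10]:
        share, avg_val = searcher_stats(n)
        print(f"n={n:2d}: share searching = {share:.2f}, "
              f"searchers' mean valuation = {avg_val:.2f}")

With one product, only consumers with valuations near the top search; with ten, almost everyone does, so the average searcher looks much more like the average consumer, which is what disciplines a broad retailer's prices.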
Finally, consider how firms choose their overall advertising strategy. Assuming the advertising cost is not too large, we can prove that a firm optimally behaves in the following way. Sometimes it charges a high 'regular price' for each product and does not advertise. At other times it advertises a low price on one randomly selected product; the discount on that product is also random, drawn from a distribution. Intuitively, a firm must randomise in all three dimensions, otherwise its rival might be able to guess and undermine its promotional strategy. In light of the positive relationship between advertised and unadvertised prices, randomness in advertised prices generates (from an ex ante perspective) randomness even in the prices of products which are not being advertised. As such, price dispersion is a robust feature of the model. Overall then, even a relatively simple model can generate quite rich predictions about how retailers should choose their pricing and advertising strategies when consumers face search costs.

[1] See Diamond (1971) and Stiglitz (1979). When individual consumers have elastic (rather than unit) demands, firms end up charging the same price as a monopolist. In this alternative setting, consumers do search and the market does not break down. Nevertheless the equilibrium price is still very different from the full-information case.

References

Anderson, S. and Renault, R. (2006): 'Advertising Content', American Economic Review 96(1), 93-113.
Burdett, K. and Judd, K. (1983): 'Equilibrium Price Dispersion', Econometrica 51(4), 955-969.
Diamond, P. (1971): 'A Model of Price Adjustment', Journal of Economic Theory 3, 156-168.
Stahl, D. (1989): 'Oligopolistic Pricing with Sequential Consumer Search', The American Economic Review 79(4), 700-712.
Stiglitz, J. (1979): 'Equilibrium in Product Markets with Imperfect Information', The American Economic Review 69(2), Papers and Proceedings, 339-345.
Varian, H. (1980): 'A Model of Sales', The American Economic Review 70(4), 651-659.
Wernerfelt, B. (1994): 'Selling Formats for Search Goods', Marketing Science 13(3), 298-309.

Jaume Ventura is a Senior Researcher at CREI and Professor at UPF. Prior to joining CREI and UPF, he was a tenured associate professor at MIT. He has also taught at the University of Chicago, Northwestern University, London Business School, and INSEAD. Professor Ventura has worked full-time for the World Bank, and acted as a consultant for the Inter-American Development Bank. He is a Fellow of the European Economic Association, a Faculty Research Fellow of the NBER, and a Research Fellow of CEPR, where he was Co-Director of the International Macroeconomics program (2007-11). He has been editor of the Economic Journal and associate editor of the Quarterly Journal of Economics, the Review of Economics and Statistics, and the Journal of the European Economic Association. His research specializes in macroeconomics and international economics. He is a member of the Barcelona GSE's Academic Programs Committee and Director of the Barcelona GSE Master Program in International Trade, Finance and Development.

You received the prestigious ERC grant in 2010 for your project "Asset Bubbles and Economic Policy". What can you tell us about your main findings up until now, and how do you plan to extend your research?
The idea that we have pursued for quite a while now is to try to understand how these abrupt changes in asset prices can affect the macroeconomy. So, mostly with the help of my coauthor Alberto Martin, we have developed a series of models in which we show how the presence of bubbles affects the workings of the macroeconomy. And indeed it is quite appropriate that you ask here, because what we have done is to take a very old model from a professor of yours, Jean Tirole. Jean Tirole built a model of asset bubbles back in the early 80s; in 1985 he published a seminal paper on asset bubbles in an overlapping generations economy, in which he took the standard growth model and found that there are many equilibria in which prices have a bubble component. Now, from the point of view of theory the model was fascinating, because we didn't know that the neoclassical model had so many equilibria, and we didn't know what they looked like. But from the point of view of applied macro it was a little bit disappointing, because when bubbles came in, what they did was to crowd out capital: people, rather than investing, would buy the speculative assets, and so when a bubble arose the capital stock and output would go down. So you would have a theory in which there might be bubbly episodes, but when these bubbly episodes take place you are actually in a recession, and it is when the bubble collapses that the economy goes back into a boom, because the bubbly asset disappears and people start to invest and accumulate capital again, and so on. The paper was very nice because these bubbles were rational and you could see how they worked, but obviously that is not the correlation we observe in the data. So what I have been working on is models where these same bubbles exist but have additional effects; in particular, when you introduce financial frictions into the Tirole model, you find that bubbles have two effects. On the one hand, they provide an alternative asset, which has the crowding-out effect that Jean Tirole worked out. On the other hand, when you have bubbles, they relax credit constraints and allow some firms to borrow more, and that creates an expansionary impact. All my work has been about analysing which of the two effects dominates and what policies are appropriate under different circumstances. [laughs] I don't know if I answered your question too much.

So will you be continuing this line of research going forward?

That is the plan, yes.

Even after the end of the grant in 2015?

Presumably I haven't exhausted the things to say. To be honest, as long as I have things that I think are interesting I will continue doing that, so... hopefully yes.

Your research has contributed a lot to the understanding of bubbly episodes and the introduction of bubbles as a main element of macroeconomic models. Is it worth letting bubbles exist, or should we try to prevent their emergence?

In the models we have developed in this line of research, bubbles play a very important role: they help markets to function. Let me put it this way. Imagine that there are some firms that have very good investment opportunities but are credit constrained. There are other firms that are perhaps investing because the cost of funds is cheap, despite not having really good investment opportunities. The second group does not lend to the first because the market is not working well, and the market is not working well because the constrained firms do not have enough collateral.
In our models, bubbles create wealth and create collateral, and allow the financial market to work better, to transfer resources from the lower-productivity to the higher-productivity investors. When bubbles do that, in general, we find that they are helping the economy to function. They are what allows the best investments and the highest growth to take place. So, in this sense, my research (or our research, because it is mostly with Alberto Martín) has led to the conclusion that the bubbles themselves are not really the problem; the problem is their collapse. The typical notion of a bubble outside of academia, shared by policy-makers and non-academic economists, is that bubbles are a mistake: markets are making a mistake. And if you have a mistake, the signals are scrambled. And when signals are scrambled, you make the wrong investments. And this is bad because it lowers productivity. And what you need to do is to set the signals right. In our work, that is not the case. In our work bubbles are rational pyramid schemes that provide wealth and help financial markets to work better. So, one of the things that we find is that sometimes governments have an important role in helping to sustain these bubbles, not in eliminating them. That does not mean that there are no bad bubbles. But typically, the bad bubbles are contractionary; they are not associated with economic growth. So, it is no longer "let the markets do their job and do not intervene". Markets have lots of limitations. And indeed, in our research, as in the early Tirole research, bubbles appear because there is something wrong in the markets, and they partly solve that problem. Bubbles are fragile because they depend on expectations about the bubble component. For example, we think about bubbles in terms of an excess in equity prices: imagine that when you look at the net present value of a firm its value is 100, but it is traded at 120. Why would that be? Well, because somebody expects the price to be overvalued tomorrow as well. That is a bubble that is rational. But if somebody changes their mind, the overvaluation can change. So, bubbles are fragile. But when they exist they play a role. Or take a credit bubble. For example, a firm borrows in excess of the cash flows that it can generate. But why does the firm borrow? Because the creditors know that the firm will borrow tomorrow to pay them back. And why will new creditors appear tomorrow? Because they expect that the firm will borrow the following year to pay them back. These pyramid schemes, or Ponzi games, that we usually think are bad can actually be part of a competitive equilibrium in a market with rational traders. And they actually play a useful role.
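The arithmetic behind such a rational rollover scheme is easy to check. Below is a toy calculation with invented numbers, illustrating the classic condition from this literature: perpetual rollover can be sustained when the interest rate r is below the economy's growth rate g, so that the growing debt still shrinks relative to the economy.

    r, g = 0.02, 0.03          # interest rate and GDP growth rate (illustrative)
    debt, gdp = 100.0, 1000.0  # the firm rolls over its debt every year

    for year in range(30):
        debt *= 1 + r          # new creditors repay old ones, principal plus interest
        gdp *= 1 + g

    print(f"debt/GDP after 30 years: {debt / gdp:.3f} (started at 0.100)")

Because r < g here, the ratio falls to about 0.075: the scheme never outgrows the economy. If expectations shift and new creditors stop appearing, the rollover fails, which is exactly the fragility described above.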
How can policy makers react when bubbles burst?

They should try to sustain the bubble. And if that is not possible, there is no way out. There is a common view that links the existence of the bubble with the fact that we are doing very poorly now; it is about "paying for the sins of the past". I do not see hard evidence that supports this view. The only thing that I see is that when the bubble was there we were growing a lot and we had a lot of intermediation. When the bubble collapses, we are not growing and there is no intermediation. And I look at the world and it is not fundamentally different. We have the same firms, the same people, the same human capital; we have the same institutions... It is not as if we suddenly had an earthquake and lost half of our capital stock, or forgot our education so that the human capital in the economy changed. We are basically the same, but we are totally disorganized, and I think that the bubble helps us to organize by providing the collateral that the economy needs to work.

Let's take a closer look at the Eurozone crisis. The initial rescue packages that were implemented in the Southern member states did not seem to perform very well and had to be revised. Some people believe that these initial packages were motivated by an element of punishment rather than by incentives for recovery. Do you agree with this view? Could this be a reason for their not performing as well as expected?

I agree very little with that. Actually, the punishment aspect was very small, and personally I would have made it larger, to put it that way. European economies, or economies of industrial countries, have always been telling poor countries that if they have a fiscal problem they have to adjust. We have used the IMF for that. We have given them programs which were typically very short, like three years; there were penalty rates and very tough conditions. Now we have some problems in some European countries. The first programs were programs with the IMF and some European nations; for example, the first Greek one. Then programs were designed in a more organized way with what is called the "Troika" and so on. The only part that kept including some penalties was the funding coming from the IMF. The other parts had no penalty rates: they were at very low market rates. And the conditions have been 10 years, 15 years; some of the Greek bonds that have been rescheduled are at 30 years. So the conditions are far better than anything we gave before to the countries that came to the IMF. Were we wrong before? Are we wrong now? I do not know. I do not think that this is the major problem. I think that there are a lot of reasons why the countries are not recovering quickly.

The Euro crisis has exacerbated the movement for independence in Catalonia. You yourself are part of the Wilson Collective. What can you tell us about the economic foundations of the independence movement and about your work in this group? Do you think this separatist movement is compatible with calls for greater European unity?

You are asking me various questions; let me go step by step. First, let me talk about the Wilson Initiative, because presumably many of your readers will not know what it is. As you know, there have been a lot of calls in Catalonia for a referendum on whether Catalonia wants to remain within Spain or wants to have a new State of its own as an independent country. This has been a grassroots movement that has come from the people and has crystallized especially in a couple of very prominent demonstrations on our National Day, last year and this year as well. There was also a variety of other popular demonstrations, and there was the result of last November's elections to the Catalan Parliament, where a majority of the seats went to parties in favour of having this sort of referendum; in particular, 107 out of 135 seats. When this started, the Spanish press and a number of other establishment voices in Spain started to publish studies that basically painted a very dark picture of what an independent Catalonia would look like.
They claimed things such as that Catalonia's GDP would fall by 30%, that Catalonia would be unable to pay for pensions, that Catalonia would be left outside of the international community and, as a result, unable to enjoy all the treatment that we Catalans currently enjoy as citizens of Spain, and so on. Most of this campaign and most of these claims are plainly wrong, absolutely wrong from basically any academic viewpoint. Yet most of the population is sensitive to these sorts of claims when they appear in the press and are repeated by government officials and so on. So a few academics, six of us in particular (five economists and one political scientist, all of whom have been tenured at top institutions in the United States, and all friends), talked and agreed on the need to do something: to somehow explain to the Catalan people the reality of this, what we know and what we do not know, because there is also a lot of uncertainty. That is why we created a webpage and wrote a series of documents on a variety of topics related to the economics of independence; we have also made ourselves available to the press and to various media outlets, and they have actually taken that commitment seriously, so we participate actively in debates, discussions and so on. So that is what the Wilson Initiative is.

Now, let me move to the second question. The Catalan independence movement is not only about economics. A lot of it is about the treatment of Catalan culture within the Spanish legal system (a mistreatment, a supporter of independence would say). Since you are asking me about economics, I will not say much about the cultural part, but I will tell you that it is as important as, or for many Catalans much more important than, the economic part. We also think that Catalonia is mistreated from an economic viewpoint: over the last thirty years, Catalonia has made an average transfer of 8.5% of its GDP to the rest of Spain, which is the difference between what we pay in taxes and what is spent in Catalonia, and we think that this is unfair, because we would do much better if we could spend part of this money in Catalonia. The other aspect is that most of this mistreatment does not come from Catalans being treated differently when it comes to paying taxes: we pay the same taxes, we receive the same pensions as the rest of Spain and we have the same unemployment compensation. Most of it comes from the investments that the Spanish State makes in the different regions. Just to give an example, Catalonia has one of the highest capital/labour ratios in Spain, and yet the lowest stock of public capital per person in the whole of Spain. Last week, the Government announced the proposed budget for next year: Catalonia contributes close to 20% of the resources that the Government raises in taxation, but investment in Catalonia is only 9%, and this is slightly higher than the average of the last thirty years. However, when the Catalans complain, the answer from the Ministerio in Madrid is that Catalans should not cry, that now is the time to invest elsewhere. So, we think that this is unfair.

For the last question, about the compatibility between the Catalan separatist movement and calls for greater European unity, I will give two answers.
The first is that I think what we have to unite around are the values that underlie the European Union: human rights, democracy, respect for the individual, and so on. And nothing will unite us more than having a democratic decision, not only on who we want as a leader, and not only about the many other things we decide, but also about what the right borders are and what forms of State we want. So I think that a vote in which the Catalans can freely express what they decide would be something that unites us all, because we are all democrats.

The second answer is more technical. I think that there was a time (and this is something I have been thinking about lately; it is something I am considering researching) in which the borders of markets and the borders of nations, in terms of culture, were very close. The same Government took care of economic and cultural issues, where "economic" could mean having your own currency or regulating banks and the labour market, and "cultural" could mean the type of education you want for your children, the type of civil law, the type of judicial system you desire, and so on. The two matched each other, and it made sense to have Governments of that size. As time went by, market borders expanded dramatically, but the borders of culture did not expand that much. There is a lot of global homogenization, that is true, but you can still distinguish different regions. As this happened, the nation state expanded, and now you have Governments that are too large vis-à-vis the cultural borders but too small vis-à-vis the market. There comes a point where there is no reason why these two sets of policies should be under the same Government. The whole concept of the nation state may be obsolete now, and rather than a single jurisdiction that does everything, what we need is a set of overlapping jurisdictions. For example, I am a total believer in having a single European market, a banking regulation and union, a single currency, and so on; these are powers that should move up. But at the same time, I also believe that now that we do not need a single Government to protect our markets and protect us from external aggression, we can devolve powers to the cultural units. Catalonia, for instance, can now enjoy the defence of the whole of Europe and the joint markets of all of Europe, so it does not need to surrender its identity to a Government in exchange for this protection or these markets. I think there will be a natural tendency to create supranational institutions and to fragment the State.

Why are there so many debt problems around us in Europe? Is it because of bad governments? Because of financial markets that give wrong incentives? Because of the lack of efficient policy instruments such as structural reforms? Because of speculative strategies? Or because of bad economists?

I think it was because of a bad reaction to the financial crisis. The financial crisis initially led to a collapse in revenues. When the bubble collapses, activity collapses, and government revenues go down a lot. Then the IMF and all the authorities said: "We think that this is a Keynesian-style crisis; we need a fiscal expansion." So governments started spending a lot. And a lot of this spending went on things that were completely absurd.
For example, Spain spent a lot of money on public works that were worthless, and I think this was the case across the Union. Then banks could not sustain themselves, and governments started pumping money into them. So, if we put together lower revenues, increased public spending and transfers to the banks, we get a massive public debt. For me, all of these are decisions, and they are bad decisions. So my answer to your question is: because of bad governments, probably advised by bad economists. But that is another matter.

Why did economic models fail to predict the financial turmoil and the subsequent crisis? In what way can we improve them in order to be better prepared in the future?

This is an interesting question, because it touches on the connection between our knowledge and the world. I write models in which individuals are surprised when crises come, because this is the real world. You asked me at the very beginning of the interview about my research on asset bubbles. That research models bubbles as stochastic, so there is a probability that they collapse. Whether they collapse or not depends on what we call shocks to investors' sentiment. What do shocks to investors' sentiment mean? They are movements in people's beliefs that I cannot account for. The moment I have a theory for them, everybody will have a theory for them, and these moments will not occur: they would be totally predictable. I do not think that with the current state of the theory we can predict these things. What we can do is build ad-hoc empirical models which, in the past, have shown correlations between certain variables and the collapse of asset prices, or investors' sentiment. Perhaps these ad-hoc empirical models can be refined to give us some clue about what to do. But I do not think economics is about that. You cannot predict the weather; you cannot predict a natural disaster. And I think we cannot predict these changes in market prices. What economics is about is knowing what to do when this happens; reacting to it. I cannot tell you whether you are going to be sick or have an accident, but there will be a doctor who can fix it. I think this is what economic models should be doing, and I think this is also where we have failed. The first failing, not predicting the crisis, I am not too worried about; I am not ashamed of that. The second part, not knowing what to do after it happened, is much more serious. Together with many others, I am trying to change that. For the next time.

Do you think that the worst period of the crisis has passed and that we are moving towards recovery? What are the main dangers to watch out for on the path to recovery?

Well, I cannot predict whether things are going to turn for the worse or for the better. That is very difficult, and I will not get into it, because then I would just be guessing. What I do see is that the fiscal problems are much more acute and the social tensions are much larger. I think the probability of a serious social conflict is much higher now than at the beginning of the crisis. If a similar bad shock happens, we are going to be in a much worse position to handle it. But I do not know; I really cannot predict this.

The decision of the Eurogroup to impose "haircuts" on deposits as a condition for the Cypriot bail-out shocked depositors all over the Euro Area and was criticized by some economists.
Do you think it is possible for this to happen again in another European country? Are you in favour of such a measure for insolvent banks?

I am not in favour, and I do not think this is going to happen in other countries. I have not followed it in detail other than in the press, but I think the Cyprus case was a special one, because most of the holders and depositors were foreigners, mostly Russians, and it was a way to make foreigners pay for the European crisis. I am a little surprised that the European Union and the IMF allowed it.
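To make the "stochastic bubble" idea mentioned earlier in the interview concrete, here is the textbook formulation in the spirit of Blanchard and Watson (1982); this is a generic illustration, not a reproduction of the interviewee's own model:

```latex
% A rational bubble survives with probability \pi and bursts with
% probability 1-\pi (a "shock to investors' sentiment"):
B_{t+1} =
\begin{cases}
  \dfrac{1+r}{\pi}\, B_t & \text{with probability } \pi, \\[6pt]
  0                      & \text{with probability } 1-\pi,
\end{cases}
\qquad\text{so that}\qquad
\mathbb{E}_t\!\left[B_{t+1}\right] = (1+r)\, B_t .
```

Conditional on surviving, the bubble must grow faster than the interest rate to compensate investors for the risk of a collapse; on average it earns exactly the market return, which is what makes holding it rational while leaving the timing of the crash unpredictable.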
Herman K. van Dijk is affiliated with the Faculty of Economics and Business Administration, VU University Amsterdam, as professor of econometrics, and with the Econometric Institute, Erasmus University Rotterdam, as professor emeritus. He has been director of the Tinbergen Institute, director of the Econometric Institute, and professor of econometrics with a Personal Chair at Erasmus University Rotterdam. He has also been a visiting fellow and a visiting professor at Cambridge University, the Catholic University of Louvain, Harvard University, Duke University, Cornell University, and the University of New South Wales. He is a Fellow of the International Society for Bayesian Analysis, Senior Fellow at the Rimini Centre for Economic Analysis, and Honorary Fellow of the Tinbergen Institute. He received the Savage Prize for his PhD dissertation and is listed in the journal Econometric Theory's Econometricians' Hall of Fame among the top ten European econometricians. His research interests cover a range of topics in econometrics with a common theme: simulation-based Bayesian econometric techniques for inference, forecasting and decision analysis. For more information about the author and his research activities, visit his website: http://people.few.eur.nl/hkvandijk/

Jan Tinbergen (1903-1994) was awarded the first Nobel Prize in Economics in 1969, together with Ragnar Frisch, "for having developed and applied dynamic models for the analysis of economic processes".
Tinbergen’s motivation and basic approach to econometrics

The desire to combat the socio-economic consequences of the Great Depression of the 1930s was Tinbergen's motivation for using econometric modeling. His approach to studying periodic economic upswings and downswings contrasted with previous approaches to business cycle research. After a 19th-century undertaking by Juglar (1862) ascribing the recurrent business crises in Europe and North America to credit crises, and Jevons's (1884) study pointing to agricultural production cycles connected with sunspot numbers, several research projects in the early 20th century were devoted to the construction of so-called business cycle barometers. The purpose was to measure economic fluctuations through a particular index (or set of indices) with the aim of giving warning signals for turning points that would lead to a depression. An example was the Harvard Index of Business Conditions, known as the Harvard Barometer, constructed by a team led by Persons (1919). Another well-known descriptive approach to the business cycle during this period was initiated by Mitchell (1913). Mitchell's work was followed by that of Yule (1927) and Slutzky (1927), who suggested that the cumulative effect of random shocks could be the cause of cyclical patterns in economic variables. Frisch (1933) applied these ideas, introducing econometric models in which impulse propagation mechanisms led to business cycles.

However useful it could be as a starting point, Tinbergen criticized descriptive analysis as too vague for use in policy preparation, and started a quantitatively oriented research program to explore the possible economic causes of the periodic upswings and downswings in economic activity. In an earlier theoretical study, Aftalion (1927) had argued that lags in an economic model could generate cyclical variation in economic activity. Following this argument, Tinbergen specified a first simple case using a system of difference equations to express lagged responses of supply to price changes in a market for a single good. He noted that the systematic fluctuations that could arise in such a system had been observed in an empirical study of the pork market by the German economist Hanau (1928), a phenomenon that became known as the 'cobweb model'. Tinbergen subsequently generalized the specification of dynamic equations with lagged adjustment processes to macroeconomic settings, arguing that fluctuations in components of national product, such as investment and consumption expenditures, would lead to business cycle fluctuations in general economic activity. In 1936 he published the first applied macroeconometric model for the Netherlands: a dynamic model consisting of 22 equations in 31 variables. Employing what we now see as basic statistical techniques, such as correlation and regression analysis, it was to be used for the analysis of the particularly pressing unemployment problem. The specification of consumption and employment in this model anticipated elements of Keynes's theory (1936). This modeling exercise resulted in a strong policy recommendation in favour of a devaluation of the Dutch guilder to tackle unemployment. But its importance for the economics profession was far more profound: for the first time, the economic-policy debate was based on empirically tested, quantitative economic analysis and not on rather informally stated economic theory, the so-called verbal approach. Thus, according to Solow (2004, p. 159), Tinbergen's work during this period 'was a major force in the transformation of economics from a discursive discipline into a model-building discipline'.
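For readers unfamiliar with it, the cobweb mechanism mentioned above can be written in two lines. The linear form below is the textbook version; the coefficients a, b, c, d > 0 are generic placeholders, not Tinbergen's estimates:

```latex
% Supply responds to last period's price, demand to the current price:
q_t^{s} = a + b\,p_{t-1}, \qquad q_t^{d} = c - d\,p_t .
% Market clearing, q_t^{s} = q_t^{d}, yields a first-order difference
% equation in the price:
p_t = \frac{c-a}{d} - \frac{b}{d}\,p_{t-1} .
```

The price therefore oscillates around its equilibrium, with the swings dying out when b < d and exploding when b > d: exactly the kind of endogenous fluctuation from lagged adjustment that interested Tinbergen.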
The Keynes-Tinbergen Debate

The formulation of certain relations in Tinbergen's 1936 model showed some resemblance to Keynes's theory. Nevertheless, in an article in the Economic Journal of 1939, Keynes was remarkably skeptical of Tinbergen's work. Keynes labeled Tinbergen's method of estimating the parameters of an econometric model and computing quantitative policy scenarios 'statistical alchemy', arguing that this approach '... is a means of giving quantitative precision to what, in qualitative terms, we know already as the result of a complete theoretical analysis' (Keynes, 1939, p. 560). Their widely diverging views on the relevance of quantitative economic analysis were also illustrated by Keynes's reaction to Tinbergen's estimate of the price elasticity of demand for exports. When, in 1919, Keynes had strongly criticized the excessive war indemnity payments enforced upon Germany after the First World War, his argument had depended critically on the value of this elasticity. Tinbergen empirically found this value to be minus 2, precisely the value that Keynes had assumed a priori in his study. When informed about this, Keynes replied: 'How nice that you found the correct figure'. Keynes's critical attitude towards macroeconometric modeling and analysis originated from his view that the underlying economic theory should be complete, in the sense that it should include all relevant variables and set out in detail its causal and dynamic structure. Econometrics could be used only for measuring relations ('curve fitting' was the term used); it could not refute economic hypotheses or evaluate economic models. Tinbergen, on the other hand, argued that economic theories cannot be complete. Econometric research could be useful for scrutinizing elements of economic theories and for examining whether one theory describes reality better than another. Further, it could provide the numerical values of the coefficients in dynamic models that determine the cyclical and stability properties of the model, and, by applying a testing procedure of trial and error, it could yield suggestions for an improved specification of dynamic lags.

The debate still goes on

In this controversy Tinbergen's approach soon gained the upper hand, as increasing numbers of economists, especially in the United States, noted its practical results in terms of model construction and verification, including forecasting and policy recommendations, in particular for monetary policy. However, Keynes's comments on the role of expectations and uncertainty in macroeconometrics, and on specification and simultaneous-equation biases, remained relevant. Haavelmo (1943) advocated the use of probability theory in bridging the gap between theory and data in business cycle analysis. Later these issues would become the subject of intensive debate and research. The pioneering work of Thomas Sargent and Christopher Sims, Nobel Laureates in Economics in 2011, on constructing models that both fit the data and can be used for forecasting and policy is a clear example of the continuing debate. Discussing their contributions is beyond the scope of the present note; I only mention that forecast and policy implications based on their modeling and inference techniques are studied and used by almost all econometricians at the US Federal Reserve System and at European central banks.
Personal note

I met Jan Tinbergen late in his life, when I was director of the Tinbergen Institute (www.tinbergen.nl); I held that position from 1992 to 1998 and, after his death, again from 2008 to 2010. A brief story of my personal experience with Tinbergen shows his interest in empirical econometric work with potentially enormous policy implications. In 1994, just a few months before he died, he read my paper on the bimodal distribution of the world's income and the very low estimated probability that poor countries catch up with rich ones; see Paap and Van Dijk (EER, 1998). He saw it as a testimony to his efforts to make development programming for poor countries an important area of theoretical and practical research, and he invited me (with my wife) to discuss the paper in more detail on a Saturday afternoon in the spring of 1994. Regrettably, he passed away on the Monday before our meeting, but this shows how actively he followed empirical econometric research with substantial policy implications until the very last days of his life.

This note is an adjusted excerpt from: Cornelisse, Peter A. and Herman K. van Dijk, "Tinbergen, Jan (1903-1994)", in The New Palgrave Dictionary of Economics, second edition, eds. Steven N. Durlauf and Lawrence E. Blume, Palgrave Macmillan, 2008; online edition, 25 March 2008, <http://www.dictionaryofeconomics.com/article?id=pde2008_T000065>, doi:10.1057/9780230226203.1710. References can be found in that paper.

Selected papers

Keynes, J.M. 1939. Professor Tinbergen's method. Economic Journal 49, 558-68.
Keynes, J.M. 1940. Comment. Economic Journal 50, 154-6.
Tinbergen, J. 1939. Statistical Testing of Business Cycle Theories. I: A Method and Its Application to Investment Activity. II: Business Cycles in the United States of America, 1919-1932. Geneva: League of Nations.
Tinbergen, J. 1940. On a method of statistical business cycle research. A reply. Economic Journal 50, 141-54.

Over the summer, I spent ten weeks as an intern with the Overseas Development Institute (ODI) in London, working in their Research and Policy in Development (RAPID) department. The department, as the name suggests, examines the links between research and policy, asking questions such as how to promote the uptake of research in policy, how to empower policymakers to request the research they need, and how to better shape research in order to facilitate uptake. One of the fields to which RAPID has made a significant contribution is that of evidence-based policy. Recently the Department for International Development (DFID) published a How To Note: Assessing the Strength of Evidence (1). In conjunction, DFID has been pushing for policy briefs that are visual and short, in the interest of communicating a more digestible message to policymakers. In theory, pushing for simplification is an obvious and positive step. However, the How To Note has generated heated debate among researchers who are concerned that the rigour of their research, and the nuance of their message, are threatened. Among other concerns is an uneasiness that assessing studies with different designs, which ask different questions and explore different contexts, and reducing the assessment to a score of high, medium or low strength, demands such oversimplification that the true meaning of the individual studies is often lost.
Researchers are forced to expose themselves to possible criticism and conflict by making such judgements, the end result of which is more a caricature of the research. As a statistician, one is constantly attempting to strike a balance between oversimplifying the picture and emphasising nuance to the point where no picture can be discerned. My immediate inclination, therefore, was to suggest that by gathering more systematic meta-data on the studies included in a systematic review, one could create visual summaries of the information. The researcher could then carefully calibrate the amount of information retained. The result should be visually accessible to policymakers, whilst ensuring that an appropriate amount of information is retained. This became, to some degree, my research project for the summer. I worked on it in preparation for a workshop on an implementation framework for the How To Note, to be chaired by Louise Shaxson of the ODI (my supervisor and a research fellow at RAPID). The workshop is an attempt on the part of DFID to consult with researchers. I gathered a sample of studies, previously reviewed by ODI staff, from the literature on land rights and their effects on rural households. I created a database of meta-data on those studies, and I experimented with certain techniques for visual summary. Along the way, I summarised the literature on conducting systematic reviews and recorded any thoughts I had on the process, what I struggled with, and where I found possible solutions. I found the process rewarding. I showed that maps can be used to display the geographic clustering of studies and thereby identify potential bias, and that network models can reflect the interactions of the research papers by modelling cross-citations, thereby showing the influence that individual studies have on the body of literature (a sketch of this idea follows below). I also experimented with using multiple correspondence analysis (MCA) to reflect patterns in the results, in order to pick up, for example, where research design unduly influences the outcomes of a study. This proved problematic: I ran the technique and created the visuals, but I was not able to demonstrate that interesting patterns could be discerned. I was convinced that quantitative, qualitative and mixed-methods studies could be evaluated in the same database. And yet, as I tailored my database to make sense for one study, it would stubbornly become inappropriate for capturing information about another. This leads me to the key point I want to highlight in this article: where the group of studies is diverse, and the policy outcome to which the synthesis is to contribute is unknown, creating a global template for collecting meta-data using categorical variables becomes impossible. For a synthesis to be valuable, the conclusions of the papers need to be reflected; questions such as whether a relationship was recorded and, if so, how strong and in what direction, must be addressed. But how does one create categories of outcomes when each paper asks a slightly different question, searches for slightly different relationships, or evaluates more than one question, and when some papers ask what the relationship is while others ask why it occurs?
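As an illustration of the cross-citation network idea mentioned above, the short sketch below builds a toy citation graph and ranks studies by how often they are cited within the sample. The study names, the edge list and the use of the networkx library are my own illustrative assumptions, not the actual ODI database or code:

```python
# A minimal sketch of a cross-citation network for a systematic review.
# Study identifiers and citation links are hypothetical placeholders.
import networkx as nx

# A directed edge (a, b) means "study a cites study b".
citations = [
    ("StudyA", "StudyC"), ("StudyB", "StudyC"),
    ("StudyD", "StudyC"), ("StudyD", "StudyA"),
    ("StudyE", "StudyB"),
]
g = nx.DiGraph(citations)

# In-degree counts how often each study is cited within the sample,
# a rough proxy for its influence on the body of literature.
for study, times_cited in sorted(g.in_degree, key=lambda p: -p[1]):
    print(f"{study}: cited {times_cited} time(s) within the sample")

# nx.draw(g, with_labels=True) would render the network itself,
# the kind of visual summary discussed in the text.
```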
In a recent discussion with Keith Coleman, a public sector strategy consultant, it became clear to me that the problem arises when researchers are unclear about the purpose of the research, a situation that tends to occur because policymakers themselves are unsure about their logical framework. Before evidence is gathered on a certain topic, policymakers need to know what outcome they wish to achieve and how the research is going to influence the achievement of that outcome. Of course, this is easier said than done in a world of complex, emergent and chaotic policy issues (2). If policymakers could frame the question clearly enough, however, then social scientists should be able to be more selective in their choice of studies, compare papers with diverse research designs, and yet evaluate them all according to the same criteria. The result would be a framework that would, without distortion, facilitate assessment according to common categories and variables in a database. A strategic outlook on the part of the person commissioning the research is a necessary, if not sufficient, condition for a high-quality output from a systematic review. There is scope for improving a researcher's techniques for synthesis and communication, but without an understanding of the research user's intent, a truly convincing and clear summary will never be achieved.

(1) Available for download at https://www.gov.uk.
(2) For more on this, see Shaxson, L. 2011. Why wicked issues need more than Wikis. Seminar at the Centre for International Governance Innovation, Waterloo, Canada.

Multiproduct retailing and consumer shopping behaviour, Jorge Florez-Acosta, PhD student at TSE (1/29/2014)

The retail sector has become of great importance because of what it represents for a country's economic performance[1], its central role in relations with upstream and downstream markets, and the impact that anticompetitive practices in it can have on social welfare. Multiproduct by nature, the retail sector is characterized, on the one hand, by a complex configuration stemming from the multiplicity of forms, formats, products, and pricing and advertising strategies and, on the other hand, by its concentration, with a few large supermarket chains competing with small downtown shops. Whereas the former supply a wide range of products along with a variety of additional services (such as gas stations, shopping malls, restaurants and entertainment areas for kids), the latter generally offer narrower product lines but constitute an alternative for consumers, either through higher-quality, specialized offerings in specific categories of products (wine shops, vegetable markets, butchers, etc.) or because prices are considerably lower (hard-discount stores, for instance). For all these reasons, this sector has received central attention from policymakers and economic researchers. A wide variety of topics has been studied (retailer competition, vertical relations, consumer product and supermarket choice, switching, search and shopping costs, the effects of new product introduction, the economics of private labels, etc.), from both theoretical and empirical perspectives, in the marketing and economics literature. Yet many interesting questions remain to be answered. That is why I decided to concentrate my research in this area. One interesting and extensively studied topic is the existence of so-called private labels, or store own-brands.
Private labels (hereafter PL) are retailers' own-branded products, supplied exclusively in their stores and produced by a separate manufacturer. They are produced to be quality-equivalent to the regular manufacturer brands usually distributed nationwide (also known as national brands, NB). However, consumers often perceive them as being of lower quality, which may explain why they are offered at lower prices on average. Evidence shows that PLs are supplied at a price around 20% lower, on average, than a quality-equivalent NB (Berges-Sennou et al., 2009). In France, a way of promoting PL demand common to the different retail chains seems to be loyalty programs. Consumers subscribing to the program get a loyalty card (carte de fidélité), which they show every time they pass the checkout. This entitles them to permanent rebates on selected PLs and to other special promotions. My first paper is motivated precisely by this observation: profit-maximizing retailers granting additional rebates on their lower-priced own brands. To shed light on this, I empirically examine the effects of loyalty programs on PL demand. Loyalty programs (hereafter LP) are present in almost all retail markets. Most work the same way: a member who purchases today gets a reward to be used the next time she returns to the store (or after she crosses some threshold). Previous researchers have provided several explanations for why retailers offer such costly programs. They can be summarized in two ways: 1) consumer retention, as members are more likely to come back when there is a promised price reduction; and 2) the exercise of market power; in particular, LPs can be used as an explicit discriminatory device, since customers must subscribe to the LP to enjoy the benefits. As mentioned above, boosting the demand for a specific product or category of products seems to be another motive. An interesting feature is that loyalty rebates are lagged, i.e. the discounts announced today on some products are accumulated as euros or "miles" in the customer's account, and once a given time or money threshold is crossed, the accumulated amount is returned to the customer as a purchase coupon to be spent in any of the retailer's stores.[2] Theory shows that even when rebates are lagged to a subsequent period, consumers perceive current prices as lower. Additionally, theory predicts that firms are able to raise prices and earn higher overall profits when they introduce loyalty programs. I estimate brand-level demand using discrete-choice methods, taking into account household membership of loyalty programs (a stylized sketch of this kind of model follows below). I use a three-dimensional panel of quantities and prices for up to 13 brands of plain yogurt, purchased from the 6 largest supermarket chains in up to 94 departments of France, weekly over 2006. I also observe some demographic characteristics, including household membership of supermarket LPs. In addition to the well-documented challenges of demand estimation, such as the endogeneity of prices and the dimensionality problems implied by the large number of brands, I also deal with the correlation between membership of a supermarket's LP and unobserved supermarket attributes. The results confirm that private labels are, on average, valued less than NBs. However, I find that the marginal valuation of PL products increases with subscription to the supermarket's LP, which confirms the belief that LPs serve as a way to boost store-brand demand.
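The paper's own specification is not reproduced here; the snippet below is only a stylized sketch of how a multinomial logit demand model can let loyalty-programme membership shift the valuation of a private label. All brand names, prices and parameter values are hypothetical:

```python
# A stylized multinomial logit: loyalty-programme (LP) membership
# shifts the mean valuation of the private label (PL).
# All parameter values, prices and brands are hypothetical.
import numpy as np

prices = np.array([1.20, 1.00, 0.80])      # NB1, NB2, PL (euros)
is_pl = np.array([0.0, 0.0, 1.0])          # private-label indicator
alpha, beta_pl, beta_lp = -2.0, -0.5, 0.7  # price, PL and LP-membership effects

def choice_probs(lp_member: float) -> np.ndarray:
    """Logit purchase probabilities over the three brands plus an
    outside option whose utility is normalised to zero."""
    v = alpha * prices + (beta_pl + beta_lp * lp_member) * is_pl
    expv = np.exp(v)
    return expv / (1.0 + expv.sum())

print("non-member:", choice_probs(0.0).round(3))
print("LP member: ", choice_probs(1.0).round(3))
```

With these toy parameters, the private label's purchase probability rises from about 9% for non-members to about 17% for members, the qualitative pattern described above.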
Moreover, when customers subscribe to the separate LPs of competing retailers, the expected effects are weaker: the marginal valuation of PL products decreases with the number of subscriptions, and customers become more sensitive to price changes.

Another explanation for retailers' efforts to make customers loyal is to keep them from multi-stop shopping and have them concentrate all their purchases with a single retailer instead. This is precisely the focus of my second (ongoing) research project, joint with Daniel Herrera, also a PhD student at TSE. People with similar characteristics may have quite different shopping preferences. On the one hand, some customers like to concentrate their purchases with only one retailer and take advantage of being loyal: benefiting from loyalty rewards, saving time, and gaining experience from patronizing a particular retailer. On the other hand, others prefer visiting multiple separate suppliers, motivated by reasons such as getting the best deals, the existence of differentiated product lines, or some unanticipated event (a dinner party requiring particular ingredients they do not regularly purchase, or running out of some staple, say milk, sooner than expected). We say that two consumers have heterogeneous shopping patterns when they visit a different number of retailers within the same shopping period. Thus, a consumer who goes to only one retailer within, say, a week is a one-stop shopper, and a consumer visiting several separate suppliers within the same week is a multi-stop shopper. This heterogeneity in consumers' shopping patterns may be determined by several factors, such as preferences, demographics (income, age, household size, location, etc.), information frictions (search costs), retailer differentiation, and the time available for shopping (the opportunity cost of time). Theory has introduced a concept that summarizes most of them: shopping costs, defined as all of a consumer's real or perceived costs of using additional suppliers (Klemperer, 1992). More precisely, according to Chen and Rey (2012a), shopping costs reflect "the opportunity cost of time spent in traffic, parking, selecting products, checking out, and so forth". Otherwise stated, the shopping cost is the opportunity cost of the time spent shopping (a simple formal statement is sketched below). It is an empirical fact that similar consumers may have different shopping patterns. In France, for example, around 85% of households engaged in multi-stop grocery shopping in 2005.[3] One could argue that this heterogeneity responds mainly to individual preferences, in which case the answer to any question about it would be simple: similar consumers do different things just because they have different tastes for shopping. However, shopping costs may also account for the consumer's taste for shopping. In fact, a common feature in the literature is that heterogeneous shopping patterns will always exist as long as there is a mix of consumers with heterogeneous shopping costs and retailers with differentiated product lines or specializations.
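In reduced form, the trade-off just described can be stated as a simple participation condition; the notation below is mine, for illustration, and is not taken from Klemperer (1992) or Chen and Rey (2012a):

```latex
% A consumer already patronizing store 1 also visits store 2 only if the
% incremental gain from the second store (better prices, wider variety),
% v_2, exceeds the shopping cost s of using an additional supplier:
\text{visit store 2} \iff v_2 - s > 0 ,
% so consumers with low s become multi-stop shoppers, while otherwise
% identical consumers with high s remain one-stop shoppers.
```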
Economic theory has shown that, in the presence of shopping costs, some features that would otherwise be considered good from a social welfare perspective can have adverse effects; the introduction of a new product variety, firms competing head-to-head by producing close substitutes, and bans on below-cost pricing of competitive products are some examples. In fact, according to Klemperer (1992), in a competitive setting with differentiated product lines, retailers may be tempted to undercut prices in order to turn one-stop shoppers into multi-stop shoppers who patronize different retailers. If instead competition is head-to-head, with firms selling products as homogeneous as possible, customers will stay with only one retailer, because the benefit of visiting an additional retailer will not compensate for the shopping costs; as a consequence, competition is reduced and prices are higher. Klemperer and Padilla (1997) show that in the presence of shopping costs the introduction of a new product can cause a decrease in rivals' profits, generating the so-called "indirect business-stealing" effect. A firm offering more varieties than its rivals is attractive to consumers, since they can get all the common and new products through a single retailer, rather than staying with a rival who offers only one product, or doing multi-stop shopping and paying the extra cost of patronizing several retailers. As a consequence, the retailer introducing a new variety will sell more of all the other products in its range, making rivals' profits decrease. This leads to the introduction of too many varieties relative to the socially optimal number. Chen and Rey (2012a, b) show that retailers can exploit the fact that some consumers do like to be multi-stop shoppers in order to price discriminate. In this sense, they never want to push competitors out of the market, but rather to keep them in, so as to extract rents from multi-stop customers by adopting loss-leading strategies (when competing with smaller, specialized rivals) or cross-subsidization strategies (when competing against similar rivals). All these striking theoretical findings, and the lack of empirical literature on the topic, motivated our interest in the structural identification of consumer shopping costs, with the primary objective of giving empirical support to the widespread assumption that differences in shopping costs explain the heterogeneity in consumer shopping patterns, and the secondary objective of obtaining an empirical tool that allows us to test some theoretical predictions and policy conclusions. In the future, I hope to continue developing a research agenda on retailing and consumer shopping behaviour, in particular on topics such as information frictions, price advertising and consumer search costs.

[1] According to the European Commission (2010), the retail sector represents 4.2% of European Union GDP and employs around 17.4 million workers, among other indicators; food expenditure, moreover, represents around 13% of the average European household's budget.
[2] Some programs, such as Casino's, work slightly differently: they give customers points ("miles") according to a predetermined exchange rate, and members pick a gift from a catalogue according to the accumulated number of miles.
[3] Source: TNS Worldpanel by TNS-Sofres (Kantar).

References

Berges-Sennou, Fabian, Bontems, Philippe and Réquillart, Vincent (2009), "L'impact économique du développement des marques de distributeurs" [The economic impact of the development of private labels], working paper, Toulouse School of Economics, 30 p.
Chen, Zhijun and Rey, Patrick (2012a), "Loss-leading as an exploitative practice", The American Economic Review, Vol. 102, No. 7, pp. 3462-3482.
Chen, Zhijun and Rey, Patrick (2012b), "Competitive cross-subsidization", unpublished working paper.
European Commission (2010), Retail market monitoring report, http://ec.europa.eu/internal_market/retail/docs/monitoring_report_en.pdf.
Klemperer, Paul (1992), "Equilibrium product lines: competing head-to-head may be less competitive", The American Economic Review, Vol. 82, No. 4, pp. 740-755.
Klemperer, Paul and Padilla, Jorge (1997), "Do firms' product lines include too many varieties?", RAND Journal of Economics, Vol. 28, No. 3, pp. 472-488.

To live is to take risks: danger is all around us, and the most we can do is slightly reduce our exposure. Certain actions, such as buckling your seat belt when driving or drinking only bottled water, do in fact reduce the risk of incurring mortal injuries or contracting painful diseases; however, even such simple measures are not costless. Is a morning coffee worth the risk of hot-water burns, or the risk of spills on the laptop? Every moment of every day, we make decisions that trade risks against some benefit we gain by accepting them. In my thesis, I attempt to deepen our understanding of how people value certain risks, and to shed light on how government policies might nudge individuals towards better choices. If we could and wanted to protect ourselves against identified risks, we would need to incur costs. Unfortunately, we have limited resources; we therefore need a system that allows us to minimize the expected impact of those risks at the lowest cost. Comparing diverse risk-reduction strategies requires a common metric. One popular approach is to use a monetary metric, which facilitates the comparison between costs and benefits. Costs are usually expressed in monetary terms and are relatively easy to measure; generally they are computed using process-based calculations. Benefits are more problematic. In many cases, population-level risk reduction amounts to probabilistically saving the lives of unidentified individuals. How much does society value such a thing? The conventional metric for valuing a reduction in mortality risk in monetary form is the Value of a Statistical Life (VSL). Although often misinterpreted in this way, the VSL does not purport to represent the value of an identified individual's life. On the contrary, it is a measure of how much society is willing to pay to reduce a diffuse but possibly mortal risk. It would be valuable, from a policy perspective, to know exactly how much an individual is willing to pay to reduce a mortal risk to him- or herself or to others; unfortunately, we hardly ever do. It is therefore necessary to try to extract this information from the decisions that people make. There are two main ways this is done in practice: through revealed or stated preferences. Neither approach is perfect, but both are powerful tools that allow researchers to estimate true preferences. On the one hand, the main advantage of the revealed-preference approach is that it is based on what consumers actually choose. Unfortunately, revealed-preference measures face the critique that the effect being captured may be confounded with other effects, compromising identification of the willingness to pay (WTP) to reduce risk. On the other hand, stated preferences overcome the identification issue by controlling the decision-making environment; but stated preferences are just that, stated: we do not know whether respondents would really behave as they say. Nevertheless, under the right circumstances, these tools can be valuable aids for evaluating or designing policies. VSL models assume that people make decisions based on their individual preferences.
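To fix ideas about the VSL metric introduced above, here is the standard back-of-the-envelope arithmetic; the numbers are purely illustrative and not taken from the thesis:

```latex
% If each of 100,000 people is willing to pay 30 euros for a policy that
% lowers everyone's risk of death by 1/100,000 (one statistical life
% saved in total), then
\mathrm{VSL} \;=\; \frac{\text{individual WTP}}{\Delta p}
            \;=\; \frac{30\ \text{euros}}{1/100{,}000}
            \;=\; 3\ \text{million euros}.
```

Equivalently, the group collectively pays 3 million euros to avoid one expected death, which is why the VSL prices a statistical, not an identified, life.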
What about the risks that we take for others? Empirically, individuals tend to be willing to pay more for risk reductions relating to children. Evidence of this can be found in the United States' Food Quality Protection Act of 1996, which requires an additional tenfold margin of safety for children to ensure that they face no risks from pesticide residue in food (Dockins et al., 2002). Why is this the case? First, while individuals tend to prefer risks that are voluntary (Slovic, 1987), children are generally perceived as involuntary participants in risky activities. Second, there is ambiguity about the lifetime health risks faced by children, particularly for new or modern threats; theoretically, an increase in WTP for risk reduction could plausibly stem from ambiguity aversion (Alary et al., 2012). Finally, there is some evidence to suggest that age could affect WTP (Rowe et al., 1995). In general, we assume that parents have the right incentives to care for their children, and we therefore accept parents' valuations of their children's health. Of course, this rests on the assumption that parents always have the information they need to take the right action. But do they? In my first paper, I use a revealed-preference approach to explore a particular risk-reducing action taken by mothers. In 2000, the French government, following a worldwide trend, began a health advisory policy urging French residents to improve their eating habits. The policy, which is still being implemented today, is commonly known as "Manger Bouger". Embedded within this larger policy, a smaller, lesser-known initiative (starting in 2005) targeted fetal neural tube defects (NTDs). NTDs are potentially deadly conditions which occur when the neural tube fails to close completely. While NTDs often lead to abortions, the condition is generally not terminal; the consequences for the child include paralysis or severe brain malformation. To reduce the risk of NTDs, mothers need to consume at least 400 micrograms of folic acid (also known as vitamin B9) daily, from two months before until two months after conception. This amount can be obtained either through naturally occurring folic acid or through supplement pills. Unfortunately, the NTD trend did not change after the policy: over the past decade, the yearly NTD prevalence has remained roughly 1 baby per 1,000. Does this mean that the policy did not have any effect? Using a highly detailed household-level purchase database, a quasi-experimental setting and state-of-the-art demand estimation techniques applied to the ready-to-eat breakfast cereal market, my research suggests that targeted women did in fact consume more folic acid after the policy was implemented. This increase was achieved through pill supplements. Regrettably, timing is everything: supplemental folic acid taken outside the narrow time window has no effect on fetal NTD risk. Although targeted individuals did consume more folic acid, it seems that they did not consume it at the appropriate moment. This is not surprising, since it is very hard to predict the timing of conception correctly. Is there anything else that can be done? Fortifying staple foods is a common practice in France; it is an inexpensive and effective process. Nearly all the baguettes consumed are fortified with some vitamins. However, vitamin B9 is not among them. This omission is due to the plausible secondary effects that B9 can have on individuals aged 50 and over.
There are epidemiological studies linking increased levels of folic acid to the proliferation of some types of cancer; other studies find an association with decreased cancer proliferation. The bottom line is that there is substantial uncertainty about the secondary effects of B9 in some segments of the population. To deal with this uncertainty, I construct a probabilistic model and evaluate the impact of a massive B9 fortification policy in France: an increase in folic acid intake of 400 micrograms or more by the entire population. After taking into account the effects on longevity, health and wealth for children and adults, I conclude that a fortification policy is advisable.

In the second chapter of my thesis, joint work with James Hammitt, we conducted an Internet-based survey, representative of the French population, to assess the WTP to reduce risks of fatal disease. The survey was designed to identify how WTP varies with the characteristics of the disease (cancer or other diseases, which organs are affected, etc.), with the latency between exposure and the manifestation of symptoms (1, 10 or 20 years), and with whether the person at risk is the adult respondent, a child, or another adult in the respondent's household. We use a latent-class estimation technique, along with paradata (data on how the survey data were collected), to identify those respondents who answered the survey correctly. What counts as answering a survey correctly? What is usually done in the literature is to check whether respondents are at least paying attention to characteristics that are hard to grasp. One of these is scope sensitivity: by convention, a survey passes the scope-sensitivity test if the stated WTP is nearly proportional to the risk reduction (see the note below). We find that the proportion of respondents paying attention to these hard characteristics varies between 20 and 40 per cent. The time spent filling in the survey heavily influences the quality of the answers: both too little and too much time have a negative effect. Moreover, the implied VSL with and without our estimation procedure differs substantially: results range from 6, 10 and 8 million euros per statistical life for respondents, children and other adults respectively under the standard technique, to 2.2, 2.6 and 1.8 million euros with the latent-class estimation. Although these results are still preliminary, they suggest that the differences in VSL we initially observed are partly explained by respondents answering the survey incorrectly.

Compared with the US, where the use of cost-benefit analysis to select the best projects is the norm, its use in France is still quite restricted. In part, this has limited the scope of WTP studies in France, particularly regarding WTP for child risks. The French government has clear intentions to begin using cost-benefit analysis more frequently. This development highlights the need to investigate whether established policies are working, as well as to develop reliable French VSL estimates for ex-ante (or ex-post) project evaluations.
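A note on the scope-sensitivity test mentioned above: the criterion is standard in the stated-preference literature, and the numbers in the comment are purely illustrative:

```latex
% Scope sensitivity: stated WTP should scale roughly in proportion to
% the size of the risk reduction on offer,
\frac{\mathrm{WTP}(k\,\Delta p)}{\mathrm{WTP}(\Delta p)} \approx k ,
% so a respondent who reports the same WTP for a 2-in-10,000 and a
% 1-in-10,000 risk reduction fails the test.
```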