Month: September 2017

As Scaling Effects of Research Productivity Diminish, India Must Step up R&D Investment

[This article was published by The Wire on their website on Tuesday, Sep 27, 2017. The article was written in collaboration with Aniruddha Ghosh, a classmate from LSE. To read the article on The Wire website, click here]

[This article was also published by the LSE South Asia Centre on its blog]

Advanced economies may be facing declining returns on research effort. This makes it all the more imperative that the Modi government gets its house in order with regard to R&D.

An employee works inside a laboratory at Piramal's Research Centre in Mumbai, August 11, 2014. Credit: Reuters/Danish Siddiqui/Files

Demonetisation, the goods and services tax (GST), a structural slowdown, or some other unknown factor – the jury is still out on what caused India's Q1 GDP numbers for 2017 to slide. While shocks do impact GDP trajectories in the short term, it is the continuous generation of ideas and R&D that will help fuel long-term GDP growth.

In this regard, a new working paper from the National Bureau of Economic Research (NBER) – a leading economic research organisation based in the US – shows a worrying trend: sliding research productivity across all major industries in the US despite exponential increases in research effort. What does this mean? More and more research is required to maintain the same level of economic growth.

India, on the other hand, has recently been on a path of rising total factor productivity (TFP) growth. This makes it extremely important for India to step up its R&D spending and improve its R&D infrastructure in order to secure a sustainable long-run growth trajectory.

In a letter addressed to Isaac McPherson dated August 13, 1813, which eventually became famous in the world of the digerati, Thomas Jefferson, founding father and third president of the US, wrote:

“That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density at any point.”

For economists, Jefferson captures perhaps the most important feature that an idea possesses: its non-rivalrous nature. In public economics, a good is said to be non-rivalrous if one person's usage of the good does not diminish simultaneous usage by others. For example, consider the information contained in the blueprint of an iPhone. Once the display, processor type, recognition system and a host of other specifications are finalised, that blueprint is replicated across all the Apple factories. Replication of the blueprint in an Apple factory in Shenzhen, China does not prevent its counterpart in Bengaluru, India from using it. The idea embodied in the blueprint is thus non-rivalrous.

The property of non-rivalry ensures increasing returns to scale for an economy's production function. It is this non-rivalrous nature that places ideas at the centre of the theory of economic growth. Among many other things, inventions in medicine, telecommunications, space science and the digital space all represent ideas that have been, in part, responsible for economic growth over the last two centuries. Quantitatively, US per capita income today is nearly 17 times its 1870 level. During 1870-2017, US GDP per capita has grown at a healthy average rate of nearly 2% per year. Moreover, this income growth has brought about an even greater transformation in living standards, as reflected by quality of life indicators vis-a-vis their levels two centuries ago.

The role of ideas in economic growth has long intrigued economists. It was in the 1980s and 1990s, however, that the literature on ideas and economic growth gained traction with a series of writings by Paul Romer, the current chief economist at the World Bank. In a series of papers, he argued that public R&D spending and subsidies could positively influence the long-run rate of economic growth. Two results of his work are central to our understanding.

Firstly, the rate of economic growth is determined by the stock of human capital and knowledge the economy possesses. Secondly, public R&D is crucial to the development of ideas, as private markets generate too little R&D in market equilibrium owing to the non-rivalrous nature of ideas. To sum up, public provision of R&D is a crucial positive determinant of the rate of economic growth. To account for ideas and technological progress in growth models, economists usually look at TFP.

TFP captures the growth in output not accounted for by the growth in the inputs used in production. It is often treated as synonymous with improvements in the technological state of an economy and is an important concept in growth accounting.
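To make growth accounting concrete, here is a minimal sketch in R, assuming a Cobb-Douglas production function Y = A * K^alpha * L^(1-alpha) and made-up growth numbers; TFP growth is the Solow residual left over once input growth is netted out:

# Growth-accounting sketch (illustrative numbers, not actual data)
alpha <- 0.3      # assumed capital share
g_Y <- 0.040      # output growth, 4%
g_K <- 0.050      # capital growth, 5%
g_L <- 0.015      # labour growth, 1.5%
g_A <- g_Y - alpha * g_K - (1 - alpha) * g_L   # Solow residual
g_A               # 0.0145, i.e. TFP growth of about 1.45% per year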

The ‘scale effects’ of R&D spending

However, in his 1995 treatise ‘R&D Based Models of Economic Growth’, Charles I. Jones of Stanford University argued against the then prevailing theories predicting ‘scale effects’ of R&D spending. The ‘scale effects’ literature argues for a proportional relationship between the quantum of resources devoted to R&D and its effect on per capita output growth.

For example, if the number of engineers, researchers and scientists engaged in R&D is doubled, the per capita growth rate of output should also double. However, Jones's empirical analysis showed that the number of scientists engaged in US R&D had risen steadily from the 1950s to the 1990s while TFP growth had remained largely constant, and the two appeared uncorrelated. Not just in absolute numbers: even the share of labour engaged in R&D had tripled over the same period despite nearly constant TFP growth.


Figure 1: Scientists and engineers (S&Es) engaged in R&D and US TFP growth, 1950-1990. US TFP growth has stayed virtually constant while the number of S&Es employed has steadily increased. Source: 'R&D Based Models of Economic Growth' (1995, C.I. Jones)

Building on Jones's 1995 work, a new working paper at the NBER looks into the productivity of research effort, that is, how research effort translates into output growth. 'Are Ideas Getting Harder to Find?', authored by Nicholas Bloom and Michael Webb of Stanford University, John Van Reenen of the Massachusetts Institute of Technology and Jones himself, empirically estimates research productivity in the US, both at the micro (industry) level and for the aggregate US economy as a whole.

Their findings are worrisome: while research effort (proxied by the number of effective researchers) is rising at a healthy rate, research productivity is declining sharply, both at the aggregate level and in individual industries. Moreover, they find a secular decline in research productivity across all the industries and products they investigate, spanning the primary, manufacturing and services sectors (Figures 2 and 3 below show this).

The key implication, therefore, is that it takes more research effort today to produce the same level of output growth as it did yesterday. The quartet finds that aggregate research productivity for the US falls at an average rate of 5.3% every year, implying that research effort must double every 13 years or so (ln 2/0.053 ≈ 13) just to maintain the same level of economic growth.

These findings indicate that the US and other major economies would today need ever higher levels of research effort merely to sustain their economic growth. These claims may dampen the spirits of even the most optimistic observers of these countries' growth prospects. But it is important to place the authors' findings in context.


Figure 2: Aggregate data on growth and research effort. The idea output measure is TFP growth. During the period 1930-2000, TFP growth has been relatively stable or even declining, while the effective number of researchers has increased by a factor of more than 23. Source: Bloom, Jones et al. (2017)

Figure 3: Aggregate data on research productivity and research effort. Research productivity is the ratio of idea output, measured as TFP growth, to research effort. During the period 1930-2000, research productivity has fallen by a factor of 41 because of (a) stable/declining TFP growth and (b) rising research effort. Source: Bloom, Jones et al. (2017)

First, US real GDP was a little over $2 trillion in 1950 and has since grown to over $17 trillion. As a result, the same rate of economic growth today implies a much larger increase in income in absolute terms. That higher research effort is required to maintain the same rate of growth can therefore be explained by the large base from which the US economy now grows. In this sense, the paper reaffirms our understanding that as an economy grows, ever larger investments in R&D are needed to push it forward.

Second, what the authors measure is not, technically, the difficulty of finding new ideas but rather the onerous task of developing them further as fields mature. Third, the TFP residual is, famously, a measure of our ignorance. This ignorance covers many components, some wanted (new technical innovations, higher R&D output), others unwanted (measurement errors, model misspecification). Relying on TFP growth as the sole indicator of technological progress may therefore be misleading.

Fourth, continuous innovations in technology, artificial intelligence, biotechnology and the like may not have shown up in TFP yet but arguably have the potential to put an economy on an upward growth trajectory in the future. Present accounting standards for TFP may well fall short of capturing this forward-looking dimension of R&D. Unqualified conclusions from this paper are therefore not recommended.

Is India stagnating?

On the domestic front, India's public spending on research has been stagnant at around 0.8% of GDP for over a decade. As a major growing economy, we have, with some exceptions, been content to piggyback on the innovations of the advanced economies and apply them in an Indian setting. However, if the authors' findings hold, the advanced economies are facing a problem of declining returns to research.

As a result, it is important for India to seize the initiative, step up its investment in R&D and develop a robust R&D infrastructure if it wishes to maintain a sustainably high economic growth path. While the jury is still out on whether it was demonetisation or the introduction of GST that showed up in the dismal Q1 growth figure of 5.7%, we should not stray from the R&D investments that will ultimately lead us to a higher long-run growth trajectory. The Modi government's flagship schemes such as Impacting Research Innovation and Technology (IMPRINT) and the Uchhatar Avishkar Yojana are pivotal to India's R&D growth story.

The non-takeoff of the Vishwajeet scheme is certainly a setback to the research centres and universities that were seeking financial resources for research.

Figure 4: Total factor productivity levels of India and China relative to the United States. Note that these are levels, not growth rates. India's TFP level is below China's and roughly two-fifths of the US level. Source: Authors' calculations and Penn World Table 9.0

Figure 5: Total factor productivity growth of India, China and the United States. It is encouraging to note that India has been on a rising TFP path; R&D therefore merits even more serious attention. Source: Authors' calculations and The Conference Board

The present NDA government certainly needs to commit larger budgetary increases to the R&D sector if we want our researchers and scientists to do more high-end research. Moreover, this spending must crowd in private investment from industry. These increases are pivotal to raising India's TFP level: though Indian TFP growth has been on an upward trajectory in the last few years, India's TFP level continues to be only a fraction of the US level. (Figures 4 and 5 above illustrate this.)

As the nation celebrates the 86th birth anniversary of one of its greatest citizens, A.P.J. Abdul Kalam, on October 15, it is worth recollecting what he had to say to policymakers: "When grand plans for scientific and defence technologies are made, do the people in power think about the sacrifices the people in the laboratories and fields have to make?"

It is essential that India honours these words more generously.

 

Regression 101: Don't Forget to Add All the Ingredients to Bake the Cake (Omitted Variable Bias)

With the advent of new statistical software and tools, running a simple cross-sectional regression sounds quite easy. Right? Not really. Specifying an appropriate model is very important, and no tool apart from the econometrician's brain can help her do that. Speaking of brains, let me narrate an interesting anecdote.

Paul Broca was a renowned French neurologist, surgeon and anthropologist of the 19th century. He believed that brain weight correlates with intelligence. Based on autopsies he had conducted in Paris hospitals, he found that female brains weigh on average about 15% less than male brains, and concluded that men are more intelligent than women. Science has subsequently shown that his ideas about brain weight were not accurate (elephants' and blue whales' brains are several times heavier than human brains), but was his model even correctly specified in the first place?

What I am hinting at is the case of omitted variable bias. This is one of the most basic reasons why the results of a regression may not be reliable. The concept is simple enough: if the econometrician forgets to include a relevant explanatory variable in the model, the regression does not give reliable results. In other words, the coefficients one gets on the explanatory variables are biased/inconsistent and the model is said to have an endogeneity problem.

Allow me to demonstrate:

Suppose the real model is –

y = a + Bx + Cz + e

However, the econometrician forgets about z and instead runs –

y = a + Bx + e

And suppose z can be determined as a function of x in the form of:

z = v + Dx + u

Then what the model actually estimates is:

y = a + Bx + C(v + Dx + u) + e

Or:

y = (a+Cv) + (B +CD)x + (e+Cu)

As we can clearly see, the coefficient on x is biased: while the true coefficient is B, what the model throws up is (B + CD).
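To see this bias in action, here is a minimal simulation sketch in R (all coefficient values made up for illustration). We generate data from the true model and then run the regression with z omitted:

# Omitted variable bias, simulated (illustrative values)
set.seed(42)
n <- 10000
B <- 2; C <- 3; D <- 0.5              # true model: y = 1 + B*x + C*z + e
x <- rnorm(n)
z <- 0.5 + D * x + rnorm(n)           # z = v + D*x + u, so z moves with x
y <- 1 + B * x + C * z + rnorm(n)
coef(lm(y ~ x + z))                   # correct model: slope on x close to B = 2
coef(lm(y ~ x))                       # z omitted: slope close to B + C*D = 3.5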

How does this relate to Paul Broca? Well, the regression he ran was of the form:

Brain_Weight = a + B(Male) + n

Where Male was a dummy variable taking the value 1 if the observation belonged to a male and 0 if it belonged to a female. Here, B was the differential between male and female average brain weights, and he got a positive coefficient on it. However, the model suffered from omitted variable bias. For example, men tend to have more body mass than women. How would this affect the model? The new model now reads:

Brain_Weight = a + B(Male) + P(Body_Mass) + n

Using the earlier logic, we can conclude that the coefficient we get on Male is biased. So what exactly was the bias on B, the key coefficient, in Broca's specification?

E(B_hat) = B + P(Cov(Male, Body_Mass)/Var(Male))

The bias is most likely to be positive, as:

  • P is positive, since brain weight and body mass are positively correlated.
  • Males, on average, tend to have more body mass than females, so Cov(Male, Body_Mass) is positive.
  • Hence the bias term above is positive, and the model would overestimate B.

Similarly, we can argue that Paul Broca missed many more relevant variables, such as Age_at_Death (men tend to die earlier than women, and brain weight declines with age).

 

Is it always a problem?

Omitting a variable is not always a problem. Situations where it is not a problem include:

  • The omitted variable is uncorrelated with the included explanatory variables.
  • The omitted variable is irrelevant, i.e. it does not determine the dependent variable.
  • More generally, it depends on the underlying theoretical framework and the question the econometrician is trying to answer.

A practical example

I have used the BWGHT dataset from Wooldridge to illustrate an example. It contains the birth weight of each baby along with some explanatory variables. You can download the dataset here.

To begin with, I run a simple regression –

Ln(Weight_at_Birth) = a + B(Family_Income) + e

> # df holds the BWGHT data; lbwght = log(birth weight), lfaminc = log(family income)
> lm(df$lbwght ~ df$lfaminc)
Call:
lm(formula = df$lbwght ~ df$lfaminc)
Coefficients:
(Intercept)   df$lfaminc 
    4.69673      0.02061

This shows that a child's weight at birth is positively correlated with family income. Fair enough, but is there an omitted variable bias? I go on to include the fatheduc variable, which measures the father's years of education.

The new model becomes:

Ln(Weight_at_Birth) = a + B(Family_Income) + J(Father’s_Education) + e

Which way would you guess the bias goes? fatheduc is plausibly positively correlated with both the child's weight at birth and the family income. As a result, B should be biased upward in the first regression.

> # Reconstruction of the regressor matrix, consistent with the coefficient
> # names below (the original construction was not shown):
> x_more <- cbind(LnFamilyincome = df$lfaminc, Fatheredu = df$fatheduc)
> lm(df$lbwght ~ x_more)
Call:
lm(formula = df$lbwght ~ x_more)
Coefficients:
         (Intercept)  x_moreLnFamilyincome       x_moreFatheredu  
            4.672692              0.014738              0.003526 

And this is exactly what we find: the coefficient on family income has fallen, from 0.02061 to 0.014738. (I have deliberately not focused on the standard errors here.)

Practical Considerations

The practical issue with omitted variable bias is that the econometrician may not be aware that a relevant variable is being omitted, or the data for the omitted variable may simply not exist. For example, when testing the impact of education on wages, one may want to include a variable controlling for ability. In practice, it is very hard to get hold of such a variable.

There is no statistical test that can tell you whether your model is missing an external variable. The Ramsey RESET test (available in Stata, among other packages) does have some functionality for checking whether higher-order forms of already included variables have been omitted, but it does not account for external omitted variables. More on this here.
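For completeness, the RESET test is also available in R through the lmtest package; a quick sketch on the birth weight regression above (assuming lmtest is installed):

> # Ramsey RESET: tests whether powers of the fitted values add explanatory
> # power, i.e. whether the functional form is misspecified. It will NOT
> # detect an omitted external variable like fatheduc.
> library(lmtest)
> resettest(lm(df$lbwght ~ df$lfaminc), power = 2:3, type = "fitted")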

Correctly specifying a model is as much an art as a science. There are many more such issues that I would love to cover in the days to come. Until next time!

 

References:

  1. I must admit that I was introduced to Paul Broca's story during my econometrics lectures by Prof Vassilis Hajivassiliou at the LSE.
  2. BWGHT (2000), Wooldridge data sets.
    Available at: http://fmwww.bc.edu/ec-p/data/wooldridge/datasets.list.html
    (Accessed 15 September 2017)
  3. Gujarati D, Porter D and Gunasekar S (2009), Basic Econometrics, 5th ed, New Delhi: Tata McGraw Hill, pp. 495-498.
  4. Omitted Variable Tests.
    Available at: http://personal.rhul.ac.uk/uhte/006/ec5040/Omitted%20Variable%20Tests.pdf
    (Accessed 17 September 2017)
  5. Schreider E (1966), Brain weight correlations calculated from original results of Paul Broca, American Journal of Physical Anthropology.
    Available at: http://onlinelibrary.wiley.com/doi/10.1002/ajpa.1330250207/abstract
    (Accessed 16 September 2017)

Old Questions, New Answers: The Changing Narrative of the Great Recession

[This article was published by Mint on their website on Tuesday, Sep 12, 2017, at 1:08 PM IST. The article was written in collaboration with Aniruddha Ghosh, a classmate from LSE. To read the article on the Mint website, click here.]

On 15 September 2008, Lehman Brothers Holdings Inc., arguably the biggest victim of the US subprime-mortgage crisis, filed for bankruptcy. This seminal event greatly intensified the ongoing economic crisis in the US and led to the erosion of a record $10 trillion in market capitalization from global equity markets. With the ninth anniversary of that event round the corner, a new working paper at the National Bureau of Economic Research (NBER), a leading economic research organization based in the US, along with independent research by scholars at the Massachusetts Institute of Technology, is giving rise to a competing theory that challenges the broadly accepted explanation of the Great Recession.

“Like I said, don’t get emotional about real estate, Nash. They’re boxes. You listening? Big boxes, small boxes.” (From a scene in the 2014 Hollywood movie, 99 Homes).

Rick Carver, the devilish real estate operator played brilliantly by Michael Shannon, thunders these lines to his victim-cum-employee Dennis Nash. The movie is a crisp 112-minute description of what the mortgage crisis meant to people across the income spectrum. Turns out, the movie was prescient in its observations and is inspiring research that is challenging the established consensus on what drove the Great Recession.

The widely accepted consensus on what fuelled credit growth in the run-up to the Great Recession (2007-09) has centered on the findings of Atif Mian of Princeton University and Amir Sufi of Chicago Booth School of Business. Over a series of research works, they found that most of the growth in credit during the early-2000s boom was fuelled by subprime borrowing despite no noticeable increase in incomes, and that this was the primary trigger of the 2007-09 recession. While the illiquidity of banks, the fall of titans like Lehman Brothers and an uncertain public machinery may have contributed to the downturn, the evidence presented by Mian and Sufi and other notable works in this space suggests the slowdown was the outcome of a highly leveraged household sector unable to keep up with its debt obligations.

Peaking household debt levels and default rates in the run-up to the Great Recession

Source: http://bit.ly/2xVcGxC (Mian and Sufi 2016)

The workings of this debt-fuelled-recession channel, known as the "credit supply view" in economic parlance, can be summed up this way: from 2002 to 2005, there was a steady expansion in the supply of mortgage credit for home purchases to households that had previously been unable to obtain a mortgage owing to poor credit histories. The expansion in mortgage credit supply led to a rise in house prices and pushed existing homeowners to borrow aggressively against the rise in home equity values through cash-out refinancing and home equity loans.

This largely explains the substantial increase in the household debt to GDP ratio from 87.5% in Q1 2005 to 99.2% in Q1 2008. Much of this borrowing belonged to households in the bottom 80% of the credit score distribution; only borrowers at the top of the distribution, those with the best credit histories, were unresponsive. Much of the sharp rise in delinquencies and defaults, in turn, was driven by lower credit score individuals living in areas where the house price boom and bust was more severe.

In real terms, the counties that experienced a large increase in their debt-to-income ratio before the onset of the downturn were precisely the counties that showed the sharpest decline in durable consumption and the largest increase in unemployment. The top 10% leverage-growth counties experienced an increase in the household default rate of 12 percentage points; in contrast, the bottom 10% leverage-growth counties experienced an increase of only 3 percentage points. Counties with the highest leverage growth also saw a much sharper and more substantial increase in the unemployment rate.

Household Debt to GDP ratio for United States

Source: Federal Reserve Bank of St.Louis (http://bit.ly/2eQv1Ej)

The majority of credit defaults (by value) was driven by the bottom quintile (by credit score) of borrowers

Source: http://bit.ly/2wkxAVt

All this sounds familiar, right? But here is the twist. A new working paper at the NBER, 'Credit Growth and the Financial Crisis: A New Narrative', authored by Stefania Albanesi (University of Pittsburgh), Giacomo De Giorgi (Federal Reserve Bank of New York) and Jaromir Nosal (Boston College), along with the work of scholars like Antoinette Schoar (MIT Sloan School of Management), has produced evidence challenging the established explanation of the Great Recession. These researchers point to two key shortcomings of the earlier work by Mian and Sufi.

Firstly, the earlier approach concludes that the default crisis was driven mainly by low credit score, or subprime, borrowers. In contrast, Albanesi, De Giorgi and Nosal argue that low credit score individuals tend to be disproportionately young, so the earlier approach cannot distinguish whether the increased supply of credit to such borrowers came about because they were subprime or simply because younger people tend to demand more credit at that stage of their life cycle.

Secondly, the earlier approach ranks individuals by their credit scores from 1996 and 1997, about ten years before the financial meltdown. Albanesi and her co-authors argue that using these scores overstates defaults by subprime borrowers, since individuals with low credit scores in 1996 and 1997 would have improved their scores over the years through income and credit growth. Again, the argument is that many of the low scorers of 1996 and 1997 were simply young at the time and would be expected to have better credit scores in the years to come.

Credit score growth was highest in the subprime segment over the period 2001 to 2014

Source: http://bit.ly/2wdbK9X (Albanesi, De Giorgi, and Nosal)

To deal with these issues, they use an individual's credit score from two years prior, in line with industry practice. They also control for the fact that younger people are disproportionately represented in the lower credit score quartiles. After accounting for these two factors, the researchers find that credit growth in the years preceding the financial meltdown was concentrated among borrowers in the middle and at the top of the credit score distribution. This is corroborated by the research of Antoinette Schoar (MIT), who finds that middle-class borrowers with good credit scores were the main drivers of the housing default crisis.

To account for this puzzling phenomenon, the trio explores the role of "real estate investors" in this saga, identified as borrowers who hold two or more mortgages. They find that real estate investors played an important role in the rise in delinquencies in the middle-income segment. In fact, the authors note pointedly: "… We find that the rise in mortgage delinquencies is virtually exclusively accounted by real estate investors."

The delinquency share of investors is about 10% across all quartiles until mid-2006. At the onset of the crisis, there is a sharp rise in the investor share of delinquencies, and especially foreclosures, for borrowers in quartiles 2-4 of the credit score distribution. The trend is more dramatic for foreclosures.

Source: http://bit.ly/2wdbK9X (Albanesi, De Giorgi, and Nosal)

Moreover, Mian and Sufi had conducted a spatial analysis of the US showing that localities with the highest proportion of subprime borrowers were the ones that drove credit growth and default rates. Albanesi, De Giorgi and Nosal find the same results for such localities in their analysis. However, within these subprime localities, their finding is that borrowers with good credit scores were the ones who actually drove credit growth and defaults. Much of the concurrent research now cites Albanesi, De Giorgi and Nosal's work and argues along similar lines, challenging the consensus that has prevailed regarding the Great Recession.

At this point, researchers are actively reassessing the role of growth in the supply of subprime credit in the 2001-2006 housing boom and the 2007-2009 financial crisis. These new findings should inform policy discussions among stakeholders across the aisle. They also make it even more important for banks to be prudent with their lending; a rigorous evaluation of both subprime and prime borrowers is a key takeaway of the emerging research.

India's retail housing market does not pose any systemic risk, as per the RBI Financial Stability Report of June 2017. India's property market sales are expected to grow at a 14% compound annual rate over 2016-20 and 18% over 2020-25, according to Morgan Stanley research, comparable to China's 22% compound annual growth rate over 2004-15. Interestingly, according to media reports, about one-third of new home loans (in terms of units) in India's roughly Rs10 trillion mortgage market now originate from the low-cost housing segment. The government has started several schemes to meet the housing shortage of 18.7 million units across 681 cities. The efforts include the redevelopment of slums, the creation of low-cost housing via public-private partnerships, subsidies for low-income groups, and a credit-linked subsidy programme for home loans. Financial institutions sense an opportunity, as mortgage delinquency rates stand at less than 1.5%.

But as Robert Shiller (professor of economics at Yale) puts it, “Speculative bubbles do not end like a short story, novel or play. There is no final denouement that brings all the strands of a narrative into an impressive final conclusion. In the real world, we never know when the story is over.”

Therefore, prudence on the part of bankers, regulators, and all other stakeholders is strongly advisable.

The debate on the causes of the 2007-2009 financial crisis is far from over and our search for the right answers seems like a long but necessary journey.

How to Use Price Indices to Increase Your Pocket Money

[Back in my undergrad days, I came across a neat way to use price indices to increase my pocket money, and shared it as an article in the college's economics newsletter. I am posting it here in the hope that it helps some students today!]

How often have you heard that economics is unrealistic or completely useless in real life? I'll try to bust that myth by demonstrating a very cool tool that I observed and successfully used last year. This goes to the heart of every college student's problems: a lack of adequate pocket money.

We were learning about price indices in our macroeconomics class (SYBA). One day, we conducted a study on the mill workers of Mumbai and the conditions that led to their massive strikes in the seventies. Curiously, we observed that despite periodic increases in their wages (money income), their real income (income deflated to reflect inflation) had seen a steady decline over a decade, owing to a steep rise in the prices of their essential commodities. And since the mills were in no position either to increase wages or to invest in new technology, the workers were forced to go on strike!

I felt like a mill worker myself. Like any college student, I was dependent on a dole of pocket money from my parents to support myself: a weekly sum of Rs. 300. While the mill workers at least had their wages periodically increased, my pocket money had been frozen solid for the past two years. Heaven knows what my real income was! No wonder I couldn't afford to go out to the movies anymore.

I decided to take matters into my own hands. Of course, I couldn’t mutiny like the mill workers. I would use the power of economics instead!

I took a basket of my usual commodities, assigned weights according to my usage, and measured the price increase over the period for which my pocket money had been stagnant.

(Table 1) Price Index = Sum(IW)/Sum(W) = 5769/36 = 160.25, where I is each item's price relative and W its weight

Thus, inflation for my basket of commodities had been a whopping 60.25% while my money income remained stagnant! Next, I decided to check what my real income was:
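Real Income = (Money Income*100)/Price Index = (300*100)/160.25 = 187.2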

So my real income had fallen to a mere Rs. 187 a week (Table 2). Now that explained a lot!

Tables 1 & 2

I made a presentation to my parents, adding a simple calculation to show what my pocket money ought to be for me to enjoy the utility of a real income of Rs. 300.

MY = (300*160.25)/100 = 480.75
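For anyone who wants to replicate the mechanics, here is a minimal sketch in R. The basket below is hypothetical (my actual item-level numbers were in Table 1); rel holds each item's price relative (I) and wts its weight (W):

# Weighted price index and the pocket-money arithmetic (illustrative basket)
rel <- c(180, 150, 165, 155, 160)           # price relatives (I) of the items
wts <- c(10, 9, 8, 5, 4)                    # usage weights (W); sum(wts) is 36
price_index <- sum(rel * wts) / sum(wts)    # Sum(IW)/Sum(W)
real_income <- 300 * 100 / price_index      # pocket money deflated by the index
required_income <- 300 * price_index / 100  # money income restoring a real Rs. 300

With my actual basket, the index comes to 160.25, so real_income works out to roughly Rs. 187 and required_income to Rs. 480.75.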

My parents were mightily impressed with the presentation. So impressed, in fact, that they decided to increase my pocket money to Rs. 500. This was a cool Rs. 19.25 of extra income over my ask, a bonus for any rational economist. Thus, I successfully used economics to get an awesome two-thirds increase in my pocket money, and scored a small victory for economics too!

Are your pockets feeling empty? You know what to do.
