Poverty is the common denominator for developing nations, and it is usually defined against an arbitrary poverty line: individuals or countries below it are considered ‘poor’, and those above it are not. But this isn’t the only way of measuring poverty, and it’s arguably far from the best. A team of researchers is now suggesting the use of machine learning to define what poverty really means in different contexts.
Researchers at Aston University argue that mainstream thinking regarding poverty is outdated, putting too much emphasis on subjective notions of basic needs and failing to capture the full complexity of how people use their incomes. That’s why they are calling for a new model, using computer algorithms instead.
There are three mainstream ways of looking at poverty. In the first approach, poverty is defined as a deficient standard of living, measured through insufficient consumption of essential commodities among low-income people who spend all of their income on those essentials. This was the approach used by 19th-century economist Ernst Engel, for instance.
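This consumption-based view can be sketched in a few lines of Python. The 95% cutoff and the numbers below are purely illustrative assumptions for this sketch, not figures from Engel or from the study:

```python
# Toy sketch of Engel's consumption-based view of poverty: households
# whose spending on essential commodities absorbs (nearly) all of their
# income are counted as poor. The 95% cutoff is a made-up illustrative
# threshold, not one taken from the study.

ESSENTIALS_SHARE_CUTOFF = 0.95  # hypothetical cutoff for "all income on essentials"

def essentials_share(income: float, essentials_spending: float) -> float:
    """Fraction of income spent on essential commodities."""
    return essentials_spending / income

def is_poor_engel(income: float, essentials_spending: float) -> bool:
    """Flag a household as poor when essentials absorb ~all of its income."""
    return essentials_share(income, essentials_spending) >= ESSENTIALS_SHARE_CUTOFF

print(is_poor_engel(1000.0, 980.0))   # essentials eat nearly all income
print(is_poor_engel(5000.0, 1500.0))  # plenty of income left over
```

Note that, unlike a poverty line, this classification depends on what a household actually consumes rather than on an externally imposed income threshold.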
The second, more commonly used approach depends on a poverty line that is determined independently and exogenously. This trend started in 1901 with sociologist Seebohm Rowntree, who defined the poor as individuals whose income falls below a poverty line: the level of income needed to cover basic needs.
But no matter how you go about it, arbitrariness is embedded in the definition of the poverty line. The U.S. poverty line is set at three times the money needed to buy a low-income diet plan outlined by the US Department of Agriculture, while the Indian poverty line is determined by assuming that a particular percentage of the urban population is poor.
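The US-style calculation the article describes is simple arithmetic, which makes the arbitrariness easy to see: the entire threshold hinges on one multiplier and one food-plan cost. A minimal sketch, using a hypothetical food-plan figure rather than actual USDA data:

```python
# Minimal sketch of the US-style poverty-line calculation described in
# the article: the income threshold is three times the cost of a
# low-income food plan. The $4,000/year food-plan cost below is a
# made-up illustrative figure, not USDA data.

FOOD_PLAN_MULTIPLIER = 3  # food assumed to be ~1/3 of a low-income budget

def poverty_line(annual_food_plan_cost: float) -> float:
    """Income threshold implied by a given annual food-plan cost."""
    return FOOD_PLAN_MULTIPLIER * annual_food_plan_cost

def is_poor(annual_income: float, annual_food_plan_cost: float) -> bool:
    """Classify a household as 'poor' if income falls below the line."""
    return annual_income < poverty_line(annual_food_plan_cost)

line = poverty_line(4_000.0)  # hypothetical $4,000/year food plan
print(line)                   # 12000.0
print(is_poor(10_000.0, 4_000.0))  # True
```

Change the multiplier or the food-plan cost and the count of the “poor” shifts accordingly, which is precisely the arbitrariness the researchers object to.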
There is also a third approach, multidimensional poverty, which has become accepted wisdom. However, it too requires multiple subjectively determined thresholds (poverty lines). And all three approaches share a deeper drawback: they don’t link poverty to the wider economic system that generates it, the researchers argue.
“No-one has ever used machine learning to decode multidimensional poverty before,” said lead researcher Dr. Amit Chattopadhyay of Aston University’s College of Engineering and Physical Sciences in a statement. “This completely changes the way people should look at poverty.”
In their study, Chattopadhyay and his team looked at 30 years’ worth of data from India and divided household expenditure into three broad categories: basic food (such as cereals), other food (including meat), and non-food (covering other spending such as housing and transport costs). The same breakdown, they say, can be applied to any country and social situation.
There’s a “push-and-pull” relationship between the three categories, the researchers argue: more spending in one means less is available for the others. Acknowledging this trade-off allows for a more holistic measure of poverty, one that can be adjusted to the circumstances of specific countries, as the team did with India.
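The push-and-pull constraint can be illustrated with a toy budget-share calculation. This is not the authors’ model, just a sketch of the underlying accounting identity: shares of a fixed budget must sum to one, so raising spending in one category necessarily squeezes the others. The starting shares and the 50% housing shock are hypothetical:

```python
# Toy illustration (not the study's actual model) of the "push-and-pull"
# between the three expenditure categories: budget shares must sum to 1,
# so pushing one category up pulls the others down.

def renormalize_shares(shares: dict[str, float]) -> dict[str, float]:
    """Rescale category spending so the shares sum to 1 again."""
    total = sum(shares.values())
    return {category: value / total for category, value in shares.items()}

# Hypothetical household budget shares:
budget = {"basic_food": 0.5, "other_food": 0.2, "non_food": 0.3}

# Suppose housing costs rise, pushing non-food spending up by 50%:
budget["non_food"] *= 1.5
budget = renormalize_shares(budget)

# Basic-food and other-food shares have been "pulled" down:
print({category: round(value, 3) for category, value in budget.items()})
```

In this toy example the non-food shock leaves less room for both food categories, which is the kind of interaction the researchers say a single poverty line cannot capture.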
Using datasets on incomes, assets, and commodity markets from the World Bank and other sources, Chattopadhyay and his team created a mathematical model. It was able not only to accurately reproduce past poverty levels in both India and the US, but also to predict future levels under certain economic assumptions.
The model reclassifies many people traditionally considered “poor” into a more practical “middle class,” accounting for the elasticity of supply and demand in the market. It can be used at a national level, for sub-regions, or even scaled down to a single city or neighborhood, depending on the available data.
“Current thinking on poverty is highly subjective, because ‘poverty’ will mean different things in different countries and regions,” said Chattopadhyay. “With this model, we finally have a multi-dimensional poverty index that reflects the real-world experience of people wherever they live and largely independent of the social class they are deemed to belong to.”
The study was published in the journal Nature Communications.