Poverty Mapping Based on Satellite Imagery

Poverty represents one of the greatest limiting factors to human well-being and flourishing. Over 700 million people in the world, including over 350 million children, live in extreme poverty. However, identifying the communities with the most extreme poverty, and therefore in the greatest need of help, is not always easy. Reliable data on economic livelihoods remain scarce in the developing world, hampering efforts to study these outcomes and to design policies that improve them. Traditional approaches to poverty data collection, which rely on surveys and staff on the ground, are expensive, often infrequent, and not easily scalable. New, better methods of estimating poverty are therefore needed.

We recently collaborated with one of the largest child protection organizations in the world on a pilot study to build high-resolution poverty maps using satellite imagery and artificial intelligence. Our goal was to explore an approach that would ensure scalability and high spatial and temporal resolution, all while keeping the cost of data collection low.

Industry: Non-profit

Project Category: AI, data analytics, image processing

Lead Member: Prediqscience


Collecting data to estimate poverty levels in developing countries presents multiple challenges. Traditional approaches, such as censuses or household surveys, are costly, time-consuming, limited to a sample of the population, and often conducted infrequently.

Therefore, over the past few decades, researchers have begun to develop techniques for estimating poverty remotely. Initial work explored the potential of “nightlights” data: satellite photographs taken at night that capture light emitted from Earth’s surface. Ever since such imagery first became available in the early 1970s, it has been evident that wealthy regions tend to shine brightest. Recent studies have found a strong correlation between nightlight luminosity and traditional measures of economic productivity and growth (see Blumenstock for more details). However, nightlights data are less effective at differentiating between regions at the bottom end of the income distribution, where satellite images appear uniformly dark.

In our project with the client, we used an approach first proposed by Jean et al. (2016), combining nighttime and daytime satellite imagery to estimate poverty. Daytime imagery contains visible features - such as paved roads and metal roofs - that make it possible to differentiate between poor and ultra-poor regions. Following Jean and his colleagues, we applied deep learning algorithms to the daytime imagery to extract these features. We used a convolutional neural network to learn the relationship between daytime satellite images (which are rich in detail) and nighttime images (where bright areas are assumed to be wealthy). This approach allowed us to train the neural network to learn those features in the daytime imagery that are indicative of economic activity.
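
A minimal sketch of this step, assuming PyTorch: a small convolutional network is trained to predict a nightlight-intensity class (e.g., low/medium/high luminosity bins) from a daytime image tile, and its pooled activations are then reused as a feature vector for downstream poverty estimation. The architecture, tile size, and number of classes below are illustrative placeholders, not the project's actual configuration (which, following Jean et al., would start from a larger pretrained backbone):

```python
import torch
import torch.nn as nn

class NightlightNet(nn.Module):
    """Toy stand-in for the convolutional backbone used in transfer learning."""

    def __init__(self, n_classes=3):  # e.g., low / medium / high nightlight bins
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        # Training objective: predict the nightlight class of each daytime tile.
        return self.classifier(self.features(x).flatten(1))

    def extract_features(self, x):
        # After training, these activations are the image features fed into
        # the downstream economic-outcome regression.
        return self.features(x).flatten(1)

model = NightlightNet()
tiles = torch.randn(4, 3, 64, 64)      # batch of 4 synthetic RGB daytime tiles
logits = model(tiles)                  # per-tile nightlight-class logits
feats = model.extract_features(tiles)  # per-tile 32-dim feature vectors
```

In the full pipeline, the classifier head is only a training device; what carries over to poverty mapping is the learned feature extractor.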

Poverty mapping using night- and daytime satellite images (modified based on Jean et al. Science, 2016).

Results & Client Benefits

Initial work on the new satellite image-based poverty mapping demonstrates that the method can explain up to 75% of the variation in local-level economic outcomes using only publicly available data.
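
The final stage of such a pipeline - regressing survey-measured economic outcomes on the CNN-derived image features and reporting the share of variance explained - can be sketched as follows. The data here are synthetic stand-ins, and the ridge regression mirrors the general setup of Jean et al. (2016) rather than the project's exact model:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 survey clusters, each with a 32-dim image feature
# vector (in practice, the CNN features extracted from daytime tiles).
X = rng.normal(size=(500, 32))
true_w = rng.normal(size=32)
# Synthetic log consumption per cluster, driven by the features plus noise
# (in practice, cluster-level outcomes from household surveys such as LSMS).
y = X @ true_w + rng.normal(scale=1.0, size=500)

# Ridge regression with a cross-validated regularization strength, scored by
# out-of-sample R^2 - the "share of variation explained" quoted above.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2_scores.mean():.2f}")
```

Reporting cross-validated rather than in-sample R^2 is what makes the explained-variance figure a meaningful estimate of how the model would perform on clusters it has never seen.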

This project will allow our client to better serve their beneficiaries by providing an accurate map of the most vulnerable communities, including pockets of extreme poverty that are often “hidden” between wealthier neighborhoods. The method can also help the client expand their programs more efficiently into new countries where they do not yet operate, as it enables them to evaluate opportunities, identify priorities, and make the best use of available resources.

As an added benefit, the method also enables the estimation of other relevant parameters beyond poverty, such as human migration patterns, disaster localization and impact, distribution of natural resources (e.g., water reservoirs), and crop yields.

Technology used:

  • AI algorithms: Convolutional Neural Networks / Transfer Learning

  • Data sources: nighttime satellite images - National Oceanic and Atmospheric Administration’s National Geophysical Data Center (NOAA-NGDC); daytime satellite images - Google Static Maps API; local data (survey-based) - World Bank’s Living Standards Measurement Study (LSMS) surveys.

  • Implementation: Python / AWS