Google Earth Engine – Moving Image to Array for Sklearn

classification, google-earth-engine, google-earth-engine-python-api, machine-learning

I've produced a very large multiband image in EE with the goal of classifying it using the classifiers implemented in sklearn (the native classifiers in EE don't provide enough flexibility for my purposes). sklearn works on 2-D arrays, so at a minimum I would need to convert each band to a 2-D array and feed them in separately as explanatory variables. That's all fine.
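To make the target layout concrete, here is a minimal sketch assuming each band has already been read into a NumPy array; the band names and sizes below are placeholders, not real data:

import numpy as np

# Minimal sketch of the layout sklearn expects, assuming each band has already
# been read into a 2-D NumPy array of shape (rows, cols). These arrays are
# placeholders, not real data.
band_red = np.random.rand(100, 100)
band_nir = np.random.rand(100, 100)

# Stack bands along a third axis, then flatten to (n_pixels, n_bands),
# i.e. the 2-D (samples x features) matrix that sklearn classifiers take.
stacked = np.dstack([band_red, band_nir])   # shape (rows, cols, n_bands)
X = stacked.reshape(-1, stacked.shape[-1])  # shape (rows*cols, n_bands)
print(X.shape)                              # (10000, 2)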

Here's my problem: With a raster covering >150k km2, it is beyond tedious and cumbersome to Export.image.toDrive for each band, only to then re-import them into a Python environment using rasterio. Ideally there would be some way to convert EE image objects to sklearn-readable NumPy arrays directly using the EE Python API (Google seems to tease as much in its documentation touting the advantages of using EE in Colab: "Seamless integration with Python data science libraries").

Is there a straightforward way to do this that I'm missing?

Best Answer

Ideally there would be some way to convert EE image objects to sklearn-readable NumPy arrays directly using the EE Python API.

ee.Image.sampleRectangle() does this.

However, there is a limit of 262144 pixels that can be transferred. The interactive data transfer limit is in place to protect your system from hanging (it is easy to request terabytes of data without realizing it).

So in the case of a large area, your options are to export images to Google Drive or Google Cloud Storage and then import them into your Python environment (e.g. with rasterio). Using Google Colab makes this easy: EE is installed by default and there is integration with Google Drive and GCS. The Earth Engine batch task export methods are better equipped for dealing with large data (they break large exports into manageably sized GeoTIFFs).
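For reference, a minimal sketch of a batch export to Drive using the Python API; the image ID, folder name, region, and scale below are placeholders to substitute with your own:

import ee

ee.Initialize()

# Placeholder image and region; substitute your own multiband image and AOI.
img = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_038029_20180810') \
  .select(['B4', 'B5', 'B6'])
region = ee.Geometry.Rectangle([-111.0, 44.5, -110.0, 45.0])

# Batch export: large requests are split into multiple GeoTIFF tiles
# in the given Drive folder.
task = ee.batch.Export.image.toDrive(
    image=img,
    description='large_export',
    folder='ee_exports',
    fileNamePrefix='large_export',
    region=region,
    scale=30,
    maxPixels=1e13)
task.start()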

Even though ee.Image.sampleRectangle() may not be useful for your application, here is a demo in case it helps others.

The following Python script transfers three Landsat 8 bands for a rectangular region to the Python client, converts the EE arrays to NumPy arrays, stacks them into a 3-D array, and displays the result as an RGB image representation of the region.

IPython Notebook

import ee
import numpy as np
import matplotlib.pyplot as plt

ee.Authenticate()
ee.Initialize()


# Define an image.
img = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_038029_20180810') \
  .select(['B4', 'B5', 'B6'])

# Define an area of interest.
aoi = ee.Geometry.Polygon(
  [[[-110.8, 44.7],
    [-110.8, 44.6],
    [-110.6, 44.6],
    [-110.6, 44.7]]], None, False)

# Sample the AOI - returns a feature with a 2-D pixel array as a property per band.
band_arrs = img.sampleRectangle(region=aoi)

# Get individual band arrays.
band_arr_b4 = band_arrs.get('B4')
band_arr_b5 = band_arrs.get('B5')
band_arr_b6 = band_arrs.get('B6')

# Transfer the arrays from server to client and cast as np array.
np_arr_b4 = np.array(band_arr_b4.getInfo())
np_arr_b5 = np.array(band_arr_b5.getInfo())
np_arr_b6 = np.array(band_arr_b6.getInfo())
print(np_arr_b4.shape)
print(np_arr_b5.shape)
print(np_arr_b6.shape)

# Expand the dimensions of the images so they can be concatenated into 3-D.
np_arr_b4 = np.expand_dims(np_arr_b4, 2)
np_arr_b5 = np.expand_dims(np_arr_b5, 2)
np_arr_b6 = np.expand_dims(np_arr_b6, 2)
print(np_arr_b4.shape)
print(np_arr_b5.shape)
print(np_arr_b6.shape)

# Stack the individual bands to make a 3-D array.
rgb_img = np.concatenate((np_arr_b6, np_arr_b5, np_arr_b4), 2)
print(rgb_img.shape)

# Scale the data to [0, 255] to show as an RGB image.
rgb_img_test = (255*((rgb_img - 100)/3500)).astype('uint8')
plt.imshow(rgb_img_test)
plt.show()
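To connect this back to the question, here is a minimal sketch of handing the stacked array to an sklearn classifier (a random forest is used only as an example; the labels are random placeholders, whereas real training labels would come from sampled reference data):

from sklearn.ensemble import RandomForestClassifier

# Reshape (rows, cols, bands) into the (samples x features) matrix sklearn
# expects. The labels are random placeholders, not real training data.
n_rows, n_cols, n_bands = rgb_img.shape
X = rgb_img.reshape(-1, n_bands)              # (rows*cols, n_bands)
y = np.random.randint(0, 2, size=X.shape[0])  # placeholder class labels

clf = RandomForestClassifier(n_estimators=50)
clf.fit(X, y)
predicted = clf.predict(X).reshape(n_rows, n_cols)  # back to image dimensions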