I have two GeoTIFF images that will serve as the training-class datasets to be fed to a supervised classifier. Unlike this classification example, where the geographic regions are extracted from a fusion table and then overlaid on the image, I don't have a fusion table, so I want to use an image's values directly as a training dataset. My question is: how can I transform an Image object into an array (or into a FeatureCollection, if I follow the example linked above)?
Edit:
I've managed to convert my two training rasters into KML vectors, and from there I've uploaded them to Google Fusion Tables (see class1 and class2 in the code below). The two raster images to train and classify (image1 and image2 in the source code) were uploaded as assets; the first one is used to train the classifier, while the second one is the image to be classified. The problem with the code below is that every pixel in the second image ends up in the same class (class = 0):
// load vector masks from fusion tables
var class1 = ee.FeatureCollection('ft:xxx');
var class2 = ee.FeatureCollection('ft:yyy');
var classes = class1.merge(class2);
// load training image and the image to classify
var image1 = ee.Image('aaa').select('b1');
var image2 = ee.Image('bbb').select('b1');
// train the classifier
var training = image1.sampleRegions({
  collection: classes,
  properties: ['class'],
  scale: 1
});
var classifier = ee.Classifier.svm().train(training, 'class', ['b1']);
var classified_image2 = image2.classify(classifier);
// show classified image
Map.addLayer(classified_image2, {}, 'Classified image');
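One thing worth checking when everything comes back as class 0 (a suggested diagnostic, not a confirmed fix) is whether the sampled training set actually contains points from both classes:

```javascript
// Print how many training samples fall into each class. If one class
// is missing (for example because scale: 1 doesn't match the raster's
// resolution, or the 'class' property isn't set on the features),
// the classifier can only ever predict the class it has seen.
print('Class histogram:', training.aggregate_histogram('class'));
print('Training size:', training.size());
```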
Best Answer
Assuming you combine your two training images into one training image (in this combined image, each pixel belongs to one class, and classes are represented by consecutive integers starting from 0), you then add predictor bands and sample() the stacked image.
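A minimal sketch of that workflow. The names classImageB (a 0/1 mask of the second class; everything else is class 0), predictors, and image2 are placeholders for your own assets, and the scale and numPixels values are assumptions to adapt to your rasters:

```javascript
// Build a single class image: 0 everywhere, 1 where the second
// class mask is set. Each pixel now carries exactly one label.
var classImage = ee.Image(0)
    .where(classImageB.eq(1), 1)
    .rename('class');

// Stack the predictor band(s) with the class band.
var stacked = predictors.select('b1').addBands(classImage);

// Sample the stacked image; each sample becomes a Feature whose
// properties include both 'b1' and 'class'.
var training = stacked.sample({
  region: predictors.geometry(),
  scale: 30,       // match your raster's native resolution
  numPixels: 5000  // cap the number of training points
});

// Train on the sampled features and classify the second image.
var classifier = ee.Classifier.svm().train(training, 'class', ['b1']);
var classified = image2.classify(classifier);
Map.addLayer(classified, {min: 0, max: 1}, 'Classified');
```

This avoids the fusion-table round trip entirely: sample() turns the raster values themselves into the FeatureCollection the classifier expects.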