I have created a mosaic with gdal_merge.py using the following command:
gdal_merge.py -createonly -separate -co PHOTOMETRIC=RGB -o merged.tif LE07_L1TP_046030_20180810_20180810_01_RT_b3.tif LE07_L1TP_046030_20180810_20180810_01_RT_b2.tif LE07_L1TP_046030_20180810_20180810_01_RT_b1.tif
It seems to create the mosaic without a problem. However, when I run gdalwarp to produce a viewable image (.tif), I get an all-red image, even though I am merging the color bands. I am using this command:
gdalwarp --config GDAL_CACHEMAX 512 -wm 4096 -nomd LE07_L1TP_046030_20180810_20180810_01_RT_b3.tif LE07_L1TP_046030_20180810_20180810_01_RT_b2.tif LE07_L1TP_046030_20180810_20180810_01_RT_b1.tif merged.tif
The output of gdalinfo on merged.tif shows data only in the red band; the green and blue band values are all 0. Does anyone have any ideas? Also, I know gdal_merge.py can do all of this in one step, but a memory constraint prevents me from running it that way.
gdalinfo output:
Size is 9930, 9315
Coordinate System is:
PROJCS["Albers",
    GEOGCS["NAD83",
        DATUM["North_American_Datum_1983",
            SPHEROID["GRS 1980",6378137,298.2572221010042,
                AUTHORITY["EPSG","7019"]],
            AUTHORITY["EPSG","6269"]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433],
        AUTHORITY["EPSG","4269"]],
    PROJECTION["Albers_Conic_Equal_Area"],
    PARAMETER["standard_parallel_1",29.5],
    PARAMETER["standard_parallel_2",45.5],
    PARAMETER["latitude_of_center",23],
    PARAMETER["longitude_of_center",-96],
    PARAMETER["false_easting",0],
    PARAMETER["false_northing",0],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]]]
Origin = (-2336089.094018519900000,2701776.184199890100000)
Pixel Size = (30.000000000000000,-30.000000000000000)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left (-2336089.094, 2701776.184) (125d42'39.19"W, 43d59'36.72"N)
Lower Left (-2336089.094, 2422326.184) (124d40'18.81"W, 41d36'34.34"N)
Upper Right (-2038189.094, 2701776.184) (122d 7'21.46"W, 44d46' 3.96"N)
Lower Right (-2038189.094, 2422326.184) (121d11'45.22"W, 42d21'15.67"N)
Center (-2187139.094, 2562051.184) (123d25'35.42"W, 43d11'34.14"N)
Band 1 Block=9930x1 Type=Byte, ColorInterp=Red
  Computed Min/Max=0.000,255.000
  Minimum=0.000, Maximum=255.000, Mean=27.268, StdDev=43.782
  Metadata:
    STATISTICS_MAXIMUM=255
    STATISTICS_MEAN=27.267951614063
    STATISTICS_MINIMUM=0
    STATISTICS_STDDEV=43.781663272236
Band 2 Block=9930x1 Type=Byte, ColorInterp=Green
  Computed Min/Max=0.000,0.000
  Minimum=0.000, Maximum=0.000, Mean=0.000, StdDev=0.000
  Metadata:
    STATISTICS_MAXIMUM=0
    STATISTICS_MEAN=0
    STATISTICS_MINIMUM=0
    STATISTICS_STDDEV=0
Band 3 Block=9930x1 Type=Byte, ColorInterp=Blue
  Computed Min/Max=0.000,0.000
  Minimum=0.000, Maximum=0.000, Mean=0.000, StdDev=0.000
  Metadata:
    STATISTICS_MAXIMUM=0
    STATISTICS_MEAN=0
    STATISTICS_MINIMUM=0
    STATISTICS_STDDEV=0
Is it overwriting the red band each time? If so, is there a way I can select which color band each b*.tif goes to?
Best Answer
So, after much hair-pulling, I figured out a solution!

There are two ways I can think of to merge satellite image bands together. The first, and easiest, is a single call to gdal_merge.py -separate. If you are short on memory, though, there is another way. Here are the steps I used:

1) Merge the bands with the -createonly flag, which writes the output header without reading any pixel data.
2) Use gdalbuildvrt to create a VRT (an XML file) that references the bands.
3) Add the color interpretation for each band to the new .vrt file, by inserting the following lines into merged.vrt:

After band="1"> add <ColorInterp>Red</ColorInterp>
After band="2"> add <ColorInterp>Green</ColorInterp>
After band="3"> add <ColorInterp>Blue</ColorInterp>
4) Now use gdal_translate to convert the VRT to a TIFF; this step performs the actual merge.
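Step 3 can also be done programmatically rather than by hand-editing the XML. Here is a minimal sketch using only the Python standard library; the function name and the assumption that the bands appear in red/green/blue order (i.e. the order they were passed to gdalbuildvrt) are mine, not part of any GDAL API:

```python
# Sketch of step 3: add a <ColorInterp> child to each VRTRasterBand of a
# gdalbuildvrt-produced VRT. Assumes the bands were listed in R, G, B order.
import xml.etree.ElementTree as ET

def set_vrt_color_interp(vrt_xml, colors=("Red", "Green", "Blue")):
    """Return the VRT XML with a <ColorInterp> element added to each band."""
    root = ET.fromstring(vrt_xml)
    for band, color in zip(root.findall("VRTRasterBand"), colors):
        interp = ET.SubElement(band, "ColorInterp")  # appended as last child
        interp.text = color
    return ET.tostring(root, encoding="unicode")
```

Read merged.vrt, pass its contents through this function, write the result back, then run gdal_translate on it. Note this appends the element at the end of each band rather than right after the opening tag as described above; I am assuming GDAL locates <ColorInterp> by name rather than by position.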
Using gdal_merge.py without the -createonly switch would spike memory usage to 2 GB or more. The method described here keeps it around 100 MB, which is sufficient for my server.

A little more info: I did notice a difference in the gdalinfo -hist output between a .tif created with gdal_merge and one created with the method above. The gdal_merge tif has a ton of pixels in the first bucket, whereas the method above has the pixels spread out. I'm thinking the spread-out histogram is more accurate, so it might be better this way anyway.