I have an SRTM elevation map. How do I get the elevation of each point in the map? I am using GRASS. Is there a GRASS script to do this?
[GIS] How to get the elevation of the areas in a raster file from SRTM
Tags: elevation, grass, raster, srtm
Related Solutions
Data format
I'll take it as a little exercise in how to program a data reader. Have a look at the documentation:
SRTM data are distributed in two levels: SRTM1 (for the U.S. and its territories and possessions) with data sampled at one arc-second intervals in latitude and longitude, and SRTM3 (for the world) sampled at three arc-seconds.
Data are divided into one by one degree latitude and longitude tiles in "geographic" projection, which is to say a raster presentation with equal intervals of latitude and longitude in no projection at all but easy to manipulate and mosaic.
File names refer to the latitude and longitude of the lower left corner of the tile - e.g. N37W105 has its lower left corner at 37 degrees north latitude and 105 degrees west longitude. To be more exact, these coordinates refer to the geometric center of the lower left pixel, which in the case of SRTM3 data will be about 90 meters in extent.
Height files have the extension .HGT and are signed two byte integers. The bytes are in Motorola "big-endian" order with the most significant byte first, directly readable by systems such as Sun SPARC, Silicon Graphics and Macintosh computers using Power PC processors. DEC Alpha, most PCs and Macintosh computers built after 2006 use Intel ("little-endian") order so some byte-swapping may be necessary. Heights are in meters referenced to the WGS84/EGM96 geoid. Data voids are assigned the value -32768.
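The byte layout described above can be illustrated with Python's `struct` module (the sample bytes here are made up for demonstration):

```python
import struct

# Two raw bytes as they might appear in an .HGT file: big-endian, signed 16-bit.
raw = b"\x01\xf4"  # 0x01F4 == 500, i.e. an elevation of 500 m
val = struct.unpack(">h", raw)[0]  # ">h" = big-endian signed short
print(val)  # 500

# The same conversion without struct:
val2 = int.from_bytes(raw, byteorder="big", signed=True)
print(val2)  # 500

# The data-void value -32768 is stored as the bytes 0x80 0x00:
void = struct.unpack(">h", b"\x80\x00")[0]
print(void)  # -32768
```

On a little-endian machine (most PCs), reading the two bytes as a native short without the `>` modifier would silently swap them, which is the byte-swapping the documentation warns about.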
How to proceed
For your position, 50°24'58.888"N 14°55'11.377"E, you already found the correct tile, N50E14.hgt. Let's find out which pixel you are interested in. First latitude, 50°24'58.888"N:
24'58.888" = (24 * 60)" + 58.888" = 1498.888"
arc seconds. Divided by three and rounded to the closest integer gives a grid row of 500. The same calculation for longitude results in grid column 1104.
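The arc-second arithmetic above can be checked with a few lines of Python:

```python
# 50°24'58.888"N 14°55'11.377"E, expressed as arc seconds within the tile
lat_sec = 24 * 60 + 58.888   # 1498.888" north of the tile's lower edge
lon_sec = 55 * 60 + 11.377   # 3311.377" east of the tile's left edge

# SRTM3 samples every 3 arc seconds, so divide and round:
row_from_bottom = round(lat_sec / 3)  # 500
col_from_left = round(lon_sec / 3)    # 1104
print(row_from_bottom, col_from_left)  # 500 1104
```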
The quickstart documentation lacks information about how rows and columns are organized in the file, but in the full documentation it is stated that
The data are stored in row major order (all the data for row 1, followed by all the data for row 2, etc.)
The first row in the file is very likely the northernmost one, i.e. if we are interested in row 500 from the lower edge, we actually have to look at row
1201 - 500 = 701
from the start of the file. Our grid cell is number
(1201 * 700) + 1104 = 841804
from the start of the file (i.e. skip 700 rows, and in the 701st one take sample 1104). Two bytes per sample means we have to skip the first 1683606 bytes in the file and then read two bytes in order to get our grid cell. The data is big-endian, which means that you need to swap the two bytes on e.g. Intel platforms.
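The offset calculation spelled out above, as a small sanity check:

```python
SAMPLES = 1201   # samples per row in an SRTM3 tile

row = 701        # counted 1-based from the top (north) edge
col = 1104       # counted 1-based from the left (west) edge

cell = (row - 1) * SAMPLES + col  # skip 700 full rows, take sample 1104
print(cell)                       # 841804

offset = (cell - 1) * 2           # two bytes per sample
print(offset)                     # 1683606 bytes to skip before reading
```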
Sample program
A simplistic Python program to retrieve the right data would look like this (see the docs for use of the struct module):
import struct

def get_sample(filename, n, e):
    # n, e: arc seconds north/east of the tile's lower left corner
    i = 1201 - int(round(n / 3, 0))  # row, counted from the top (north) edge
    j = int(round(e / 3, 0))         # column, counted from the left (west) edge
    with open(filename, "rb") as f:
        f.seek(((i - 1) * 1201 + (j - 1)) * 2)  # go to the right spot,
        buf = f.read(2)                          # read two bytes and convert them:
        val = struct.unpack('>h', buf)[0]        # ">h" is a signed two byte integer
    if val != -32768:                            # -32768 is the not-a-valid-sample value
        return val
    else:
        return None

if __name__ == "__main__":
    n = 24 * 60 + 58.888
    e = 55 * 60 + 11.377
    tile = "N50E14.hgt"  # Or some magic to figure it out from position
    print(get_sample(tile, n, e))
Note that efficient data retrieval would have to look a little more sophisticated (e.g. not opening the file for each and every sample).
Alternatives
You could also use a program which can read the .hgt files out of the box. But that is boring.
This is by no means an answer to your specific problem but an explanation of elevation data and vertical accuracy. Also, this all wouldn't fit in a comment.
Only a select number (depending on the area of lidar coverage) of elevation control points are used when checking the vertical accuracy of lidar data. I wouldn't expect all elevation markers to match the elevation data, but 1-3 feet does seem like more error than would be expected. However, consider the USGS Base Specification absolute vertical accuracy standards associated with the various quality levels. This document is our bible when calibrating and classifying lidar point clouds, as well as generating derivatives such as DEMs for the USGS.
QL3 raster data has a minimum cell size of 2 m. If you are using a 3 m NED DEM, we can consider this a QL3 raster. The minimum vertical accuracy standards for a QL3 lidar dataset are an RMSEz of less than or equal to 20 cm (7.9 in), and an NVA at the 95th-percentile confidence level of less than or equal to 39.2 cm (15.4 in). This means you can be 95% confident that any lidar point is within 39.2 cm of its real-world elevation.
Also, consider the raster as a data structure. The raster aggregates points to a regular grid and represents that aggregate area (a 3 m cell in the NED case) as a single value, usually taken where the cell centroid falls on the TIN that was used to create the DEM. So if you are comparing a raster value to a single point, especially one on a slope, consider how these things introduce error into the data (remember, no data is without error). I am sure there are vertical accuracy standards for the SRTM data as well. The pros and cons of elevation data should be considered in any sort of analysis.
Best Answer
You can also query with r.what or export the map to an ASCII matrix with r.out.ascii.
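For example, inside a GRASS session with the SRTM tile imported as a raster named `srtm` (the map name and coordinates below are placeholders for your own data):

```shell
# Query the elevation at a single east,north coordinate pair
r.what map=srtm coordinates=14.9199,50.4164

# Export the whole map as an ASCII matrix of elevation values
r.out.ascii input=srtm output=srtm.asc
```

`r.what` prints one line per queried point with the cell value, and the ASCII export gives you every cell's elevation in a plain-text grid you can post-process however you like.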