Intro:
What I want to do is:
- read in a las file
- crop the las around a specific point in space (probably using a radius)
- generate a mesh with cropped points using poisson
The PDAL pipeline below does not work. I've found two hacky workarounds but I don't understand why they work and would appreciate some help understanding why/how to fix the initial PDAL pipeline.
Code and Data:
Here is a link to my las file on my google drive: samplecrop.las (I tried to attach it to this post, but the file was too large, ~250 MB). This is a cropped sample of an indoor building scan.
This pipeline produces a bad .ply mesh result.
import json
import pdal

def generate_mesh(input_file, output_file, point):
    pipeline_json = {
        "pipeline": [
            {
                "type": "readers.las",
                "filename": input_file
            },
            {
                "type": "filters.crop",
                "point": point,
                "distance": 5
            },
            {
                "type": "filters.voxelcentroidnearestneighbor",
                "cell": 0.01
            },
            {
                "type": "filters.normal",
                "knn": 8,
                "viewpoint": point
            },
            {
                "type": "filters.poisson"
            },
            {
                "type": "writers.ply",
                "faces": True,
                "filename": output_file
            }
        ]
    }
    pipeline = pdal.Pipeline(json.dumps(pipeline_json))
    pipeline.execute()

point = "POINT(575412.3426281 4149027.48476677 12)"
generate_mesh("C:/Users/dre11620/Downloads/scrop/samplecrop.las",
              "C:/Users/dre11620/Downloads/scrop/samplemesh.ply", point)
I initially thought this was a bug in PDAL, but it turns out my data only has 3 decimal places of precision. This is because of the scale set in the las header. My header:
"offset_x": 575410,
"offset_y": 4149046,
"offset_z": 9,
"scale_x": 0.001,
"scale_y": 0.001,
"scale_z": 0.001,
If I output my las to a txt file using PDAL:
"X","Y","Z","Red","Green","Blue"
575400.7600000000,4149024.9100000001,10.2970000000,210.0000000000,217.0000000000,218.0000000000
575400.7600000000,4149024.9090000000,10.2870000000,210.0000000000,217.0000000000,218.0000000000
575400.7600000000,4149024.9100000001,10.2780000000,218.0000000000,215.0000000000,210.0000000000
Here is what the output from the PDAL pipeline's numpy array looks like:
[(575400.76 , 4149024.91 , 10.297, 121, 1, 1, 0, 0, 1, 0., 0, 0, 7033.39986839, 210, 217, 218)
(575400.76 , 4149024.909, 10.287, 116, 1, 1, 0, 0, 1, 0., 0, 0, 7033.39987044, 210, 217, 218)
(575400.76 , 4149024.91 , 10.278, 114, 1, 1, 0, 0, 1, 0., 0, 0, 7033.3998725 , 218, 215, 210)
Workaround #1:
I was able to find a workaround by running this pipeline:
import pdal

def crop_point_cloud(input_file, output_file, point):
    pipeline_json = """
    {
        "pipeline": [
            {
                "type": "readers.las",
                "filename": "%s"
            },
            {
                "type": "filters.crop",
                "point": "%s",
                "distance": 5
            },
            {
                "type": "writers.las",
                "forward": "all",
                "filename": "%s"
            }
        ]
    }
    """ % (input_file, point, output_file)
    pipeline = pdal.Pipeline(pipeline_json)
    pipeline.execute()

point = "POINT(575412.3426281 4149027.48476677 12)"
crop_point_cloud("C:/Users/dre11620/Downloads/scrop/samplecrop.las",
                 "C:/Users/dre11620/Downloads/scrop/samplecrop_smaller.las", point)
Then I read that las file with laspy and wrote its contents to a txt file:
import numpy as np
import laspy

LAS_PATH = "C:/Users/dre11620/Downloads/scrop/samplecrop_smaller.las"
las = laspy.read(LAS_PATH)
# Note: las.X/Y/Z are the raw integer coordinates; las.x/y/z would apply scale and offset
points = np.column_stack((las.X, las.Y, las.Z, las.red, las.green, las.blue))
with open(r"C:\Users\dre11620\Downloads\scrop\samplecrop_smaller.txt", "w") as file:
    for point in points:
        file.write(f"{point[0]} {point[1]} {point[2]} {point[3]} {point[4]} {point[5]}\n")
Then, taking that point cloud txt file, I manually imported it into MeshLab as a point cloud and manually ran Poisson to generate the mesh you see here.
This works… but it's incredibly inconvenient, and it would be great if I could have just one PDAL pipeline do all of this.
Workaround #2:
This second workaround involves cropping via PDAL, writing to a new las, reading the new las with laspy, and then using a separate pipeline to generate the mesh. Example:
Cropping with pdal pipeline:
import pdal

def crop_point_cloud(input_file, output_file, point):
    pipeline_json = """
    {
        "pipeline": [
            {
                "type": "readers.las",
                "filename": "%s"
            },
            {
                "type": "filters.crop",
                "point": "%s",
                "distance": 5
            },
            {
                "type": "writers.las",
                "forward": "all",
                "filename": "%s"
            }
        ]
    }
    """ % (input_file, point, output_file)
    pipeline = pdal.Pipeline(pipeline_json)
    pipeline.execute()

point = "POINT(575412.3426281 4149027.48476677 12)"
crop_point_cloud("C:/Users/dre11620/Downloads/scrop/samplecrop.las",
                 "C:/Users/dre11620/Downloads/scrop/samplecrop_smaller.las", point)
Separate pipeline using laspy as the reader and generating the mesh:
# loadlas.py: helper script referenced by readers.numpy below
import numpy as np
import laspy

def load(filepath):
    las = laspy.read(filepath)
    return las.points.array

# main script
import json
import pdal

pipeline_json = {
    "pipeline": [
        {
            "type": "readers.numpy",
            "function": "load",
            "filename": "loadlas.py",
            "fargs": "C:/Users/dre11620/Downloads/scrop/samplecrop_smaller.las"
        },
        {
            "type": "filters.normal",
            "knn": 8,
            "viewpoint": "POINT(575412.3426281 4149027.48476677 12)"
        },
        {
            "type": "filters.poisson"
        },
        {
            "type": "writers.ply",
            "faces": True,
            "filename": "C:/Users/dre11620/Downloads/scrop/smallermesh.ply"
        }
    ]
}
pipeline = pdal.Pipeline(json.dumps(pipeline_json))
pipeline.execute()
Conclusion:
Can anyone take 5-10 minutes to poke around at my LAS file and figure out why the initial pipeline is not working, or why these workarounds work when it doesn't?
Best Answer
I had a similar problem in the past. The problem was that "writers.ply" didn't write the mesh coordinates with sufficient precision, which led to points being overlaid on top of one another, producing this "stair-step" effect.
I solved the problem by configuring the writer as follows:
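(The exact settings from the answer's screenshot are not reproduced here. For reference, based on the writers.ply documentation the relevant options are "precision", which controls the number of decimal digits written in ascii mode, and "storage_mode"; an illustrative writer stage along these lines:)

```json
{
    "type": "writers.ply",
    "faces": true,
    "precision": 8,
    "storage_mode": "little endian",
    "filename": "samplemesh.ply"
}
```

Note that "precision" only matters for ascii output; a binary "storage_mode" sidesteps the text-precision issue entirely.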
Unfortunately, it doesn't work in your case. There are still overlapping points in the upper part of the mesh (see figures). You should submit this problem to the PDAL bug tracker.
With "storage_mode" as "big endian":
With "storage_mode" as "ascii" (default value):