Here is a rather simplistic answer, because it looks like you have already tried quite a lot of optimizations, but I thought I should mention WMS and tile caching here.
If you import your shapefile into a spatial database such as PostgreSQL with PostGIS, you could use something like GeoServer or MapServer on top of the database table to serve a WMS layer. Using SLD (a WMS rendering rule format similar to XML), you can then define rules specifying which polygons are rendered at which zoom levels (or under other conditions).
Even if you wanted to display all polygons in one go, your WMS will certainly take a long time to render, but you can pair it with something like GeoWebCache (which comes bundled with GeoServer) to cache the tiles and boost performance.
You can then overlay this WMS layer on top of Google Maps.
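To see how a tile cache such as GeoWebCache lines its tiles up with the map, here is a sketch of the standard Web Mercator (Google/slippy map) tile-to-bounding-box math; the function name is mine, and the resulting bbox is what a cache would pass to a WMS GetMap request for that tile:

```python
import math

def tile_bbox_lonlat(z, x, y):
    """Return (west, south, east, north) in degrees for a Web Mercator
    tile at zoom z, column x, row y (standard slippy-map tile scheme)."""
    n = 2 ** z  # number of tiles along each axis at this zoom

    def lon(tx):
        return tx / n * 360.0 - 180.0

    def lat(ty):
        # Inverse Web Mercator: tile row back to latitude
        return math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * ty / n))))

    return (lon(x), lat(y + 1), lon(x + 1), lat(y))

# Tile (0, 0, 0) covers the whole Web Mercator world
print(tile_bbox_lonlat(0, 0, 0))
```

At zoom 0 this yields the familiar ±180° by roughly ±85.05° extent of Web Mercator, which is why cached tiles snap cleanly onto a Google Maps base layer.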
**Updates based on your comments**
Some good links to start with are:
GeoServer
GeoServer User Manual
Workshop on loading spatial data in PostGIS
GeoWebCache
WMS Layer on Google Maps
This should get you started. Let me know if you get stuck at something specific. Hope this helps you get a solution.
Having come across the same problem, I have developed a rudimentary workaround, which is pretty quick and dirty. It almost halves the KML size, but at a cost: it removes all HTML popups and symbology/style specifications. It is good for pure viewing purposes.
I usually prefer using the Generalize tool (requires at least an Editor licence) to test various options until I am satisfied with the shapes I want to process. Usually a value between 0.5 and 0.8 metres gives me a decent outcome. A warning: the Generalize tool modifies the input data, so best practice is to take a copy of it beforehand.
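Generalize is essentially line simplification. As an illustration of what it does to a geometry (a minimal sketch of the classic Douglas-Peucker algorithm, not ArcGIS's actual implementation), vertices closer to the chord than the tolerance get dropped:

```python
def douglas_peucker(points, tolerance):
    """Recursively drop vertices lying within `tolerance` of the chord
    between the first and last point (Douglas-Peucker simplification)."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord (x1,y1)-(x2,y2)
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den if den else ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > tolerance:
        # Keep the farthest vertex and recurse on both halves
        left = douglas_peucker(points[:idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right
    return [points[0], points[-1]]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.1), (4, 0)]
print(douglas_peucker(line, 0.5))  # -> [(0, 0), (4, 0)]
```

With a tolerance of 0.5 the small zig-zags vanish entirely; with a tolerance below 0.1 every vertex would survive, which is why testing a few values, as above with 0.5-0.8 m, is worthwhile.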
After reading the suggestions and doing some research, I decided to write a pretty rough Python script for the KML stripping process using regular expressions (intentionally avoiding XML parsing). The code is as follows:
import re, os

kml_loc = r"C:\Temp\doc.kml"
with open(kml_loc) as f:
    all_text_0 = f.read()
# Collapse all whitespace runs to single spaces
all_text_0 = ' '.join(all_text_0.split())
# Remove HTML popup and shape styles
all_text_1 = re.sub('<description>.*?</description>', '<description></description>', all_text_0)
all_text_1 = re.sub('<styleUrl>.*?</styleUrl>', '', all_text_1)
all_text_2 = all_text_1
# Round the coordinates to the desired resolution
decimal_places_of_xy_resolution = 6
for i in re.findall('<coordinates>(.*?)</coordinates>', all_text_1):
    fixed_part = ' '.join([','.join(['{0:.{1}f}'.format(float(c),
        [decimal_places_of_xy_resolution, 0][c == '0']) for c in b.split(',') if c])
        for b in i.split(' ') if b])
    # str.replace avoids treating the coordinate text as a regex pattern
    all_text_2 = all_text_2.replace(i, fixed_part, 1)
# Write new KML with a new name
new_kml = list(os.path.splitext(kml_loc))
new_kml[0] += '_REDUCTED'
with open(''.join(new_kml), 'w') as f:
    f.write(all_text_2)
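To sanity-check the rounding step in isolation, here is the same formatting expression applied to a single sample coordinate string (the coordinate values are made up for illustration):

```python
decimal_places_of_xy_resolution = 6

# Contents of one <coordinates> element: lon,lat,alt tuples separated by spaces
i = '150.1234567890,-33.9876543210,0 150.2000000001,-33.8999999999,0'

# Round lon/lat to 6 decimal places; a bare '0' altitude keeps 0 decimals
fixed_part = ' '.join([','.join(['{0:.{1}f}'.format(float(c),
    [decimal_places_of_xy_resolution, 0][c == '0']) for c in b.split(',') if c])
    for b in i.split(' ') if b])

print(fixed_part)  # -> 150.123457,-33.987654,0 150.200000,-33.900000,0
```

Note that 6 decimal places corresponds to roughly 10 cm on the ground, so the visual loss is negligible while the character savings over 10+ decimal places are substantial.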
You may still see numbers reported to 10 decimal places, but count the number of nodes in the modified file and it should be far fewer than in the original.
The tolerance unit is whatever unit your map data uses. A tolerance of 0.000001° would keep the simplified paths no more than 0.000001° (10 cm or so) from their original location. It would be far too small if your data were in metric UTM format.
Clarification: my original estimate was off by an order of magnitude; it is 0.00001° that would have been around a metre. I say around, as fractions of a degree vary by location, and aren't always equal. For example, where I live, 0.000001° N-S is roughly 11 cm, but 0.000001° E-W is only about 8 cm.
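As a rough check of those figures, here is the standard spherical approximation of the ground length of a degree (Earth radius assumed to be the 6371 km mean value; the function name is mine):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, spherical approximation

def degree_size_m(lat_deg):
    """Approximate ground length in metres of 1 degree N-S and E-W
    at the given latitude, on a spherical Earth."""
    ns = math.pi * EARTH_RADIUS_M / 180.0      # nearly constant everywhere
    ew = ns * math.cos(math.radians(lat_deg))  # shrinks toward the poles
    return ns, ew

ns, ew = degree_size_m(45.0)
# Scale down to 0.000001 degrees at 45 degrees latitude
print(round(ns * 1e-6, 3), 'm N-S,', round(ew * 1e-6, 3), 'm E-W')
```

At mid-latitudes this gives roughly 11 cm N-S versus 8 cm E-W for 0.000001°, consistent with the numbers above; the E-W figure keeps shrinking as you move toward the poles.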
OGR only simplifies in fractions/multiples of the layer's unit. If your layer is in EPSG:4326, you are going to have to live with fractions of a degree, and perhaps slight variations in tolerance across large objects. If you want the unit to stay constant, reproject to something like UTM, and the unit you use will then be fractions/multiples of metres/feet/US survey feet, etc.