UPDATE (20170419)
Here in the future, most tasks of relatively low complexity can be resolved by SLOP (Several Lines Of Python) in the Python console rather than hackily copying tables across to public.
Bear in mind that if you copy tables across, you need to remember to copy the changed table back to its original schema: more things to remember is always a bad thing.
Because it's now the future, I get the exact layer that I want - regardless of schema - by opening the Python console and entering the right parameters into GetArbitraryPostGIS.py, which looks like
# {} values are layer-specific parameters; [] values are hard-coded connection details
schema = "{schema_name}"
table = "{table_name}"
geom = "{geom}"
pk = "{pk}"
where = "{filters}"

uri = QgsDataSourceURI()
uri.setConnection("[h]", "[p]", "[db]", "[u]", "[pw]")
uri.setDataSource(schema, table, geom, where, pk)  # schema, table, geometry column, SQL filter, primary key
uri.setSrid("[4283]")

vlayer = QgsVectorLayer(uri.uri(), "{Layer Name for QGIS}", "[postgres]")
if vlayer.isValid():
    QgsMapLayerRegistry.instance().addMapLayer(vlayer)
else:
    print 'Layer not loaded', uri.host()
Where {} indicates a layer-specific parameter, and [] indicates a hard-coded variable.
A dozen lines: I can get a layer in from public faster than the Add PostGIS route, by

- opening the console,
- entering the required {} parameters into the script (wrapped up as a function in the sketch below), and
- hitting 'Go'.
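For what it's worth, here is a minimal sketch of how that kind of script could be wrapped as a function, so the {} parameters become arguments and the [] values stay hard-coded. The connection details, SRID and the example call are all invented for illustration - they are not the contents of the actual GetArbitraryPostGIS.py:

# Sketch only: connection details, SRID and the example call are assumptions, not the real script.
def add_postgis_layer(schema, table, geom, pk, where, title):
    """Load an arbitrary PostGIS table into QGIS, regardless of schema (QGIS 2.x API)."""
    uri = QgsDataSourceURI()
    uri.setConnection("localhost", "5432", "mydb", "me", "mypassword")  # the hard-coded [] values
    uri.setDataSource(schema, table, geom, where, pk)
    uri.setSrid("4283")
    vlayer = QgsVectorLayer(uri.uri(), title, "postgres")
    if vlayer.isValid():
        QgsMapLayerRegistry.instance().addMapLayer(vlayer)
    else:
        print 'Layer not loaded', uri.host()
    return vlayer

# 'Go': one call per layer, passing the {} parameters (table and filter here are made up)
add_postgis_layer("sewer", "pipe_segments", "geom", "gid", "install_year > 2010", "Recent pipes")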
-- The malakies in my answer from the dinosaur days still hold when considering QGIS 'native' functionality, so I'll leave it intact below --
It seems that it is not possible to restrict Layer->Add Layer->Add PostGIS layers to a subset of the schemas (schemae? TNIRN) except if that subset is schema = public. The restriction to only load tables in the public schema is in the connection information (the "only look in the public schema" checkbox).
The solution I have hit on is to copy the table/s that I want to add to the public schema, and restrict the Layer-add process to the public schema.
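For anyone who wants the copy step spelled out, a rough sketch of the throwaway copy (and the clean-up that the update above nags about), run with psycopg2 - the schema, table and connection details here are made up:

# Sketch only: connection details, schema and table names are hypothetical.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb", user="me", password="mypassword")
cur = conn.cursor()

# Copy the table of interest into public so Add PostGIS Layers can see it quickly
cur.execute("CREATE TABLE public.tmp_pipe_segments AS SELECT * FROM sewer.pipe_segments;")
conn.commit()

# ... eyeball the result in QGIS ...

# Drop the throwaway copy (or copy changes back to the original schema if you edited it)
cur.execute("DROP TABLE public.tmp_pipe_segments;")
conn.commit()
cur.close()
conn.close()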
This is fine for my purposes, because the tables with the desired target geometries are temporary... usually when I want to add and drop layers rapidly, it's because I want to 'eyeball' the results of some aspatial filtering prior to implementing it on the bigger datasets.
Obviously for tables that are going to be permanent, this is not a very good option - but for quick-n-dirty eyeballing, it's good enough: it reduces the load time from ~7 minutes to ~8 sec (I have almost nothing except spatial_ref_sys in my public schema).
Best Answer
The way you structure your data should typically reflect the way you intend to use it.
If it's the backend of an application and you need it to store transactional data, you would expect a large number of tables, since you want your schema normalized to avoid duplication across tables.
If you're using it for analysis purposes (as hinted at with QGIS), it might be worth your time de-normalizing your schema into a smaller number of wider tables. You can Google data warehousing and star schema approaches for more on this.
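As a toy illustration of that kind of de-normalization (every schema, table and column name below is invented for the example), a flat analysis table could be built from several normalized tables in one pass, again via psycopg2:

# Toy example only - schemas, tables and columns are all invented.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb", user="me", password="mypassword")
cur = conn.cursor()

# Flatten a few normalized tables into one wide table that is easy to map and query
cur.execute("""
    CREATE TABLE analysis.pipe_summary AS
    SELECT p.gid, p.geom, p.install_year,
           m.material_name,
           s.suburb_name
    FROM sewer.pipe_segments p
    JOIN sewer.materials m ON m.material_id = p.material_id
    JOIN admin.suburbs s ON ST_Intersects(p.geom, s.geom);
""")
conn.commit()
cur.close()
conn.close()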
Ultimately though, the variable you should be optimizing for is time. If restructuring your database takes longer than writing out the complicated queries, IMHO stick with what you have.