If you want to go the R way, I would suggest this workflow:
- assign each polygon a unique id, such as the polyid in your example
- perform `sf::st_join(points, polygons)` on your points and polygons objects, with - crucially - your points as the first argument of the join; it will produce your points object, enriched by the data columns from the polygons object (such as the unique id / polyid)
- at this step you can drop the geometry (via `sf::st_drop_geometry()`) and do a simple aggregation over your time dimension and polygon id, as both are now present in your points object; {dplyr} is your friend here, as you are dealing with a regular data frame
Since an example is worth a thousand words :) consider this piece of code. It builds on the well known & much loved North Carolina shapefile that ships with {sf}.
It first produces five polygons of dissolved North Carolinas stacked on top of each other, with polygon ids from 1 to 5.
Then it produces three semi-random points, two with time value 1 and one with time value 3 (or whatever).
The third step is where the action happens: first the points are spatially joined to the polygons (it will be a kind of a cross join, since all points are in NC), and then the result is filtered on the secondary criterion - I am using polygon id == time, which makes no sense but is here only for the sake of example. Then it is a question of group by and summarise, functionally equivalent to `SELECT poly_id, count(*) FROM yer_data GROUP BY poly_id` in SQL speak.
library(sf)
library(dplyr)

# a set of polygons in time
shape <- st_read(system.file("shape/nc.shp", package = "sf")) %>% # included with the sf package
  summarise() %>% # dissolve into a single polygon over entire NC
  tidyr::expand_grid(poly_id = 1:5) %>% # sequence of ids from 1 to 5
  st_as_sf(crs = 4267) # CRS information was lost during the expand_grid (tidyr equivalent of a cross join)

# a set of points in time
cities <- data.frame(name = c("Raleigh", "Greensboro", "Wilmington"),
                     x = c(-78.633333, -79.819444, -77.912222),
                     y = c(35.766667, 36.08, 34.223333),
                     time = c(1, 1, 3)) %>% # time, to be matched with poly_id
  st_as_sf(coords = c("x", "y"), crs = 4326) %>% # CRS of the raw coordinates
  st_transform(st_crs(shape)) # transform to the CRS of the shape object

# this is the action!
cities %>%
  st_join(shape) %>% # spatial join only, without the second criterion (yet)
  st_drop_geometry() %>% # drop geometry (no longer necessary)
  filter(time == poly_id) %>% # filter over the second condition
  group_by(poly_id) %>% # group by
  tally(name = "point_totals") # summarise the result
# A tibble: 2 × 2
#  poly_id point_totals
#    <int>        <int>
#1       1            2
#2       3            1
Best Answer
Because I think this is a very interesting question and a good use case, I wrote this little processing tool. Copy-paste the code to a .py file and place it in C:\Users\Username\AppData\Roaming\QGIS\QGIS3\profiles\default\processing\scripts (or the respective Linux/Mac path). You can then find it in your processing toolbox within scripts -> from gisse -> Count Points in Polygon by Datetime.
A modified version of this script is now also part of the ProcessX plugin, which you can get in the official QGIS plugin repository. You can find it in your processing toolbox under ProcessX -> Vector - Creation -> Create Timepolygon with Pointcount. Alternatively you may also want to try the Count Features in Features with Condition or Count Points in Polygons With Condition algorithms.
It requires:
It does:
The script: