Translate point after rotation relative to different origin

Tags: coordinate-systems, python, rotations, transformation

I have a 200×380 input image and the coordinates (63, 146), where (0, 0) is the top-left corner.


I rotate about the centre by some number of degrees and expand the "canvas" to avoid cropping, which results in a larger output image.


How do I calculate the position of the point after rotation, relative to the new, larger image?

I'm roughly familiar with how to get the point with respect to the original size, but I'm not sure how to transform the calculated new point to fit it in the new image. The red dot is a failed attempt at a solution that translates the image's centre to the (0, 0) used in the rotation transform.


My math literacy is not fantastic and I apologise for misusing any terms.
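For reference, the mapping being asked about can be written directly: rotate the point about the original image's centre, then shift it by half the difference between the new and old canvas sizes. A minimal sketch of that idea, using only the standard library (the angle and image sizes are just illustrative; the function name is made up here):

```python
import math

def point_in_expanded_canvas(x, y, w, h, angle_deg):
    """Map a point from a w x h image into the expanded canvas that
    results from rotating the image angle_deg about its centre."""
    theta = math.radians(angle_deg)
    cx, cy = w / 2, h / 2
    # Rotate (x, y) about the original centre. The sign convention here
    # matches image coordinates, where y grows downward.
    rx = (x - cx) * math.cos(theta) + (y - cy) * math.sin(theta)
    ry = -(x - cx) * math.sin(theta) + (y - cy) * math.cos(theta)
    # Size of the expanded canvas that fits the whole rotated image.
    new_w = abs(w * math.cos(theta)) + abs(h * math.sin(theta))
    new_h = abs(w * math.sin(theta)) + abs(h * math.cos(theta))
    # Re-anchor the rotated offset at the new canvas's centre.
    return rx + new_w / 2, ry + new_h / 2

px, py = point_in_expanded_canvas(63, 146, 200, 380, 90)
```

With a 90° rotation the 200×380 image becomes a 380×200 canvas, and the point (63, 146) lands at roughly (146, 137) in it.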

Best Answer

It turns out the solution lies in knowing the correct terms to google for.

Putting the exact implementation from the linked answer into a function for individual points, using the same rotation matrix returned by the image-rotation function, solved the problem:

import numpy as np

def rotate_single_point(point_tuple, rotation_matrix):
    # Adapted from https://stackoverflow.com/a/38794480/2244284
    # Wrap the single (x, y) point in a 1x2 array.
    points = np.array([[point_tuple[0], point_tuple[1]]])
    # Append a 1 to each point so the 2x3 affine matrix
    # (rotation + translation) can be applied in one product.
    ones = np.ones(shape=(len(points), 1))
    points_ones = np.hstack([points, ones])
    # Apply the same matrix that was used to rotate the image.
    transformed_points = rotation_matrix.dot(points_ones.T).T
    return transformed_points
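To tie it together, the rotation matrix can be built the way OpenCV's `cv2.getRotationMatrix2D` builds it, then have its translation column shifted by half the canvas growth so nothing is cropped. The sketch below reproduces that matrix with plain NumPy so it is self-contained; the 200×380 size and 90° angle are assumptions for illustration, not from the original answer:

```python
import numpy as np

def rotate_single_point(point_tuple, rotation_matrix):
    # Same helper as above: apply a 2x3 affine matrix to one (x, y) point.
    points = np.array([[point_tuple[0], point_tuple[1]]])
    ones = np.ones(shape=(len(points), 1))
    return rotation_matrix.dot(np.hstack([points, ones]).T).T

# Build the same 2x3 matrix cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
# would return (positive angle = counter-clockwise, scale = 1).
w, h = 200, 380
angle = 90
cx, cy = w / 2, h / 2
a = np.cos(np.radians(angle))
b = np.sin(np.radians(angle))
M = np.array([[a, b, (1 - a) * cx - b * cy],
              [-b, a, b * cx + (1 - a) * cy]])

# Expand the canvas so the rotated image is not cropped, and shift the
# matrix's translation column by half the growth (the "rotate bound" trick).
new_w = int(h * abs(b) + w * abs(a))
new_h = int(h * abs(a) + w * abs(b))
M[0, 2] += (new_w - w) / 2
M[1, 2] += (new_h - h) / 2

new_point = rotate_single_point((63, 146), M)
print(new_point)  # coordinates relative to the expanded canvas
```

Passing the same adjusted `M` to `cv2.warpAffine(img, M, (new_w, new_h))` would produce the matching expanded image, so the point and the image stay in sync.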