I disagree that "scalar multiplication is only a nice-to-have shortcut", or that it is "superfluous conceptually". In fact, the very definition of a vector space $V$ requires a scalar multiplication to be given.
After that comes the concept of a linear transformation, which again requires the scalar multiplication to be defined. Matrix multiplication is a convenient way to represent linear transformations, and even then only for finite-dimensional vector spaces (or certain infinite-dimensional ones).
So defining matrix multiplication first and then saying that scalar multiplication is a special case is, in my opinion, putting the cart before the horse.
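For reference, these are the scalar-multiplication axioms built into the standard definition of a vector space $V$ over a field $F$ (stated here in the usual form, for all scalars $a, b \in F$ and vectors $u, v \in V$):

```latex
\begin{align*}
a\,(u + v) &= a\,u + a\,v && \text{(distributivity over vector addition)}\\
(a + b)\,v &= a\,v + b\,v && \text{(distributivity over scalar addition)}\\
(ab)\,v    &= a\,(b\,v)   && \text{(compatibility with field multiplication)}\\
1\,v       &= v           && \text{(the unit scalar acts as the identity)}
\end{align*}
```

None of these can even be stated without a scalar multiplication already in hand.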
This is an idea for solving the problem. I didn't test it, so don't ask me to elaborate.
Notation:
The discretized first curve is defined by the points:
$\quad (X_1,Y_1),(X_2,Y_2),...$
These points are supposed to be close to an unknown function $Y(x)$.
The discretized second curve is defined by the points:
$\quad (x_1,y_1),(x_2,y_2),...$
These points are supposed to be close to an unknown function $y(x)$.
It is supposed that these functions are related by an $x$-shift and a $y$-stretch, so that
$$y(x)=a\:Y(x-b)$$
where $a$ is the unknown stretch factor and $b$ is the unknown shift parameter.
Proposed method of computation:
Consider the Fourier transforms of the functions $Y(x)$ and $y(x)$, respectively:
$$G(\omega)=\mathscr{\LARGE{F}}\big(Y(x);\omega\big)$$
$$g(\omega)=\mathscr{\LARGE{F}}\big(y(x);\omega\big)$$
Numerical computation of the Fourier transforms leads to the transform data, respectively:
$$(G_1,\omega_1), (G_2,\omega_2), ... \quad\text{ and }\quad (g_1,\omega_1), (g_2,\omega_2), ...$$
where the $G$ and $g$ are complex numbers.
It is known that
$$\mathscr{\LARGE{F}}\big(Y(x-b);\omega\big)=e^{i\,b\,\omega}\mathscr{\LARGE{F}}\big(Y(x);\omega\big) $$
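This shift theorem is easy to verify numerically. Note that NumPy's FFT uses the opposite sign convention, $e^{-i\omega x}$, so the shift appears there as $e^{-i\,b\,\omega}$; the grid, shift, and test function below are my own choices:

```python
import numpy as np

# Sample a smooth test function on a periodic grid.
N = 1024
L = 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N

Y = np.exp(-x**2)          # test function Y(x)
b = 0.7
m = int(round(b / dx))     # shift expressed in whole samples
b_eff = m * dx             # the shift actually applied on the grid

Y_shifted = np.roll(Y, m)  # circular shift: samples Y(x - b_eff)

G = np.fft.fft(Y)
G_shifted = np.fft.fft(Y_shifted)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# With NumPy's e^{-i omega x} convention the shift theorem reads
#   F(Y(x - b); omega) = e^{-i b omega} F(Y(x); omega)
predicted = np.exp(-1j * b_eff * omega) * G
print(np.allclose(G_shifted, predicted))   # True
```

For the discrete transform the identity is exact (up to floating-point error) as long as the shift is a whole number of samples.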
The Fourier transform of $y(x)=a\:Y(x-b)$ is :
$$g(\omega)=a\,e^{i\,b\,\omega}G(\omega)$$
$$g(\omega)=G(\omega)a\left(\cos(b\omega)+i\,\sin(b\omega)\right)$$
Separating the real and imaginary parts and fitting the above data, a non-linear regression leads to approximate values of the parameters $a$ and $b$.
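A minimal sketch of how this estimation could be carried out (the synthetic data, the masking threshold, and the use of a simple magnitude/phase linearization in place of a full non-linear regression are my own choices; note that under NumPy's $e^{-i\omega x}$ convention the ratio becomes $g/G = a\,e^{-i\,b\,\omega}$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y(x) = a * Y(x - b) with added noise, on a common grid.
a_true, b_true = 0.9, -0.5
N = 512
L = 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N

Y = np.exp(-x**2)                          # first curve Y(x)
y = a_true * np.exp(-(x - b_true)**2)      # second curve y(x) = a Y(x - b)
Y_noisy = Y + 0.01 * rng.standard_normal(N)
y_noisy = y + 0.01 * rng.standard_normal(N)

G = np.fft.fft(Y_noisy)
g = np.fft.fft(y_noisy)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Keep only frequencies where |G| is well above the noise floor.
mask = np.abs(G) > 0.05 * np.abs(G).max()
r = g[mask] / G[mask]
w = omega[mask]

# With NumPy's convention, r(omega) = a * exp(-i b omega), so
#   |r| ~ a   and   phase(r) ~ -b * omega.
a_est = np.median(np.abs(r))

order = np.argsort(w)
w_s = w[order]
phase = np.unwrap(np.angle(r[order]))
slope = np.sum(w_s * phase) / np.sum(w_s**2)   # least squares through origin
b_est = -slope

print(a_est, b_est)   # should come out close to 0.9 and -0.5
```

The masking step matters: at frequencies where $|G|$ is small, the ratio $g/G$ is dominated by noise, which is presumably one source of the sensitivity discussed below.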
IN ADDITION:
I made a few tests of the above method, for example with the same generative function that Martin used. The simulated scatter isn't exactly the same, however, because different software packages use different random number generators, so the red and blue curves in the figure below are not an exact copy of Martin's graph.
The resulting stretched and translated curve is drawn in black (this is the red curve transformed to approximately fit the blue curve).
The computed stretch factor $0.868$ is to be compared to the theoretical $0.9$.
The computed shift on the $x$-axis, $-0.413$, is to be compared to the theoretical $-0.5$, which isn't very good.
It appears that the result is very sensitive to the level of scatter in the original data. The method seems rather robust for computing the stretch factor, but quite poor for the $x$-shift.
As a consequence, from a practical viewpoint, this method is too sensitive to the scatter of the data and not robust enough. I don't advise using it until serious improvements are achieved, which would probably require a lot of work.
Best Answer
Think of the curve and its transformation as mappings from one space to another. In your case, the curve is a mapping of the real line $\mathbb R$ to some subset of the plane $\mathbb R^2$. Every real number corresponding to a value of the parameter in the domain is mapped to an ordered pair $(x,y) \in \mathbb R^2$. We can express such a parametrization componentwise; e.g., $$\gamma : \mathbb R \to \mathbb R^2, \\ \gamma(t) = (x(t), y(t)).$$
A transformation $T$ of the plane to itself can be expressed as some function $$T : \mathbb R^2 \to \mathbb R^2 \\ T(x,y) = (u(x,y), v(x,y)),$$ again as some componentwise operation. Then the transformation of $\gamma$ under $T$ is simply the composition of mappings and is a mapping from $\mathbb R$ to $\mathbb R^2$: $$T(\gamma(t)) = (u(x(t), y(t)), v(x(t), y(t))).$$
In a sense, the mappings $\gamma$ and $T$ differ only in the dimensions of the spaces involved. More generally, we can talk about arbitrary mappings $\mathbb R^m \to \mathbb R^n$, and about compositions of such mappings.
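As a concrete toy illustration of this composition (the particular curve and transformation are my own choices): take the unit circle $\gamma(t) = (\cos t, \sin t)$ and the stretch-and-shift $T(x,y) = (2x + 1,\; y)$:

```python
import math

def gamma(t):
    """Parametrized curve gamma : R -> R^2, here the unit circle."""
    return math.cos(t), math.sin(t)

def T(x, y):
    """Transformation of the plane T : R^2 -> R^2: stretch x by 2, shift by 1."""
    return 2 * x + 1, y

def transformed_curve(t):
    """The composition T(gamma(t)), again a mapping R -> R^2."""
    return T(*gamma(t))

print(transformed_curve(0.0))        # (3.0, 0.0): the point (1, 0) maps to (3, 0)
print(transformed_curve(math.pi))    # approximately (-1.0, 0.0)
```

The transformed curve never has to be re-parametrized: it is simply the old parametrization fed through $T$, componentwise.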