Let $f=f(x)$ and $g=g(x)$. Then, as you've mentioned in the post,
$$\frac{f'g-fg'}{g^2} = \left(\frac{f}{g}\right)'$$
So
$$f'g-fg'=\left(\frac{f}{g}\right)' g^2 = c \in \mathbb{C} \setminus \{0\}$$
We can further manipulate this equation:
$$\begin{align*}
\left(\frac{f}{g}\right)' g^2 &= c \\
\left(\frac{f}{g}\right)' &= \frac{c}{g^2} \\
\frac{f}{g} &= \int \frac{c}{g^2} dx + k \\
f(x) &= g(x) \left(\int \frac{c}{g(x)^2} dx + k\right)
\end{align*}$$
where $k \in \mathbb{R}$ is an arbitrary constant of integration.
These were all equivalent transformations, so we have obtained a formula for $f$ in terms of $g$ alone.
Therefore, for any function $g$ for which $\frac{1}{g^2}$ is integrable, we have obtained all pairs of solutions
$$\boxed{\left(\underbrace{g(x) \left(\int \frac{c}{g(x)^2} dx + k\right)}_{f(x)}, \ g(x)\right), \ \ k\in\mathbb{R}}$$
In other words, the above formula generates all solutions for any input function $g$ (for which $\frac{1}{g^2}$ is integrable).
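As a quick sanity check, here is a short SymPy sketch (assuming SymPy is available) that plugs the boxed formula into $f'g-fg'$ for the sample choice $g(x)=x^2$ and confirms it simplifies back to $c$:

```python
# Verify the boxed formula symbolically for a sample g (here g = x**2);
# any g with integrable 1/g**2 would work the same way.
import sympy as sp

x, c, k = sp.symbols('x c k')
g = x**2
f = g * (sp.integrate(c / g**2, x) + k)  # f = g * (∫ c/g² dx + k)

# f'g - f g' should simplify back to the constant c.
expr = sp.simplify(sp.diff(f, x) * g - f * sp.diff(g, x))
print(expr)  # c
```

Note that $k$ drops out of $f'g-fg'$, which is exactly the one degree of freedom in the solution family.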
Examples (with $k=0$):
- $g(x)=x$, then we obtain $f(x)=cx\int\frac{1}{x^2}dx=-c$. It is easy to see that $f'g-fg'= c$.
- $g(x)=x^2$, then we obtain $f(x)=cx^2\int\frac{1}{x^4}dx=-\frac{c}{3x}$. Once again, it is easy to see that $f'g-fg'= c$.
- $g(x)=\frac{1}{\sqrt{\sin(x)}}$, then we obtain $f(x)=-\frac{c\cos(x)}{\sqrt{\sin(x)}}$. For this case, I have verified that $f'g-fg'= c$ using WolframAlpha.
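All three examples can also be checked in one go with SymPy (a verification sketch, assuming SymPy; the `positive=True` assumption just keeps the square roots well-behaved):

```python
# Check each example pair (f, g) with k = 0: f'g - f g' - c should vanish.
import sympy as sp

x, c = sp.symbols('x c', positive=True)

examples = [
    (x,                    -c),                                # g = x
    (x**2,                 -c / (3*x)),                        # g = x^2
    (1/sp.sqrt(sp.sin(x)), -c*sp.cos(x)/sp.sqrt(sp.sin(x))),  # g = 1/sqrt(sin x)
]

for g, f in examples:
    residual = sp.simplify(sp.diff(f, x)*g - f*sp.diff(g, x) - c)
    print(residual)  # prints 0 for each example
```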
This means that the "general form of $f$ and $g$ that solve $f'g-fg'=c$" is "anything": at least one of $f$ or $g$ can be any function (as long as the reciprocal of its square is integrable, which holds for our most common functions). However, once one of $f$ or $g$ is chosen, the other function has only one degree of freedom (represented by $k\in\mathbb{R}$).
Best Answer
Recall that the power rule for differentiation states: $$\frac{d}{dx}x^n=nx^{n-1}$$
Also recall that differentiation is linear, in the sense that $$\frac{d}{dx}[f_1(x)+f_2(x)+\cdots+f_k(x)]=\frac{d}{dx}f_1(x)+\frac{d}{dx}f_2(x)+\cdots+\frac{d}{dx}f_k(x)$$
Now consider your function $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$. Differentiating and applying linearity, we obtain $$\frac{d}{dx}f(x)=\frac{d}{dx}a_nx^n+\frac{d}{dx}a_{n-1}x^{n-1}+\cdots+\frac{d}{dx}a_1x+\frac{d}{dx}a_0$$
The constant term at the end becomes $0$, since the derivative of a constant is always $0$. The second-to-last term, $\frac{d}{dx}a_1x$, is simply $a_1$, since the derivative of a line is always its slope. For the remaining terms, we apply the power rule. After completing the first round of differentiation, we have $$f'(x)=na_nx^{n-1}+(n-1)a_{n-1}x^{n-2}+\cdots+2a_2x+a_1$$
If we keep repeating the differentiation process, there is always a constant term at the end which becomes $0$ in the next round. Since $f(x)$ has $n+1$ terms (one term $a_ix^i$ for each $i$ from $0$ up to $n$), after $n$ rounds of differentiation every term has been eliminated except the one descending from the leading term $a_nx^n$. Applying the power rule to just that term through all $n$ rounds, we get: \begin{align} \frac{d}{dx}a_nx^n &=a_nnx^{n-1}\\ \frac{d}{dx}a_nnx^{n-1} & = a_nn(n-1)x^{n-2}\\ \frac{d}{dx}a_nn(n-1)x^{n-2}& =a_nn(n-1)(n-2)x^{n-3} \\ &\vdots \\ \frac{d}{dx}a_nn(n-1)(n-2)\cdots(3)x^2& =a_nn(n-1)(n-2)\cdots(3)(2)x \\ \frac{d}{dx}a_nn(n-1)(n-2)\cdots(3)(2)x&=a_nn(n-1)(n-2)\cdots(3)(2)(1) \\ & = a_n\cdot n! \end{align}
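The conclusion that the $n$-th derivative of a degree-$n$ polynomial collapses to $n!\,a_n$ can be confirmed with a short SymPy sketch (assuming SymPy; $n=5$ is just a sample degree):

```python
# The n-th derivative of a_n x^n + ... + a_1 x + a_0 should be n! * a_n.
import sympy as sp

x = sp.symbols('x')
n = 5
coeffs = sp.symbols(f'a0:{n+1}')           # symbolic a0, a1, ..., a5
f = sum(a * x**i for i, a in enumerate(coeffs))

nth = sp.diff(f, x, n)                     # differentiate n times
print(nth)  # 120*a5, i.e. 5! * a5
```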