There are two competing ideas here, that of induction, and that of asymptotics. Normally, induction requires a base case, but because of the way asymptotics work, the base case will be trivially satisfied, because we can always take $c$ to be large enough to outweigh what happens in the early cases. The fact that they did any base cases at all is purely pedagogical, so as to make things look like induction, but as we shall see the base case is not strictly necessary.
We can phrase what we are trying to prove in terms of induction as follows.
The recurrence $T(n)$ is $O(f(n))$ if there exist constants $c$ and $n_0$ such that $T(n+n_0)<cf(n+n_0)$ for every $n>0$.
Here we have phrased things in terms of $n+n_0$ just so that the induction can start at $1$, but there is no harm in replacing $n+n_0$ with $n$ and starting the induction at $1+n_0$.
Assume $f(n)>0$ for all $n>k$. Since we don't care what $c$ is, if all we are concerned about is making the asymptotic statement true, we can take $n_0$ to be $k$, because we can push all the rest of the slack into our choice of the unspecified constant $c$. It might take many terms before $f$ starts growing as fast as $T$, but $c$ can take care of any finite number of terms before this happens.
For example, if $T(n)=10^{6n}$ for $n<10^6$ and $0$ afterwards, then $T(n)$ is $O(1)$. We start with $n_0=1$ and we take $c=10^{10^{10^{10}}}$, or bigger if we want. We take $c$ to be whatever it needs to be so that the base case is immediately satisfied without any checking. Note that we could take $c=10^6$ if we only cared about the base case, but a larger constant was required to make the statement true despite our poor choice of $n_0$.
Of course, we might still need $n_0$ to be large, because the growth of the sequence might depend on $n$ in such a way that it behaves differently for small $n$, and our choice of $c$ can only take care of the base case. We need $n_0$ to make sure that the induction step still holds. In the above example, if we want induction to hold, we want to take $n_0>10^6$.
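The idea above can be checked mechanically on a scaled-down analogue (the numbers here are hypothetical, chosen small enough to compute; the $10^6$ example itself is far too large to evaluate directly):

```python
# Scaled-down analogue: T(n) = 2**n for n < 20 and 0 afterwards.
# T is O(1) because a single huge constant c absorbs the finitely many
# early terms, exactly as described above.
def T(n):
    return 2**n if n < 20 else 0

c = 2**20  # any constant at least max(T) = 2**19 works

# One constant c works for every n >= n_0 = 1; f(n) = 1 here.
assert all(T(n) <= c * 1 for n in range(1, 10_000))
print("T(n) <= c for all checked n")
```

Note that the check is trivially true past $n=20$, where $T$ is identically zero; all the work is done by the finitely many early terms, which $c$ covers.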
What you are being asked for is an equation that has $x_n$ on the left side and a formula in $n$ on the right side containing just familiar functions like polynomials and exponentials and only finitely many of them. Whoever asked you to solve the problem probably also provided a method for solving such problems, but here goes:
First, let's write your problem in a better format: $x_0=4$, $x_1=23$, $x_n=11x_{n-1}-30x_{n-2}$.
Now, suppose there is a solution of the form $x_n=c^n$ for some $c$. (Why do we make such a supposition? Because we've been here before, and we know it will work.) Then the equation says $c^n=11c^{n-1}-30c^{n-2}$, which simplifies to $c^2-11c+30=0$, which undoubtedly you are able to solve. You'll get two values of $c$ that work, let's call them $c_1$ and $c_2$, and then anything of the form $Ac_1^n+Bc_2^n$ will work also, for any numbers $A$ and $B$. If you're clever in choosing $A$ and $B$, you'll get $x_0=4$ and $x_1=23$.
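The recipe above can be carried out numerically. A minimal sketch (the names `disc`, `c1`, `c2`, `A`, `B` are my own labels for the quantities in the text):

```python
import math

# Characteristic equation c^2 - 11c + 30 = 0 from x_n = 11x_{n-1} - 30x_{n-2}.
disc = math.isqrt(11 * 11 - 4 * 30)          # square root of the discriminant
c1, c2 = (11 - disc) // 2, (11 + disc) // 2  # the two roots (5 and 6)

# Choose A and B so that A + B = x_0 = 4 and A*c1 + B*c2 = x_1 = 23.
B = (23 - 4 * c1) // (c2 - c1)
A = 4 - B  # gives A = 1, B = 3, i.e. x_n = 5^n + 3*6^n

# Check the closed form against the recurrence itself.
x = [4, 23]
for n in range(2, 12):
    x.append(11 * x[-1] - 30 * x[-2])
assert all(x[n] == A * c1**n + B * c2**n for n in range(12))
```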
Hidden underneath the "annihilator" concept in linear recurrences is the notion of power series and generating functions.
If you have a sequence $\{a_n\}$ you can define the power series:
$$a(z) = \sum_{n=0}^\infty a_n z^n$$
The abstract algebra part is that we are dealing here with a "formal power series" - we don't need to have it converge for any $z$. Given two power series, $a(z)$ and $b(z)$, we can multiply them to get a new power series, for example. We can add power series.
Now, taking $a_n=f(n)$, where your $f$ satisfies $f(n)=4f(n-1)$, we can see that:
$$(1-4z)a(z) = a_0 + \sum_{n=1}^\infty (a_n-4a_{n-1})z^n = a_0$$
We then define $E(a(z)) = \frac{a(z)-a(0)}{z}$. This takes a power series and shifts its coefficients one place to the left. Then we see that:
$$\frac{a(z)-a(0)}{z} - 4a(z) = 0$$
Thus we get $(E-4)(a(z)) = 0$.
Working backwards, we also see that $a(z)=\frac{a_0}{1-4z}$. But we know that we can expand $\frac{1}{1-4z} =\sum_{n} 4^nz^n$. So we get that $a_n = 4^na_0$.
That might look like I've played a little sleight of hand: how can I show $\frac{1}{1-4z} = \sum_{n} 4^nz^n$ without worrying about convergence? That's the magic of formal power series - they form a commutative ring, and we can work out multiplicative inverses for (some) elements of this ring. Two power series are considered equal if they have the same coefficients, and there are no zero divisors, so it just magically falls out.
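The purely algebraic identity $(1-4z)\sum_n 4^nz^n = 1$ can be checked coefficient by coefficient, with no convergence anywhere in sight. A small sketch, representing truncated formal power series as coefficient lists (`mul` is my own helper name):

```python
# Product of two formal power series given as coefficient lists,
# truncated to the first N coefficients.
def mul(a, b, N):
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

N = 10
one_minus_4z = [1, -4] + [0] * (N - 2)   # coefficients of 1 - 4z
geom = [4**n for n in range(N)]          # claimed coefficients of 1/(1-4z)

# Every coefficient of the product past degree 0 cancels: the n-th
# coefficient is 4^n - 4 * 4^(n-1) = 0, so the product is the series 1.
assert mul(one_minus_4z, geom, N) == [1] + [0] * (N - 1)
```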
More generally, if $R(E)$ is some polynomial in $E$ and $f$ satisfies the condition $R(E)f = 0$, then when we define $f(z)=\sum f(n)z^n$ we get a rather complicated expression, but it ultimately boils down to $q(z)f(z) = p(z)$, where $q(z)$ and $p(z)$ are always (finite) polynomials, not power series. So we get $f(z)=\dfrac{p(z)}{q(z)}$, which we can expand by representing $p(z)/q(z)$ in terms of partial fractions.
Note, if $a(z)=\sum a_nz^n$ then $$(E^k)(a(z)) =\frac{a(z)-a_0-a_1z-\dots-a_{k-1}z^{k-1}}{z^k}$$
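On coefficient lists, $E^k$ is just "drop the first $k$ coefficients", and the annihilator equation $(E-4)(a(z))=0$ from earlier becomes a termwise check. A minimal sketch (the helper `E` and the sample value $a_0=7$ are my own choices for illustration):

```python
# E^k on a coefficient list: drop the first k coefficients, matching
# (a(z) - a_0 - a_1 z - ... - a_{k-1} z^{k-1}) / z^k.
def E(a, k=1):
    return a[k:]

a0 = 7
a = [a0 * 4**n for n in range(10)]  # a_n = 4^n a_0

# (E - 4)(a) compares coefficients of equal degree; zip truncates a to
# the (shorter) shifted list, so the difference is degree by degree.
residual = [x - 4 * y for x, y in zip(E(a), a)]
assert residual == [0] * 9  # (E - 4) annihilates a_n = 4^n a_0
```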