Let's assume everything has finite dimension here.
1) I don't know exactly which property the formula $(f\otimes g)(...) = f(...)g(...)$ refers to in the second definition of a tensor. For example, if $f$ is a (0,2) tensor and $g$ is a (0,2) tensor, the product would be a (0,4) tensor, so $(f\otimes g)(v_1,v_2,v_3,v_4) = f(v_1,v_2)\,g(v_3,v_4)$. To what kind of tensor in definition 2 would this be isomorphic?
A tensor product of two vector spaces $V$ and $W$ is a pair $(\mathsf{T} ,t)$, where $\mathsf{T} $ is a vector space and $t\colon V \times W \to \mathsf{T} $ is bilinear, such that if $\{{\bf v}_i\}$ and $\{{\bf w}_j\}$ are bases for $V$ and $W$, then $\{t({\bf v}_i,{\bf w}_j)\}$ spans $\mathsf{T} $, and if $b\colon V \times W \to Z$ is any bilinear map (into an arbitrary vector space $Z$), there is a unique linear map $\overline{b}\colon \mathsf{T} \to Z$ such that $\overline{b}\circ t = b$. This means that bilinear maps $b$ factor through $\mathsf{T} $, and all the information we need is carried by a single linear map $\overline{b}$. One then proves that all tensor products of $V$ and $W$ are isomorphic, and so we adopt the usual notation $\mathsf{T} \equiv V \otimes W$, $t \equiv \otimes$, and write ${\bf v} \otimes {\bf w}$ for $t({\bf v},{\bf w})$.
An explicit construction is to take the free vector space with basis $V \times W$, and take its quotient by the subspace spanned by the elements of the form \begin{align} &({\bf v}_1+{\bf v}_2,{\bf w}) - ({\bf v}_1,{\bf w})-({\bf v}_2,{\bf w}), \\ & ({\bf v},{\bf w}_1+{\bf w}_2)-({\bf v},{\bf w}_1)-({\bf v},{\bf w}_2), \\ & (\lambda{\bf v},{\bf w}) - ({\bf v},\lambda{\bf w}).\end{align}
We then denote the class of $({\bf v},{\bf w})$ by ${\bf v} \otimes {\bf w}$.
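As a quick sanity check (my own illustration, not part of the construction): in the concrete model where $v \otimes w$ is represented by the outer product of coordinate vectors, the three relations above become matrix identities. A minimal numpy sketch, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=(2, 3))  # two vectors in V = R^3
w = rng.normal(size=4)            # a vector in W = R^4
lam = 2.5

# (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w: additivity in the first slot
assert np.allclose(np.outer(v1 + v2, w), np.outer(v1, w) + np.outer(v2, w))
# (lam v1) ⊗ w = v1 ⊗ (lam w): the scalar slides across the tensor product
assert np.allclose(np.outer(lam * v1, w), np.outer(v1, lam * w))
```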
One generalizes all of this by considering more spaces, writing "multilinear maps" instead of "bilinear maps", and so on. The space $${\frak T}^{(r,s)}(V) = \{ f\colon (V^\ast)^r \times V^s \to \Bbb R \mid f \text{ is multilinear} \}$$is isomorphic to $V^{\otimes r}\otimes (V^\ast)^{\otimes s}$, and that isomorphism does not depend on a choice of basis (so it is better than your average run-of-the-mill isomorphism). To be more honest, we use a basis to define the isomorphism, but then we check that it would be the same if we had started with another basis. We say that $T \in {\frak T}^{(r,s)}(V)$ is an $r$-times contravariant and $s$-times covariant tensor. We of course have an operation $$\otimes \colon {\frak T}^{(r,s)}(V) \times {\frak T}^{(r',s')}(V) \to {\frak T}^{(r+r',s+s')}(V).$$
Now that we hopefully understand a little better what a tensor product is, we can simply note that ${\frak T}^{(0,4)}(V) \cong (V^\ast)^{\otimes 4}$, and if $\{{\bf v}_i\}$ is a basis for $V$ and $\{{\bf v}^i\}$ is the dual basis, then $$f \otimes g = \sum_{i,j,k,\ell} f_{ij}\,g_{k\ell}\, {\bf v}^i \otimes {\bf v}^j\otimes {\bf v}^k \otimes {\bf v}^\ell,$$where $f_{ij} = f({\bf v}_i,{\bf v}_j)$, $g_{k\ell} = g({\bf v}_k,{\bf v}_\ell)$, and ${\bf v}^i \otimes {\bf v}^j\otimes {\bf v}^k \otimes {\bf v}^\ell \in {\frak T}^{(0,4)}(V)$; note that the components of $f \otimes g$ are $(f\otimes g)({\bf v}_i,{\bf v}_j,{\bf v}_k,{\bf v}_\ell) = f({\bf v}_i,{\bf v}_j)\,g({\bf v}_k,{\bf v}_\ell) = f_{ij}g_{k\ell}$. It corresponds to that same expression seen as a linear combination of the ${\bf v}^i \otimes {\bf v}^j \otimes {\bf v}^k \otimes {\bf v}^\ell$, the classes of $({\bf v}^i,{\bf v}^j,{\bf v}^k,{\bf v}^\ell)$ in the quotient above; that expression is an element of $(V^\ast)^{\otimes 4}$. Maybe I shouldn't have been lazy and should have used a different notation for the classes until now; I'll gladly explain it all again if you have trouble following.
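If it helps, here is a small numpy sketch (my own illustration, with made-up data) checking both facts at once: the components of $f \otimes g$ are the products $f_{ij}g_{k\ell}$, and evaluating $f\otimes g$ on four vectors gives $f(v_1,v_2)\,g(v_3,v_4)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
f = rng.normal(size=(n, n))  # components f_ij of a (0,2) tensor, standard basis
g = rng.normal(size=(n, n))  # components g_kl of another (0,2) tensor

# Components of the (0,4) tensor f ⊗ g: (f ⊗ g)_ijkl = f_ij g_kl
fg = np.einsum('ij,kl->ijkl', f, g)

# Evaluate both sides on four random vectors
v1, v2, v3, v4 = rng.normal(size=(4, n))
lhs = np.einsum('ijkl,i,j,k,l->', fg, v1, v2, v3, v4)  # (f⊗g)(v1, v2, v3, v4)
rhs = np.einsum('ij,i,j->', f, v1, v2) * np.einsum('kl,k,l->', g, v3, v4)
assert np.isclose(lhs, rhs)
```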
2) I am confused about what a tensor component is. As I understand it, tensor components are the scalars that form a linear combination of basis tensors. In definition 1, I see books defining the tensor components as $T^{a_1,...,a_k}_{b_1,...,b_k} = T(a_1,...,a_k,b_1,...,b_k)$, where $\{a_i\}$ and $\{b_i\}$ are bases of the vector space and the covector space. For definition 2, I see tensor components written as:
\begin{align}
\sum_i \sum_j A_{ij}\, a_i\otimes b_j.
\end{align}
How do these components relate?
Components do depend on a choice of basis. The choice of notation used in your textbook is bad; we keep indices only on $T$, not on the vectors. I mean, one would write $$T^{i_1...i_r}_{\qquad j_1...j_s} \stackrel{\rm def.}{=} T({\bf v}^{i_1},...,{\bf v}^{i_r},{\bf v}_{j_1},...,{\bf v}_{j_s})$$instead. And with this notation, we'd have $$T = \sum_{i_1,...,i_r,j_1,...,j_s} T^{i_1...i_r}_{\qquad j_1...j_s} {\bf v}_{i_1}\otimes \cdots \otimes {\bf v}_{i_r}\otimes {\bf v}^{j_1}\otimes \cdots \otimes {\bf v}^{j_s}.$$With Einstein's summation convention, we'd only write $$T = T^{i_1...i_r}_{\qquad j_1...j_s} {\bf v}_{i_1}\otimes \cdots \otimes {\bf v}_{i_r}\otimes {\bf v}^{j_1}\otimes \cdots \otimes {\bf v}^{j_s}$$
with all the summations implied (and that's why index balance is good: if the same index appears twice, once upstairs and once downstairs, sum over it).
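For what it's worth, numpy's `einsum` implements precisely this convention: repeated index labels are summed over. A minimal sketch in the standard basis (data made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
T = rng.normal(size=(n, n))  # components T^i_j of a (1,1) tensor
x = rng.normal(size=n)       # components x^j of a vector

# Einstein convention: y^i = T^i_j x^j, the repeated index j is summed
y = np.einsum('ij,j->i', T, x)
assert np.allclose(y, T @ x)  # the same contraction as the matrix product
```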
When you ask how these things relate, the reasonable thing to do is to consider another basis $\{{{\bf v}_i}'\}$ with corresponding dual basis $\{{{\bf v}^i}'\}$, write $${T^{i_1...i_r}_{\qquad j_1...j_s}}' = T({{\bf v}^{i_1}}',...,{{\bf v}^{i_r}}',{{\bf v}_{j_1}}',...,{{\bf v}_{j_s}}'),$$and see how we can express this in terms of the "old" components $T^{i_1...i_r}_{\qquad j_1...j_s}$.
If you're still alive after all that index juggling, you'll certainly pardon me for illustrating the relation only in the $(1,1)$ case. Write ${{\bf v}_j}' = \sum_i \alpha_{ij} {\bf v}_i$. It is an easy linear algebra exercise to check that ${{\bf v}^j}' = \sum_i \beta_{ij} {\bf v}^i$, where $(\beta_{ij})$ is the transpose of the inverse of $(\alpha_{ij})$, i.e., $\sum_i \beta_{ij}\alpha_{ik} = \delta_{jk}$. Then $${T^i_{\hspace{1ex} j}}'=T({{\bf v}^i}',{{\bf v}_j}') = \sum_{k, \ell }\beta_{ki}\alpha_{\ell j}T({\bf v}^k,{\bf v}_\ell) = \sum_{k,\ell} \beta_{ki}\alpha_{\ell j}T^k_{\hspace{1ex}\ell}.$$
If we had more entries, then more $\alpha$'s and $\beta$'s would pop out. You can see that in any physics book, for instance; I like A Short Course in General Relativity, by Foster & Nightingale. By the way, it is customary to denote the entries $\beta_{ki}$ by writing the indices upstairs, $\alpha^{ki} \equiv \beta_{ki}$. In this notation, and using Einstein's convention, we'd have simply $${T^i_{\hspace{1ex} j}}' = \alpha^{ki}\alpha_{\ell j}T^k_{\hspace{1ex} \ell}.$$
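Here is a numpy check of the $(1,1)$ transformation law (a sketch with random data, assuming $\alpha$ invertible); in matrix form it is just the familiar similarity transform $T' = \alpha^{-1} T \alpha$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
T = rng.normal(size=(n, n))      # old components T^k_l
alpha = rng.normal(size=(n, n))  # change of basis, v'_j = sum_i alpha_ij v_i (invertible)
beta = np.linalg.inv(alpha.T)    # beta_ij, so that sum_i beta_ij alpha_ik = delta_jk

# T'^i_j = sum_{k,l} beta_ki alpha_lj T^k_l
T_new = np.einsum('ki,lj,kl->ij', beta, alpha, T)

# Equivalently, in matrix form: T' = alpha^{-1} T alpha
assert np.allclose(T_new, np.linalg.inv(alpha) @ T @ alpha)
```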
3) I was trying to work out a basic example of an endomorphism $\mathbb{R}^2 \longrightarrow \mathbb{R}^2$ by using the two definitions but couldn't end up with the same set of components...
When we write components in a basis, they're real numbers, and the discussion above does not quite apply when the codomain of the bilinear map isn't $\Bbb R$. The standard bridge is the basis-free isomorphism $\operatorname{End}(V) \cong {\frak T}^{(1,1)}(V)$: an endomorphism $A$ corresponds to the tensor $T(\varphi,{\bf v}) = \varphi(A{\bf v})$, and then $T^i_{\hspace{1ex} j} = {\bf v}^i(A{\bf v}_j)$ is exactly the $(i,j)$ entry of the matrix of $A$, so both definitions produce the same components.
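To make that concrete, here is a small numpy sketch of the correspondence above (my illustration; the matrix $A$ is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # an endomorphism of R^2, written in the standard basis

n = A.shape[0]
E = np.eye(n)  # standard basis vectors e_j (as rows); the dual basis has the same components

# Components of the (1,1) tensor T(phi, v) = phi(A v):
# T^i_j = e^i(A e_j), which is exactly the (i, j) entry of A
T = np.array([[E[i] @ (A @ E[j]) for j in range(n)] for i in range(n)])
assert np.allclose(T, A)
```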
Given an element $\alpha \in V \otimes W$, we know it can be written as $\sum_{i=1}^k v_i \otimes w_i$ for some $v_i \in V$, $w_i \in W$, and $k \in \mathbb{N}$. Consider such a presentation of $\alpha$ in which $k$ is minimal. Let us show that in such a presentation the vectors $v_1,\dots,v_k$ must be linearly independent, and similarly the vectors $w_1,\dots,w_k$ must be linearly independent.
If $v_1,\dots,v_k$ are linearly dependent then without loss of generality (by reordering the $(v_i,w_i)$) we can assume that $v_1 = \sum_{i=2}^k a_i v_i$ for some $a_2,\dots,a_k$. But then
$$ \alpha = \sum_{i=1}^k v_i \otimes w_i = v_1 \otimes w_1 + \sum_{i=2}^k v_i \otimes w_i = \left( \sum_{i=2}^k a_i v_i \right) \otimes w_1 + \sum_{i=2}^k v_i \otimes w_i \\
= \sum_{i=2}^k a_i (v_i \otimes w_1) + \sum_{i=2}^k v_i \otimes w_i = \sum_{i=2}^k v_i \otimes (a_i w_1) + \sum_{i=2}^k v_i \otimes w_i = \sum_{i=2}^k v_i \otimes (w_i + a_i w_1)$$
contradicting the minimality of $k$. The argument for the independence of the $(w_i)_{i=1}^k$ is exactly the same.
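If a numerical check helps, here is a small numpy sketch of exactly this collapse for $k = 3$ (all data made up): when $v_1$ depends on $v_2, v_3$, the three-term presentation rewrites as a two-term one.

```python
import numpy as np

rng = np.random.default_rng(4)
v2, v3 = rng.normal(size=(2, 4))      # vectors in V = R^4
w1, w2, w3 = rng.normal(size=(3, 5))  # vectors in W = R^5
a2, a3 = 1.5, -2.0
v1 = a2 * v2 + a3 * v3                # v1 is a linear combination of v2, v3

# alpha = v1⊗w1 + v2⊗w2 + v3⊗w3, with each v⊗w represented as an outer product
alpha = np.outer(v1, w1) + np.outer(v2, w2) + np.outer(v3, w3)

# The proof's shorter presentation: sum over i = 2, 3 of v_i ⊗ (w_i + a_i w1)
shorter = np.outer(v2, w2 + a2 * w1) + np.outer(v3, w3 + a3 * w1)
assert np.allclose(alpha, shorter)
```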
Thanks for the hint in the comments. It's clearer now what the answer should be:
Applied to the case in the question, the change of basis matrix is $\small\begin{bmatrix}3&4&-1\\0&3&7\\1&3&0.5\end{bmatrix}$, and its inverse is, to one decimal place, $\small\begin{bmatrix}0.7&0.2&-1.1\\-0.3&-0.1&0.8\\0.1&0.2&-0.3\end{bmatrix}$. The vectors $v$ and $w$ in the new coordinate system are
$v =\small\begin{bmatrix}0.7&0.2&-1.1\\-0.3&-0.1&0.8\\0.1&0.2&-0.3\end{bmatrix}\begin{bmatrix}1\\2\\3\end{bmatrix} =\begin{bmatrix}-2.3\\1.9\\-0.5\end{bmatrix}$ and $w=\small\begin{bmatrix}0.7&0.2&-1.1\\-0.3&-0.1&0.8\\0.1&0.2&-0.3\end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix}=\begin{bmatrix}0.7\\-0.3\\0.1\end{bmatrix}$ (the products are computed with the exact inverse and then rounded).
Therefore,
$$\begin{align}\large v\otimes w&=\left(-2.3\,\tilde x + 1.9\,\tilde y -0.5\,\tilde z\right)\otimes \left(0.7\,\tilde x -0.3\,\tilde y + 0.1\,\tilde z\right)\\[2ex]&=-1.61\;\tilde x\otimes \tilde x + 0.69\;\tilde x\otimes \tilde y -0.23\;\tilde x\otimes \tilde z\\&\quad+1.33\;\tilde y\otimes \tilde x -0.57\;\tilde y\otimes \tilde y+ 0.19\;\tilde y\otimes \tilde z\\&\quad-0.35\;\tilde z\otimes \tilde x +0.15\;\tilde z\otimes \tilde y-0.05\;\tilde z\otimes \tilde z\end{align}$$
So what's the point?
Starting from the definition of the tensor product of two vector spaces ($V\otimes W$, here with the same basis on both factors), we end up computing the outer product of the two coordinate vectors:
$$\large v\otimes_o w=\small \begin{bmatrix}-2.3\\1.9\\-0.5\end{bmatrix}\begin{bmatrix}0.7&-0.3&0.1\end{bmatrix}=\begin{bmatrix}-1.61&0.69&-0.23\\1.33&-0.57&0.19\\-0.35&0.15&-0.05\end{bmatrix}$$
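For the skeptical, a short numpy sketch reproducing the numbers above (rounding $v$ and $w$ to one decimal first, as done in the computation):

```python
import numpy as np

A = np.array([[3, 4, -1],
              [0, 3, 7],
              [1, 3, 0.5]])  # change of basis matrix from the question
A_inv = np.linalg.inv(A)

v = np.round(A_inv @ np.array([1, 2, 3]), 1)  # [-2.3,  1.9, -0.5]
w = np.round(A_inv @ np.array([1, 0, 0]), 1)  # [ 0.7, -0.3,  0.1]

print(np.outer(v, w))  # reproduces the 3x3 component matrix above
```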
This connects this post to this more general question.