Let $\mathcal{Q} = \mathcal{Q}(V)$ be the set of quadratic forms over the vector space $V$, and let $\mathcal{B} = \mathcal{B}(V)$ be the set of symmetric bilinear forms over the same space.
Assuming that $2$ is invertible in the ground field $\mathbb{k}$, i.e. $\operatorname{char} \mathbb{k} \neq 2$, there is a map $\alpha: \mathcal{Q} \to \mathcal{B}$ that takes a quadratic form $q$ to the symmetric bilinear form $\alpha(q)$ defined by
$$
\alpha(q)(v, w) = \tfrac12 \bigl( q(v+w)-q(v)-q(w) \bigr).
$$
Check that it's symmetric and bilinear! In the other direction, the map
$\beta: \mathcal{B} \to \mathcal{Q}$ takes a symmetric bilinear form $b$ and produces the quadratic form $\beta(b)$, defined by
$$
\beta(b)(v) = b(v,v).
$$
Check that it's a quadratic form.
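As a quick sanity check (a sympy sketch on $\mathbb{k}^2$; the coefficient names `a11, a12, a22` are mine), one can verify symbolically that $\alpha(q)$ is symmetric and bilinear, and that restricting it to the diagonal gives something homogeneous of degree $2$:

```python
import sympy as sp

# Generic quadratic form q(v) = a11*v1^2 + a12*v1*v2 + a22*v2^2 on k^2.
x1, x2, y1, y2, u1, u2, s, t = sp.symbols('x1 x2 y1 y2 u1 u2 s t')
a11, a12, a22 = sp.symbols('a11 a12 a22')

def q(v):
    return a11*v[0]**2 + a12*v[0]*v[1] + a22*v[1]**2

def alpha_q(v, w):
    # polarization: alpha(q)(v, w) = (q(v + w) - q(v) - q(w)) / 2
    vw = (v[0] + w[0], v[1] + w[1])
    return sp.expand((q(vw) - q(v) - q(w)) / 2)

v, w, u = (x1, x2), (y1, y2), (u1, u2)

# symmetry: alpha(q)(v, w) = alpha(q)(w, v)
assert sp.expand(alpha_q(v, w) - alpha_q(w, v)) == 0

# linearity in the first slot:
# alpha(q)(s*v + t*w, u) = s*alpha(q)(v, u) + t*alpha(q)(w, u)
svtw = (s*v[0] + t*w[0], s*v[1] + t*w[1])
assert sp.expand(alpha_q(svtw, u) - s*alpha_q(v, u) - t*alpha_q(w, u)) == 0

# the diagonal restriction scales quadratically: b(t*v, t*v) = t^2 * b(v, v)
tv = (t*v[0], t*v[1])
assert sp.expand(alpha_q(tv, tv) - t**2 * alpha_q(v, v)) == 0
print("checks pass")
```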
Now you have to verify that these maps are mutually inverse, i.e. that
$\beta(\alpha(q)) = q$ for any $q \in \mathcal{Q}$ (which shows that $\alpha$ is injective and $\beta$ is surjective) and that
$\alpha(\beta(b)) = b$ for any $b \in \mathcal{B}$ (which shows that $\alpha$ is surjective and $\beta$ is injective).
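The two inverse identities can also be checked symbolically on $\mathbb{k}^2$ (a sympy sketch; the generic coefficient names are mine):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
a11, a12, a22 = sp.symbols('a11 a12 a22')   # generic quadratic coefficients
b11, b12, b22 = sp.symbols('b11 b12 b22')   # generic symmetric bilinear coefficients

def q(v):                      # a generic quadratic form on k^2
    return a11*v[0]**2 + a12*v[0]*v[1] + a22*v[1]**2

def b(v, w):                   # a generic symmetric bilinear form on k^2
    return b11*v[0]*w[0] + b12*(v[0]*w[1] + v[1]*w[0]) + b22*v[1]*w[1]

def alpha(q_):                 # polarization
    def bil(v, w):
        vw = (v[0] + w[0], v[1] + w[1])
        return sp.expand((q_(vw) - q_(v) - q_(w)) / 2)
    return bil

def beta(b_):                  # restriction to the diagonal
    return lambda v: b_(v, v)

v, w = (x1, x2), (y1, y2)

# beta(alpha(q)) = q
assert sp.expand(beta(alpha(q))(v) - q(v)) == 0
# alpha(beta(b)) = b
assert sp.expand(alpha(beta(b))(v, w) - b(v, w)) == 0
print("mutually inverse on k^2")
```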
Without positivity, this is not even true for $\mathbb{k}=\mathbb{R}$. See this wonderful post.
We show some more general results. It has been shown by P. Quinton that $(\cdot, \cdot):=\beta_{\varphi}$ is bi-additive. In particular it follows that $$(-u, v)=(u,-v)=-(u,v).$$ Consider $$\begin{cases} (\lambda(u+v), \lambda(u+v))=\lambda^2(u+v, u+v) & (1)\\ (\lambda(u-v), \lambda(u-v))=\lambda^2(u-v,u-v) & (2)\end{cases}$$
Subtracting $(2)$ from $(1)$ and expanding both sides bi-additively, we get $4(\lambda u, \lambda v) = 4\lambda^2(u, v)$, that is, $$(\lambda u, \lambda v)=\lambda^2(u, v),$$
where $u, v$ are not necessarily equal.
Now we show $(\lambda u, u)=\lambda (u, u)$. This follows from $$((1+\lambda)u, (1+\lambda)u)=(1+\lambda)^2(u, u).$$
Expanding the left side bi-additively and using $(\lambda u, \lambda u)=\lambda^2(u,u)$ from the previous identity, cancellation leaves $2(\lambda u, u)=2\lambda (u, u)$.
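Both cancellations are pure symbol-pushing, so they can be checked mechanically. In this sympy sketch (symbol names are mine), each pairing value is an independent symbol, since only bi-additivity and symmetry of $(\cdot,\cdot)$ are used in the expansions:

```python
import sympy as sp

lam = sp.symbols('lam')
uu, uv, vv = sp.symbols('uu uv vv')        # (u,u), (u,v), (v,v)
Luu, Luv, Lvv = sp.symbols('Luu Luv Lvv')  # (lam*u,lam*u), (lam*u,lam*v), (lam*v,lam*v)
Lu_u = sp.symbols('Lu_u')                  # (lam*u, u)

# (1) and (2), with both sides expanded bi-additively (using symmetry)
lhs1, rhs1 = Luu + 2*Luv + Lvv, lam**2 * (uu + 2*uv + vv)
lhs2, rhs2 = Luu - 2*Luv + Lvv, lam**2 * (uu - 2*uv + vv)

# (1) - (2) forces (lam*u, lam*v) = lam^2 (u, v)
sol = sp.solve(sp.Eq(lhs1 - lhs2, rhs1 - rhs2), Luv)[0]
assert sp.expand(sol - lam**2 * uv) == 0

# ((1+lam)u, (1+lam)u) = (1+lam)^2 (u,u), with (lam*u, lam*u) = lam^2 (u,u)
# already substituted on the left, forces (lam*u, u) = lam (u, u)
lhs3, rhs3 = uu + 2*Lu_u + lam**2 * uu, (1 + lam)**2 * uu
sol3 = sp.solve(sp.Eq(lhs3, rhs3), Lu_u)[0]
assert sp.expand(sol3 - lam * uu) == 0
print("both identities verified")
```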
Now we can show that in the very special case $\dim V = 1$, the form $\beta_{\varphi}$ is in fact bilinear. Since the post linked above constructs counterexamples for $\dim V=2$, the statement fails for all $\dim V\ge 2$.
Let $V=\text{Span}\{e\}$ and assume $a\not=0$; then $$(ae,be)=a^2\left(e, \tfrac{b}{a}e\right)=a^2\left(\tfrac{b}{a}e, e\right)=a^2\cdot\tfrac{b}{a}(e,e)=ab(e,e)$$ (the case $a=0$ follows from bi-additivity). In other words, $(\cdot, \cdot)$ is completely determined by the single value $(e,e)$.
Best Answer
There are two completely different and largely unrelated phenomena at play here.
1) It is well-known that if $V$ is a finite-dimensional vector space, then $V$ and $V^*$ have the same dimension, so they are isomorphic as vector spaces. But it is also well-known that they are not canonically isomorphic. One way to construct an isomorphism is to choose a basis $(e_i)$ of $V$, then consider the dual basis $(e_i^*)$ of $V^*$, and define $V\to V^*$ by sending $e_i$ to $e_i^*$. This is of course strongly dependent on the choice of the basis $(e_i)$. In general, an isomorphism $V\to V^*$ is the same as a nondegenerate bilinear form on $V$. Usually, we like it to be either symmetric (which defines a quadratic form) or anti-symmetric (say in characteristic not $2$). The dual basis trick corresponds to choosing a quadratic form that can be diagonalized as $\langle 1,\dots ,1\rangle$ (and the basis $(e_i)$ corresponds to the diagonalization basis).
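The basis dependence can be seen concretely (a small numpy sketch; all names here are mine). Writing functionals as row vectors, the map $e_i \mapsto e_i^*$ sends $v$ to $v^T(EE^T)^{-1}$, where $E$ has the $e_i$ as columns, i.e. it pairs with $v$ under the bilinear form whose Gram matrix in the basis $(e_i)$ is the identity. Two bases give two genuinely different isomorphisms:

```python
import numpy as np

def dual_basis_iso(E):
    """Map V -> V* induced by e_i |-> e_i^*, for the basis given by the
    columns of E.  Functionals are row vectors, so the map is
    v |-> v^T (E E^T)^{-1}: pairing with v under the bilinear form whose
    Gram matrix in the basis (e_i) is the identity."""
    G = np.linalg.inv(E @ E.T)
    return lambda v: v @ G   # v is a 1-D array; the result is a row vector

E1 = np.eye(2)                         # standard basis of R^2
E2 = np.array([[1.0, 1.0],
               [0.0, 1.0]])            # a different basis
phi1, phi2 = dual_basis_iso(E1), dual_basis_iso(E2)

v = np.array([1.0, 2.0])
print(phi1(v))   # [1. 2.]
print(phi2(v))   # a different functional: the isomorphism depends on the basis

# Sanity check: each map does send e_i to e_i^* (the rows of E^{-1}).
for E, phi in [(E1, phi1), (E2, phi2)]:
    assert np.allclose(np.array([phi(E[:, i]) for i in range(2)]),
                       np.linalg.inv(E))
```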
2) For any vector space, of any dimension, over any field, if $U\subset V$ is a subspace, then the canonical restriction map $V^*\to U^*$ is surjective (assuming the axiom of choice if the dimension is infinite). This is for instance because we can take a complementary subspace $W$ with $V=U\oplus W$, and extend any linear map $U\to k$ to $V$ by declaring it to be $0$ on $W$. A more fancy answer is to say that it is because all vector spaces are injective modules.
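The complement construction can be made concrete (a numeric sketch; the specific vectors and the value $f(u_1)=5$ are mine): take $U = \operatorname{Span}\{u_1\} \subset \mathbb{R}^3$, pick a complement $W$, write $v = a\,u_1 + w$ with $w \in W$, and set $F(v) = a\, f(u_1)$:

```python
import numpy as np

# U = span{u1} inside V = R^3; f is the functional on U with f(u1) = 5.
u1 = np.array([1.0, 2.0, 0.0])

# Choose a complement W with V = U (+) W (spanned by w1, w2; any choice works,
# and different choices give different extensions).
w1 = np.array([0.0, 1.0, 0.0])
w2 = np.array([0.0, 0.0, 1.0])

# Extend f to F on V by declaring F = 0 on W: write v = a*u1 + b*w1 + c*w2
# and set F(v) = a * f(u1).  In coordinates, solve for (a, b, c).
B = np.column_stack([u1, w1, w2])

def F(v):
    a, _, _ = np.linalg.solve(B, v)
    return a * 5.0

# F restricts to f on U:
assert np.isclose(F(3 * u1), 15.0)
# and vanishes on the chosen complement:
assert np.isclose(F(2 * w1 - w2), 0.0)
print("restriction of F to U equals f")
```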
The huge difference between those two observations is that the first one relates spaces with their dual (it is a "mixed" map if you want) while the second one deals purely with dual spaces (by comparison we could say it is "homogeneous").