Knowing the set of orthogonal pairs of vectors fixes an inner product up to a constant positive factor. It is clear that such a factor doesn't change the concept of orthogonality. Conversely, we can prove that two inner products with the same set of orthogonal pairs must be related through a constant factor:
Suppose we are given two inner products $\langle -,-\rangle$ and $[-,-]$ that agree about which vectors are orthogonal. Let $v_1,\ldots,v_n$ be an orthonormal basis with respect to $\langle -,-\rangle$. By assumption $[-,-]$ agrees that the $v_i$ are mutually orthogonal, so knowing the value of $[v_i,v_i]$ for each $i$ fixes all of $[-,-]$ by bilinearity.
Now for $i\ne j$ we have
$$\langle v_i+v_j, v_i-v_j\rangle = \langle v_i,v_i\rangle - \langle v_i,v_j\rangle + \langle v_j,v_i\rangle - \langle v_j,v_j\rangle = 1 - 0 + 0 - 1 = 0$$
so $v_i+v_j$ and $v_i-v_j$ are orthogonal with respect to $\langle -,-\rangle$, and therefore also with respect to $[-,-]$. Expanding in the same way gives $[v_i,v_i]-[v_j,v_j] = [v_i+v_j, v_i-v_j] = 0$. Since $i$ and $j$ were arbitrary, all the $[v_i,v_i]$ must be equal, so $[v,w]=a\langle v,w\rangle$ for all $v$, $w$, where $a=[v_1,v_1]$.
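The rescaling argument above can be checked numerically. The sketch below (all names and the specific matrices are illustrative, not from the original) represents $\langle v,w\rangle$ by a Gram matrix $G$ and takes $[v,w] = a\,\langle v,w\rangle$, then verifies that the pairs $v_i+v_j$, $v_i-v_j$ are orthogonal under both products and that every $[v_i,v_i]$ equals the same constant $a$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: <v, w> = v @ G @ w for a random symmetric
# positive-definite G, and [v, w] = a * <v, w>, which plainly has
# the same orthogonal pairs.
A = rng.normal(size=(4, 4))
G = A @ A.T + 4 * np.eye(4)   # symmetric positive definite
a = 2.5

# An orthonormal basis v_1..v_n for <-,->: if G = L L^T (Cholesky),
# the columns of L^{-T} satisfy V.T @ G @ V = I.
L = np.linalg.cholesky(G)
V = np.linalg.inv(L).T        # V[:, i] is v_i

for i in range(4):
    for j in range(4):
        if i != j:
            vi, vj = V[:, i], V[:, j]
            # <v_i + v_j, v_i - v_j> = 1 - 1 = 0, and the same pair
            # is orthogonal under [-,-] as well.
            assert abs((vi + vj) @ G @ (vi - vj)) < 1e-9
            assert abs((vi + vj) @ (a * G) @ (vi - vj)) < 1e-9

# Hence all [v_i, v_i] agree; here each equals the scale factor a.
diag = [V[:, i] @ (a * G) @ V[:, i] for i in range(4)]
print(np.allclose(diag, a))   # True
```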
If the vector space is infinite-dimensional (such that there is not necessarily any orthogonal Hamel basis for all of it), this argument can be repeated for each finite-dimensional subspace to reach the same conclusion.
As for the second question, the Gram–Schmidt process shows that every inner product on $\mathbb R^n$ is the same up to a change of basis. But in situations where a preferred basis is imposed on us externally, it certainly makes sense to consider different inner products.

The most intuitively vivid examples come from differential geometry, where "custom" inner products are used to connect arbitrary, not-necessarily-rectilinear coordinate systems with geometric reality. For example, consider geographic coordinates on the Earth. If we have two lines on a map (of a not-too-large area) given by coordinates in degrees, it makes sense to ask for the angle between them. We can compute that angle using an inner product -- but because a degree of longitude covers less distance than a degree of latitude (except near the equator), this needs to be a non-standard inner product in order to give geometrically meaningful results.
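As a concrete sketch of the geographic example (the latitude value and vectors are made up for illustration): at latitude $\varphi$, moving one degree east covers a factor $\cos\varphi$ less ground than one degree north, so a geometrically faithful inner product on map vectors $(\Delta\text{lat}, \Delta\text{lon})$ uses the Gram matrix $\operatorname{diag}(1, \cos^2\varphi)$ rather than the identity:

```python
import numpy as np

phi = np.radians(60.0)               # latitude of our small map region
G = np.diag([1.0, np.cos(phi)**2])   # "custom" inner product for (lat, lon)

def angle(u, v, M):
    # Angle between u and v under the inner product <x, y> = x @ M @ y.
    cos_t = (u @ M @ v) / np.sqrt((u @ M @ u) * (v @ M @ v))
    return np.degrees(np.arccos(cos_t))

u = np.array([1.0, 0.0])   # due north, in (degrees lat, degrees lon)
v = np.array([1.0, 1.0])   # one degree north and one degree east

print(angle(u, v, np.eye(2)))  # naive map angle: 45 degrees
print(angle(u, v, G))          # geometric angle: ~26.57 degrees
```

At $60^\circ$ latitude a degree of longitude is only half as long as a degree of latitude, so the "diagonal" displacement actually points much closer to north than the raw coordinates suggest.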
You can pick a basis and define the inner product by specifying its values on pairs of basis vectors.
But this is somewhat unsatisfactory. In practice, how you go about writing down a meaningful inner product on $V$ depends on how $V$ itself is constructed. For example, if $V$ is, say, a space of real-valued functions on some measure space $(X, \mu)$, then a natural inner product to write down is
$$\langle f, g \rangle = \int_X f(x) g(x) \, d \mu$$
provided that this integral always converges.
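A quick numerical sketch of this $L^2$-type inner product, taking $X = [0,1]$ with Lebesgue measure and approximating the integral by the trapezoidal rule (the grid size and test functions are choices made here for illustration):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)

def l2_inner(f, g):
    # Trapezoidal approximation of the integral of f(x) * g(x) over [0, 1].
    y = f(x) * g(x)
    return np.sum((y[:-1] + y[1:]) / 2 * np.diff(x))

f = lambda t: np.sin(np.pi * t)
g = lambda t: np.cos(np.pi * t)

# sin(pi x) and cos(pi x) are orthogonal on [0, 1], and ||sin(pi x)||^2 = 1/2.
print(l2_inner(f, g))   # ~0
print(l2_inner(f, f))   # ~0.5
```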
Best Answer
In fact, the orthogonality relation specifies an inner product up to a positive scalar multiple. To see this, suppose $\langle \cdot, \cdot \rangle$ and $[ \cdot, \cdot ]$ are two inner products on a space $V$ with the same orthogonality relation. If $V = \lbrace 0 \rbrace$, the two inner products trivially coincide, so assume $V$ contains a non-zero vector.
Let $w \in V$ be non-zero, and define $k_w = \frac{\langle w, w \rangle}{[w, w]} \in (0, \infty)$. If $v = \lambda w$ for some scalar $\lambda$, then $\langle v, w \rangle = \lambda \langle w, w \rangle = k_w [v, w]$. Otherwise, if $v$ is linearly independent from $w$, then $$v - \frac{[v, w]}{[w, w]}w \perp w$$ under both inner products; in particular, $$0 = \left\langle v - \frac{[v, w]}{[w, w]}w, w \right\rangle \implies \langle v, w \rangle = \frac{\langle w, w \rangle}{[w, w]} [v, w] = k_w [v, w].$$ Thus $\langle v, w \rangle = k_w [v, w]$ for every $v$. It remains to check that $k_w$ does not depend on $w$. By the same argument applied to $k_v$, together with the symmetry of both inner products, we also have $\langle v, w \rangle = k_v [v, w]$, so $(k_w - k_v)[v, w] = 0$, and hence $k_v = k_w$ whenever $[v, w] \neq 0$. If instead $[v, w] = 0$ (with $v, w$ non-zero), then $[v + w, v] = [v, v] \neq 0$ and $[v + w, w] = [w, w] \neq 0$, so $k_v = k_{v+w} = k_w$ anyway. Therefore $k_w$ is a single constant: there exists $k \in (0, \infty)$ such that $$\langle v, w \rangle = k [v, w]$$ for all $v, w$, as required.
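The conclusion can be sanity-checked numerically. In the sketch below (the matrices and sample count are illustrative assumptions, not part of the proof), the two inner products are represented by Gram matrices $G$ and $H = G/k$, which share an orthogonality relation, and the ratio $k_w = \langle w, w\rangle / [w, w]$ is computed for many random non-zero $w$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two inner products with the same orthogonal pairs:
# <v, w> = v @ G @ w  and  [v, w] = v @ H @ w  with  H = G / k.
A = rng.normal(size=(5, 5))
G = A @ A.T + 5 * np.eye(5)   # symmetric positive definite
k = 3.0
H = G / k

# k_w = <w, w> / [w, w] for a sample of random non-zero vectors w.
ks = []
for _ in range(10):
    w = rng.normal(size=5)
    ks.append((w @ G @ w) / (w @ H @ w))

print(np.allclose(ks, k))   # True: every k_w equals the same constant k
```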