In some degenerate cases there may be no such unique point (for instance, if all the lines are parallel). In the general case, however, there is a single solution.
I assume you're trying to solve the more general problem in which the lines are not required to intersect exactly (otherwise there's a much simpler solution than least squares).
Derivation:
You say that every line is represented by two points. Let's instead work in the convention where a line is represented by one point and a direction vector, which is just the difference of those two points. That is, instead of describing a line by points $\mathbf{a}$ and $\mathbf{b}$, we'll describe it by a point $\mathbf{a}$ and a vector $\mathbf{d}$, where $\mathbf{d}=\mathbf{b}-\mathbf{a}$.
Our point (which we're trying to find) is $\mathbf{c}$.
The distance from this point to a line is:
$H=\frac{\|(\mathbf{c}-\mathbf{a})\times\mathbf{d}\|}{\|\mathbf{d}\|}$
Using the identity $(\mathbf{a}\times\mathbf{b})\cdot(\mathbf{a}\times\mathbf{b})=\|\mathbf{a}\|^2\|\mathbf{b}\|^2-(\mathbf{a}\cdot\mathbf{b})^2$
we have:
$H^2=\frac{\|\mathbf{c}-\mathbf{a}\|^2\|\mathbf{d}\|^2-((\mathbf{c}-\mathbf{a})\cdot\mathbf{d})^2}{\|\mathbf{d}\|^2}$
$H^2 = \|\mathbf{c}-\mathbf{a}\|^2-\frac{((\mathbf{c}-\mathbf{a})\cdot\mathbf{d})^2}{\|\mathbf{d}\|^2}$
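As an aside, here is a quick numerical check (illustrative only, on randomly generated points) that the cross-product form of $H$ and the projected form of $H^2$ agree:

```python
import numpy as np

# Illustrative check that both expressions for the distance agree.
rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))  # random line points a, b and query point c
d = b - a
H = np.linalg.norm(np.cross(c - a, d)) / np.linalg.norm(d)
H2 = (c - a).dot(c - a) - ((c - a).dot(d)) ** 2 / d.dot(d)
assert np.isclose(H * H, H2)
```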
The sum of squared distances from the point $\mathbf{c}$ to all the lines is just the sum of the above expressions over all the lines. The problem is to minimize this sum. The sum depends on the variable $\mathbf{c}$ (which is really 3 variables, the components of $\mathbf{c}$). This is a standard least squares problem, which generally has a single solution (unless there's a degeneracy).
Solving the least squares problem for this specific case:
Since we want to find the $\mathbf{c}$ that minimizes this sum, its derivative with respect to $\mathbf{c}$ should be zero. In other words:
$\frac{d(H^2)}{d\mathbf{c}}=2(\mathbf{c}-\mathbf{a})-2\mathbf{d}\frac{(\mathbf{c}-\mathbf{a})\cdot\mathbf{d}}{\|\mathbf{d}\|^2}$
$0=\sum_{i=0}^m{\mathbf{c}-\mathbf{a}^{(i)}-\mathbf{d}^{(i)}\frac{(\mathbf{c}-\mathbf{a}^{(i)})\cdot\mathbf{d}^{(i)}}{\|\mathbf{d}^{(i)}\|^2}}$
This gives 3 equations (since it's a vector equation) with 3 unknowns (components of $\mathbf{c}$).
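Rearranging, this is the $3\times 3$ linear system $\big[\sum_i Q_i\big]\,\mathbf{c} = \sum_i Q_i\,\mathbf{a}^{(i)}$ with $Q_i = I - \mathbf{d}^{(i)}\mathbf{d}^{(i)\mathsf{T}}/\|\mathbf{d}^{(i)}\|^2$. Here is a minimal NumPy sketch that solves it (the function name and interface are my own):

```python
import numpy as np

def closest_point_to_lines(A, B):
    """Least-squares point c minimizing the sum of squared distances to
    the lines through a_i = A[i] with directions d_i = B[i] - A[i]."""
    M = np.zeros((3, 3))
    rhs = np.zeros(3)
    for a, b in zip(A, B):
        d = b - a
        # Q projects onto the plane orthogonal to the line direction d.
        Q = np.eye(3) - np.outer(d, d) / d.dot(d)
        M += Q
        rhs += Q @ a
    # M is singular in the degenerate cases (e.g. all lines parallel).
    return np.linalg.solve(M, rhs)

# Two lines along the x- and y-axes intersect at the origin:
A = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(closest_point_to_lines(A, B))  # ~ [0. 0. 0.]
```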
Since you want a point from the given points (based on your comments to Ross' answer), here is an $O(n \log n)$ time algorithm based on the idea of rotating by $45^{\circ}$ and using the equivalence with the Manhattan distance. This only works in the 2D plane.
Assume the points are $(p_n, q_n)$. You transform them to $(q_n - p_n, q_n + p_n) = (x_n , y_n)$ say.
Now we will see how to compute the $x$ component of the Manhattan distance for each point in $O(n \log n)$ time.
We will assume that $x_1 < x_2 < \cdots < x_n$; this can be arranged by sorting them. The algorithm can also be modified to handle the case where the values are not distinct, but for simplicity we assume they are unique.
Define $L_m = \sum_{j=1}^{m} x_j$ and $R_m = \sum_{j=m}^{n} x_j$. Note that $R_m = L_n - L_{m-1}$. (Define $L_{0} = 0$)
Note that the $L_i$ and $R_i$ can be computed in $O(n)$ time, by looping through the array and maintaining the cumulative sum.
Now for a given $x_k$ we have that $\sum_{j=1}^{n} |x_k - x_j| = \sum_{j=1}^{k-1} (x_k - x_j) + \sum_{j=k+1}^{n} (x_j - x_k) = R_{k+1} - L_{k-1} - (n-2k+1)x_k$
(This is where we used the distinctness assumption, but can be modified easily to deal with it).
Do the same with the $y_k$.
Thus, after the initial sort, we can compute the sum of (Manhattan) distances from each point to all the other points in $O(n)$ total time.
Finding the minimum among them is another $O(n)$ step (or one can just keep track of it while computing the distances).
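Here is a minimal Python sketch of the whole procedure (the function names are my own); it sorts once, builds prefix sums, and then evaluates each point's total in $O(1)$, so ties are also handled correctly:

```python
def sum_abs_diffs(values):
    """For each value v_i, return sum_j |v_i - v_j|, in O(n log n)
    via one sort plus prefix sums."""
    n = len(values)
    order = sorted(range(n), key=values.__getitem__)
    xs = [values[i] for i in order]
    prefix = [0]
    for x in xs:
        prefix.append(prefix[-1] + x)  # prefix[m] = xs[0] + ... + xs[m-1]
    out = [0] * n
    for rank, i in enumerate(order):
        k = rank + 1  # 1-based rank in sorted order
        left = k * xs[rank] - prefix[k]                       # terms with j <= k
        right = (prefix[n] - prefix[k]) - (n - k) * xs[rank]  # terms with j > k
        out[i] = left + right
    return out

def best_point(points):
    """Return the input point whose total Manhattan distance to the
    others (in the rotated coordinates) is smallest."""
    xs = [q - p for p, q in points]
    ys = [q + p for p, q in points]
    totals = [a + b for a, b in zip(sum_abs_diffs(xs), sum_abs_diffs(ys))]
    return points[min(range(len(points)), key=totals.__getitem__)]
```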
Best Answer
I should apologize for my earlier rude and incorrect comment.
Here is a rather tedious proof that the median solves $\min_x f(x)$ ($f$ is defined below):
I need some notation: let the set of points be $d_1,\dots,d_n$, let $\Omega = \{d_k\}_k$ (some points may be repeated), and define $f(x) = \sum_k |x-d_k|$. If $R$ is a relation on $\mathbb{R}$, let $I_R(x) = \{ k \,|\, x \mathbin{R} d_k \}$ (e.g., $I_=(x)$ is the set of indices $k$ such that $d_k = x$).
We note that $f$ is convex, and we have $f(x) = \sum_{k \in I_<(x)}(d_k-x) + \sum_{k \in I_>(x)}(x-d_k)$. Since $f$ is continuous, we see that $f$ is piecewise linear (affine), and for $x \notin \Omega$, $f'(x) = |I_>(x)|-|I_<(x)|$. Furthermore, since $f$ is convex, $f'(x)$ is non-decreasing on $\Omega^C$.
For $x < \min_k d_k$, we have $f'(x) = -n$, and for $x > \max_k d_k$, we have $f'(x) = +n$. Hence the problem $\min_x f(x)$ has a minimum on $[\min_k d_k, \max_k d_k]$.
Let $\phi_1,...,\phi_m$ be the non-decreasing sequence of values that $f'$ takes on $\Omega^C$, and let $J_1,...,J_m$ be the corresponding open intervals. (We have $\phi_1 = -n, \phi_m = +n$, of course.) There are two cases to consider:
Case (1): For some $k$, we have $\phi_k = 0$, and for $x \in J_k$, $|I_>(x)|=|I_<(x)|$. It follows that $n$ is even, and since the median lies in $J_k$, we see that the median is a minimizer (in fact, $x$ is a minimizer iff $x \in \overline{J_k}$).
Case (2): For some $k$, we have $\phi_k < 0$ and $\phi_{k+1} > 0$. Let $\hat{x}$ be the point separating $J_k$ and $J_{k+1}$ (hence $\hat{x} \in \Omega$). Then $\hat{x}$ is the minimizer of $f$. For $x \in J_k$, we have $f'(x) = |I_>(\hat{x})|-|I_=(\hat{x})|-|I_<(\hat{x})|$, and for $x \in J_{k+1}$, we have $f'(x) = |I_>(\hat{x})|+|I_=(\hat{x})|-|I_<(\hat{x})|$, or, in other words, $$|I_=(\hat{x})| > \left| |I_>(\hat{x})|-|I_<(\hat{x})| \right|.$$

To see why the median is $\hat{x}$, let $l=|I_>(\hat{x})|$, $p=|I_<(\hat{x})|$ and $e=|I_=(\hat{x})|$ (with $l+e+p = n$, of course). The above shows that $e > |p-l|$.

If $l+e+p$ is even, let $k=\frac{1}{2}(e+p-l)>0$. Then we have $l+k = p+e-k$, from which it follows that the $(l+k)$th and $(l+k+1)$st (ordered) points are both equal to $\hat{x}$, and hence $\hat{x}$ is the median.

If $l+e+p$ is odd, let $k=\frac{1}{2}(e+p-l-1)\geq 0$. Then the $(l+k+1)$st (ordered) point is equal to $\hat{x}$, and hence the median is equal to $\hat{x}$.
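As a quick numerical sanity check of the result (illustrative only; standard library Python, with `median_low` picking a data point in the even case, where any point of the middle interval is a minimizer):

```python
import random
import statistics

def f(x, d):
    """Sum of absolute deviations of x from the data points d."""
    return sum(abs(x - dk) for dk in d)

# On random data (with repeats), the median attains the minimum of f.
random.seed(0)
for n in (9, 10):
    d = [random.randint(0, 20) for _ in range(n)]
    best = min(f(x, d) for x in d)
    assert f(statistics.median_low(d), d) == best
```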
Alternative:
One may use the convex subdifferential to obtain the same characterization. A quick computation shows that $\partial f(x) = \{|I_>(x)|-|I_<(x)|\}+[-|I_=(x)|,+|I_=(x)|]$, and $x$ is a minimizer iff $0 \in \partial f(x)$, or equivalently, $x$ is a minimizer iff $|I_=(x)| \geq \left| |I_>(x)|-|I_<(x)| \right|$. Then a little work (essentially the same as the cases above) shows that the median satisfies the latter characterization.
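For a concrete illustration (my own example, not from the original argument): take the points $1, 2, 2, 5$ and $x = 2$. Then $|I_<(2)| = 1$ (the point $5$), $|I_>(2)| = 1$ (the point $1$), and $|I_=(2)| = 2$, so $|I_=(2)| = 2 \geq \left||I_>(2)|-|I_<(2)|\right| = 0$, and the median $x = 2$ is a minimizer; indeed $f(2) = 4$, while, e.g., $f(1) = 6$ and $f(3) = 6$.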