When lenses are modeled (under the geometric-optics approximation), what is usually shown is a single, mathematical point source emitting equally in all directions (or in all forward directions), and the lens then transforms that point source to another point behind the lens.
When the rays come in parallel, as shown above (courtesy of Wikipedia), you can think of that as placing the point source far enough in front of the lens that it appears to be at $\infty$, like a star.
But rays don't have to come from $\infty$: most cameras have a focal range you can change in order to do what is called imaging a finite conjugate, shown below (courtesy of Wikipedia) for a single point source:
This is what image formation is: transforming some emitting point source in front of the lens to another point at the focal plane. If the detector is displaced from the focal plane, you can look at the rays in those pictures and see that they don't intersect there. In practice, this results in defocus blur, where a point is imaged to some larger "blob." It is essentially why out-of-focus objects appear blurry.
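The size of that "blob" can be estimated from the thin-lens equation and similar triangles. Here is a minimal sketch (the function names, focal length, and aperture numbers are made up for illustration; real lenses also have aberrations this ignores):

```python
def image_distance(u, f):
    """Image distance v from 1/u + 1/v = 1/f (real-is-positive convention)."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_diameter(u_point, u_focus, f, aperture):
    """Diameter of the defocus 'blob' on the detector for a point at distance
    u_point, when the detector is positioned to focus objects at u_focus.
    The cone of rays filling the aperture converges at v_point, so by similar
    triangles its width at the detector plane (v_focus) is
    aperture * |v_point - v_focus| / v_point."""
    v_focus = image_distance(u_focus, f)
    v_point = image_distance(u_point, f)
    return aperture * abs(v_point - v_focus) / v_point

f = 50.0          # focal length in mm (assumed)
D = 25.0          # aperture diameter in mm, i.e. f/2 (assumed)
u_focus = 2000.0  # camera focused at 2 m

print(blur_diameter(2000.0, u_focus, f, D))  # point on the focus plane: 0.0
print(blur_diameter(1000.0, u_focus, f, D))  # nearer point: nonzero blur
```

A point on the focus plane lands as a point (zero blur), while a point at any other distance spreads over a disk whose size grows with how far it is from the focus plane and with the aperture diameter.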
In practice a lens doesn't just image one point, as is usually shown in examples; it images an entire scene. That scene can be thought of as a three-dimensional volume of many, many point sources all emitting at the same time! Then, depending on the distance from the detector to the lens, there will exist a plane for which all point sources lying in that plane are imaged to points on the detector. That plane is the "in focus" part of the image, and all other points will be "blurred." In the picture above, the green line labeled "object" would be such a plane. Most cameras have the detector parallel to the lens, which creates focus planes essentially perpendicular to whatever direction you are pointing the camera. Tilt-shift lenses get around this by effectively "tilting" the detector plane, and therefore the "in focus" plane in the scene.
I think this is where your confusion lies: the examples only show how a lens works with a single point source (like a tiny LED), but real scenes are collections of essentially an infinite number of point sources.
The equation relating the object distance $u$, the image distance $v$, and the focal length is:
$$ \frac{1}{u} + \frac{1}{v} = \frac{1}{f} $$
where $f$ is what your question describes as the principal focus/focal point.
A bit of playing around with this should convince you that if $u = 2f$ then $v = 2f$, i.e. $u$ and $v$ are equal. If $u > 2f$ then $v < 2f$, so $u$ is greater than $v$. Conversely, if $f < u < 2f$ then $v > 2f$, so $u$ is less than $v$. So, as your data shows, $v$ can be greater or less than $u$.
However for the image to be real $u$ must be greater than $f$. If you put in a value of $u$ less than $f$ you'll find $v$ comes out negative, which with the sign convention I've used means the image is to the left of the lens i.e. a virtual image.
A (thin) lens is symmetric. By this I mean you can pick it up, turn it round, put it back, and the light rays don't change. Or to look at it another way: it doesn't matter whether the light travels through the lens from left to right or from right to left.
So parallel light travelling from infinity on the left of the lens will converge at the focal point on the right. Likewise parallel light travelling from the right of the lens will converge at the focal point on the left.
So in this sense a lens has two focal points - one on each side.