What you've stumbled upon is called the "Gibbs paradox", and the resolution is to divide the phase-space volume used in statistical-mechanical entropy calculations by the permutation factor $N!$ for identical particles, which reduces the number of distinct configurations.
Since the temperature is unchanged in the process, the momentum distribution of the atoms is unimportant: it is the same before and after, and the entropy change is entirely spatial, as you realized. The volume of configuration space for the left part is:
${V_1^N \over N!}$
and for the right part is:
${V_2^N\over N!}$
And the total volume of the $2N$-particle configuration space, with the barrier in place, is:
${(V_1 V_2)^N \over (N!)^2}$
When you lift the barrier, the spatial volume of configuration space becomes:
${(V_1 + V_2)^{2N} \over (2N)!}$
When $V_1$ and $V_2$ are equal, you would naively expect zero entropy gain. But you do gain a tiny bit of entropy by removing the wall: before you removed it, the numbers of particles on the left and on the right were exactly equal, whereas now they can fluctuate a little. This extra entropy is negligible in the thermodynamic limit, as you can see:
${(2V)^{2N}\over (2N)!} = {2^{2N}(N!)^2\over (2N)!}{V^{2N}\over (N!)^2}$
So the extra entropy from lifting the barrier is equal to:

$ -\log \left({(2N)!\over 2^{2N}(N!)^2}\right)$

You might recognize the thing inside the log: it's the probability that a symmetric $\pm 1$ random walk returns to the origin after $2N$ steps, i.e. the biggest term of Pascal's triangle at stage $2N$, normalized by the sum of all the terms of Pascal's triangle at that stage. From the Brownian motion identity (or, equivalently, directly from Stirling's formula), you can estimate its size as ${1\over \sqrt{\pi N}}$, so the extra entropy grows like ${1\over 2}\log(\pi N)$. It is sub-extensive: divided by $N$, it vanishes in the thermodynamic limit.
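This estimate is easy to check numerically. A small Python sketch (the values of $N$ are arbitrary; log-gamma is used so the factorials don't overflow):

```python
import math

# Extra entropy from lifting the barrier between two equal halves:
# -log( (2N)! / (2^(2N) (N!)^2) ), computed with log-gamma to avoid overflow.
def extra_entropy(N):
    return -(math.lgamma(2 * N + 1) - 2 * N * math.log(2) - 2 * math.lgamma(N + 1))

for N in (10, 1000, 100000):
    exact = extra_entropy(N)
    stirling = 0.5 * math.log(math.pi * N)  # leading Stirling estimate
    print(N, exact, stirling, exact / N)    # exact/N -> 0: sub-extensive
```

The last column shows the entropy per particle going to zero, which is the precise sense in which the gain is thermodynamically negligible.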
The entropy change in the general case is then exactly given by the logarithm of the ratio of the two configuration space volumes after and before:

$e^{\Delta S} = { (V_1 + V_2)^{2N} \over (2N)! } { (N!)^2 \over V_1^N V_2^N } = { ({V_1 + V_2 \over 2})^{2N} \over V_1^N V_2^N } {2^{2N}(N!)^2 \over (2N)!}$

Ignoring the thermodynamically negligible last factor, the macroscopic change in entropy, the part proportional to $N$, is:

$\Delta S = N\log \left({ (V_1 + V_2)^2 \over 4 V_1 V_2 }\right)$

which is non-negative (by the AM-GM inequality, with equality when $V_1 = V_2$), and up to a sign it is as you calculated.
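You can sanity-check both the sign and the extensive part by comparing the exact $\Delta S$ (the log of the volume ratio, via log-gamma) against $N\log\left((V_1+V_2)^2/4V_1V_2\right)$; a Python sketch with made-up values of $N$, $V_1$, $V_2$:

```python
import math

def delta_S_exact(N, V1, V2):
    # log of (after / before) configuration-space volumes:
    # after  = (V1+V2)^(2N) / (2N)! ,  before = V1^N V2^N / (N!)^2
    log_after = 2 * N * math.log(V1 + V2) - math.lgamma(2 * N + 1)
    log_before = N * math.log(V1 * V2) - 2 * math.lgamma(N + 1)
    return log_after - log_before

def delta_S_macro(N, V1, V2):
    return N * math.log((V1 + V2) ** 2 / (4 * V1 * V2))

N, V1, V2 = 10**6, 1.0, 3.0
print(delta_S_exact(N, V1, V2))   # positive: entropy increases on mixing
print(delta_S_macro(N, V1, V2))   # differs only by the sub-extensive O(log N) term
```

The difference between the two is exactly the $\log\left(2^{2N}(N!)^2/(2N)!\right) \approx {1\over 2}\log(\pi N)$ fluctuation term discussed above.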
Additional comments
You might think it is weird to gain a little bit of entropy just from the fact that, before you lift the wall, you know the particle numbers on each side are exactly $N$, even if that entropy is subextensive. Wouldn't that mean that when you lower the wall, you reduce the entropy by a tiny subextensive amount, by preventing mixing of the right and left halves? Even if the entropy decrease is tiny, it would still violate the second law.
There is no entropy decrease, because when you lower the barrier, you don't know how many molecules are on the left and how many are on the right. If you add the entropy of ignorance to the entropy of the lowered wall system, it exactly removes the subextensive entropy loss. If you try to find out how many molecules are on the right vs how many are on the left, you produce more entropy in the process of learning the answer than you gain from the knowledge.
Best Answer
After the first hour, the first train has moved 45 km and the second train is just starting to move, so the distance between the two trains is 45 km.
In the next hour, the second train moves 60 km while the first train moves only 45 km, so the second train catches up by 15 km. The distance between the two trains is now 30 km.
Repeat this for another two hours and the second train will have caught up with the first train.
The best way to see this is to tabulate the distance travelled by each train per hour, or to sketch the displacement-time graph for each train. The idea of 'relative velocity' is a more abstract way to reason about the above situation, although it can be very useful in certain problems.
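The hour-by-hour bookkeeping is easy to automate; a small Python sketch (the 45 km/h and 60 km/h speeds and the one-hour head start are taken from the answer above):

```python
# First train: 45 km/h with a 1-hour head start; second train: 60 km/h.
# Track the gap hour by hour until the second train catches up.
head_start_gap = 45.0          # km, gap at the end of the first hour
closing_speed = 60.0 - 45.0    # km/h, the relative velocity

gap = head_start_gap
hours_after_start = 0
while gap > 0:
    gap -= closing_speed
    hours_after_start += 1
print(hours_after_start)  # 3 hours after the second train starts moving
```

The loop is just the relative-velocity argument in disguise: 45 km of gap closed at 15 km/h takes 3 hours.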