Assuming this is a homework exercise, I think you're making it much more complex than it is.
You're being asked (implicitly) to imagine that an input size of $100$ is large enough that all run-time contributions from terms that grow slower than $n\log n$ -- such as the time for the program to start up, the $O(n)$ time it takes to read the input, and so forth -- are insignificant compared to the $n\log n$ term. (This is not an unproblematic assumption -- see below -- but that may not be what the question asks you to philosophize about).
In other words, suppose you can ignore everything else. Then the running time is $f(n) = Cn\log n$ for some constant $C$. Given that $f(100)=0.03$, you can find $C$ simply by plugging in $100$ to get
$$C\cdot 100\log(100) = 0.03 \quad\text{ and therefore }\quad C = \frac{0.03}{200}=0.00015$$
Now you just use that $C$ to find $f(10000)=0.00015\times 10000\times\log(10000)=0.00015\times 10000\times 4=6$.
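A quick sanity check of this arithmetic, modeling the running time as $C\,n\log_{10}n$ (a sketch -- as noted below, the base of the log doesn't matter, since it only rescales $C$):

```python
import math

def f(n, C):
    # Model running time as C * n * log10(n); any other log base
    # would just be absorbed into C.
    return C * n * math.log10(n)

# Solve f(100) = 0.03 for C:  C * 100 * 2 = 0.03
C = 0.03 / (100 * math.log10(100))
print(C)            # ≈ 0.00015
print(f(10000, C))  # ≈ 6 seconds
```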
But that is not the result you're expecting to get -- why would you expect $f(10000)$ to be $20$?
It's not a result of using a base-10 logarithm, because any other logarithm would just show up as a multiplicative factor in $C$ that is canceled out when we plug in concrete numbers later.
You certainly shouldn't expect $f(100)=0.03$ and $f(200)=0.06$, because that amounts only to a linear scaling of the runtime with no logarithmic factor at all. (It's possible to get this behavior with a constant term, of course, but then your problem is underspecified).
So perhaps it's not homework? Are the known good values ones you have measured in practice? In that case it is very likely that what you see is just measurement uncertainty -- $0.03$ seconds is a very short interval over which to time a program, and you really have no business extrapolating it by a factor of $100$. How precise are your measurements even for such short intervals?
And even if your timings were exact, small inputs may not be representative. For example, a problem of size $100$ will likely stay within the L1 cache, whereas at size $10000$ you begin to pay a price in cache misses that simply doesn't show up at all in the $n=100$ measurement.
Remember that big-O notation is about how the resource usage grows when $n$ is very large, and a nice big-O analysis can hide any amount of shenanigans and erratic behavior for small $n$, as long as there is some size above which the complexity behaves as the big-O notation claims. There's no guarantee about how large this threshold may be; even 10000 may be too small a problem size to see the big-O complexity in action.
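If you do want to measure, here is a rough sketch of how you might time an $n\log n$ operation more defensibly, using Python's `timeit` with `sorted` as a stand-in (the exact ratio you see will depend entirely on your machine):

```python
import random
import timeit

def measure(n, repeats=5):
    # Time an O(n log n) operation (sorting) on a size-n input,
    # taking the minimum over several repeats to reduce noise.
    data = [random.random() for _ in range(n)]
    return min(timeit.repeat(lambda: sorted(data), number=10, repeat=repeats))

t_small = measure(100)
t_large = measure(10000)
# For ideal C*n*log n behavior this ratio would be about
# (10000 * log 10000) / (100 * log 100) = 200; in practice cache effects,
# interpreter overhead, and timer resolution all push it off that value.
print(t_large / t_small)
```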
Here is a more general framework for your question. Suppose you have functions $f(x)$ and $g(x)$. You compare their growth rates by looking at $$\lim_{x\to\infty}\frac{f(x)}{g(x)}$$
In your problem, $f(x)=3^x$ and $g(x)=2^x$, and this limit is $\infty$, which is what we mean when we say "$3^x$ grows faster than $2^x$."
Then you ask: why does taking the logarithm not yield the same order? (In other words, you're asking: if $f(x)$ grows faster than $g(x)$, why doesn't $\log f(x)$ grow faster than $\log g(x)$? Or, in yet other words: why don't logs preserve the asymptotic ordering?) The answer is that you must now consider $$\lim_{x\to\infty}\frac{\log f(x)}{\log g(x)}$$
And there is no nice way to relate the first limit to this limit, in general.
But in your special case, when $f(x)$ and $g(x)$ are exponentials, taking the log gives linear functions, which have the same growth rate: $$\lim_{x\to\infty}\frac{\log 3^x}{\log 2^x}=\lim_{x\to\infty}\frac{x\log 3}{x\log 2}=\frac{\log 3}{\log 2}$$
(Since the limit is finite and non-zero, the new functions have the same growth rate.)
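You can see this numerically (a quick Python check; CPython's `math.log` accepts integers too large to fit in a float):

```python
import math

# The ratio 3**x / 2**x blows up as x grows, but log(3**x) = x*log 3
# and log(2**x) = x*log 2, so the ratio of the logs is constant.
for x in (10, 100, 1000):
    print(x, math.log(3**x) / math.log(2**x))  # each ratio ≈ 1.58496
```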
EDIT
No, pure exponentials are not the only family of functions that has this property. Consider $f(x)=x\cdot 3^x$ and $g(x)=x\cdot 2^x$. These functions have different growth rates, because $$\lim_{x\to\infty}\frac{x\cdot 3^x}{x\cdot 2^x}=\infty$$
But their logs, $\log x+x\log 3$ and $\log x+x\log 2$, have the same growth rates, because
$$\lim_{x\to\infty}\frac{\log x+x\log 3}{\log x+x\log 2}=\lim_{x\to\infty}\frac{\frac{1}{x}+\log 3}{\frac{1}{x}+\log 2}=\frac{\log 3}{\log 2}$$
In fact, as this example suggests, any pair of functions, each a product of factors that grow at most exponentially and each containing at least one genuinely exponential factor, will work: their logarithms will have the same growth rate. Take $f(x)=(\log_3 x)(x^3-4x^2+1)3^x$ and $g(x)=(\log x)(x^2)2^x$, for example. These functions have different growth rates, again, but their logarithms have the same growth rate. The logarithm turns the multiplicative structure into an additive structure dominated by a linear term.
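The $x\cdot 3^x$ versus $x\cdot 2^x$ example can be checked numerically (a small sketch, working directly with the logs to avoid overflow):

```python
import math

def log_f(x):
    # log of f(x) = x * 3**x, i.e. log x + x*log 3
    return math.log(x) + x * math.log(3)

def log_g(x):
    # log of g(x) = x * 2**x, i.e. log x + x*log 2
    return math.log(x) + x * math.log(2)

target = math.log(3) / math.log(2)
for x in (10, 1000, 100000):
    # The ratio of the logs tends to log 3 / log 2 ≈ 1.58496
    print(x, log_f(x) / log_g(x), target)
```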
Since it doesn't take long for $2^{n/8}$ to get larger than $n$, it would be reasonable to experiment to find roughly where this happens, using multiples of $8$ to make the arithmetic easy. E.g., $40\gt 2^{40/8}=32$, but $48\lt 2^{48/8}=64$. The smallest solution (not counting $n=1$) is thus somewhere between $41$ and $48$. It is geometrically clear from the shapes of the curves $y=x$ and $y=2^{x/8}$ that all greater $n$ will work, but to prove it you could use mathematical induction.
Claim: If $n\geq 44$, then $n\lt 2^{n/8}$.
Proof: The base case is $n=44$: indeed $44\lt 2^{44/8}=2^{5.5}\approx 45.25$. Suppose for some $n\geq 44$ that $n\lt 2^{n/8}$. Then $2^{(n+1)/8}=2^{1/8}2^{n/8}\gt 2^{1/8}n$. Since $n\geq 44$, $\frac{n+1}{n}=1+\frac{1}{n}\leq 1+\frac{1}{44}\lt 2^{1/8}$. Thus $2^{1/8}n\gt n+1$, and combining this with the previous inequality gives $n+1\lt 2^{(n+1)/8}$, completing the inductive step.
To see that $44$ really is the smallest solution greater than $1$, note also that $43\gt 2^{43/8}\approx 41.45$, so $n=43$ fails the inequality.
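A brute-force search (a quick Python sketch) agrees with the claim:

```python
# Brute-force check of where n < 2**(n/8) starts to hold for good.
holds = [n for n in range(1, 200) if n < 2 ** (n / 8)]
fails = [n for n in range(1, 200) if n >= 2 ** (n / 8)]
print(min(n for n in holds if n > 1))  # 44: the first n > 1 that works
print(max(fails))                      # 43: the last n that fails
```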