The CDF is
$$F(x) =
\begin{cases}0, &\textrm{if } x < 0 \\
\frac13 x, & \textrm{if } 0\leq x<1 \\
\frac13, & \textrm{if } 1\leq x <2 \\
\frac13 x - \frac13, & \textrm{if } 2\leq x< 4 \\
1, & \textrm{if } x \geq 4
\end{cases}$$
So you can see that your median $x_{0.50} = F^{-1}(\frac12)$ is found by solving
$$\tfrac13 x_{0.50} - \tfrac13 = \tfrac12$$
$$\tfrac13 x_{0.50} = \tfrac56$$
$$x_{0.50}=\boxed{\tfrac52}$$
and the $25^{\textrm{th}}$ percentile $x_{0.25}=F^{-1}(\frac14)$ is found by solving
$$\tfrac13 x_{0.25} = \tfrac14$$
$$x_{0.25}=\boxed{\tfrac34}$$
How did we know which piece of the definition of $F$ to use in each case? By observing that the first piece produces values of $F$ below $\frac13$ (and that is where $0.25$ lies), while the third piece produces values of $F$ of at least $\frac13$ (and that is where $0.5$ lies).
In fact, this shows how to compute nearly all percentiles. For $0<p<1$ (except for $p=\tfrac13$), you have
$$x_p = F^{-1}(p) =
\begin{cases}3p, & \textrm{if } 0\leq p<\tfrac13 \\
3p+1, & \textrm{if } \tfrac13 <p\leq 1
\end{cases}$$
For the case $p=\tfrac13$, any number in $[1,2]$ might be considered a valid value for $x_{1/3}$, and different authors define it differently.
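As a sanity check, the piecewise CDF and its inverse can be coded directly. This is only an illustrative Python sketch of the formulas above; the convention at $p=\tfrac13$ is an arbitrary choice, as noted.

```python
def cdf(x):
    """The piecewise CDF F defined above."""
    if x < 0:
        return 0.0
    if x < 1:
        return x / 3
    if x < 2:
        return 1 / 3
    if x < 4:
        return x / 3 - 1 / 3
    return 1.0

def quantile(p):
    """Inverse CDF for 0 < p < 1.  At p = 1/3 we (arbitrarily) return 1;
    any value in [1, 2] is defensible there, depending on the convention."""
    if p < 1 / 3:
        return 3 * p
    if p == 1 / 3:
        return 1.0  # convention-dependent choice
    return 3 * p + 1

print(quantile(0.5))   # 2.5, matching the median computed above
print(quantile(0.25))  # 0.75, matching the 25th percentile
```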
The issue that you raised with this question belongs to the area of robust statistics; in the context of estimating a parameter, it is called robust parameter estimation. There is a good book by Huber, *Robust Statistics*, which I think will help a lot.
The idea is as follows. When you estimate a parameter, the standard procedure first forms the log-likelihood of the data under the assumed density, and then finds the parameter value that maximizes it; hence the name maximum likelihood estimator (MLE). In many practical applications, however, the data under test contain outliers: samples that are inherently wrong and do not follow the assumed density. This can happen, for example, when a patient's EEG is being recorded and the patient moves his or her head involuntarily.
Let $f$ be the density function and suppose there are $n$ data samples, denoted $x_1,\dots,x_n$. The maximum likelihood estimator is found by solving
$$\hat\mu=\arg\max_{\mu}\sum_{i=1}^n \log f(x_i,\mu)$$
The idea is to replace $-\log f$ with some nice function $\rho$ and minimize instead:
$$\hat\mu=\arg\min_{\mu}\sum_{i=1}^n \rho(x_i,\mu)$$
Assume that the parameter of interest is the mean of the distribution; in the robust-estimation context it is called the location parameter. In this case one can write
$$\hat\mu=\arg\min_{\mu}\sum_{i=1}^n \rho(x_i-\mu)$$
Now as an example, take $\rho(x)=x^2$. This corresponds to the maximum likelihood estimator of the location parameter of the Gaussian distribution. If you take the derivative with respect to $\mu$ and set it equal to $0$, you find
$$\hat\mu=\frac{1}{n}\sum_{i=1}^n x_i$$
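A quick numerical check of this fact (an illustrative sketch, not part of the original argument; the data values are arbitrary): the sample mean has the smallest sum of squared deviations among nearby candidates.

```python
data = [1.0, 2.0, 4.0, 7.0]   # arbitrary example data
mean = sum(data) / len(data)  # 3.5

def sq_loss(mu):
    """Sum of squared residuals, i.e. rho(x) = x^2."""
    return sum((x - mu) ** 2 for x in data)

# No nearby candidate value of mu beats the mean:
for d in (-0.5, -0.1, 0.1, 0.5):
    assert sq_loss(mean) <= sq_loss(mean + d)
```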
What if you choose $\rho$ differently? For example, $\rho(x)=|x|$ gives a very robust estimator: it is the maximum likelihood estimator of the location parameter of the Laplace distribution, and the resulting estimate is the sample median. For a Gaussian distribution the mean and the median coincide, so the right choice depends on how much trouble you have in the tails of the distribution. Huber proposed a very nice transition between mean and median via the function
$$\rho(x)=\begin{cases}x^2, & \textrm{if } |x|<c \\ c(2|x|-c), & \textrm{otherwise} \end{cases}$$
With this function, one can trade off efficiency against robustness to outliers: as $c\to 0$, the estimator approaches the median, and as $c\to\infty$, it approaches the sample mean (the Gaussian MLE of location).
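To see this mean-to-median transition concretely, here is an illustrative sketch (not from the original answer): Huber's $\rho$ as written above, minimized by a crude grid search. The data set, grid, and step size are arbitrary choices made for the example.

```python
def huber_rho(x, c):
    """Huber's rho as written above: quadratic near 0, linear in the tails."""
    return x * x if abs(x) < c else c * (2 * abs(x) - c)

def location_estimate(data, c, step=0.01):
    """Brute-force minimizer of the summed Huber loss over a grid
    (illustration only; real code would use a proper optimizer)."""
    best_mu, best_loss = None, float("inf")
    mu = min(data)
    while mu <= max(data):
        loss = sum(huber_rho(x - mu, c) for x in data)
        if loss < best_loss:
            best_mu, best_loss = mu, loss
        mu += step
    return best_mu

data = [1.0, 2.0, 3.0, 100.0]  # one gross outlier
# A small c keeps the estimate near the bulk of the data (between 2 and 3),
# while a huge c makes every residual quadratic and reproduces the mean, 26.5.
robust = location_estimate(data, c=0.1)
classical = location_estimate(data, c=1000.0)
```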
Coming back to your question: if you are completely sure that the observations with large absolute values are clean and follow the Gaussian distribution, then you should use all data points.
If you know that your data may be contaminated, then you need to consider robust estimators. There is a trade-off between robustness and efficiency, which can be adjusted by choosing a suitable value of $c$ as given above.
For a correct definition of the quantiles, you must use the (empirical) $\text{cdf}$ of the distribution, i.e.
$$\text{cdf}_X(x)=\frac{\#\{k:x_k\le x\}}n.$$
When $X=\{1,2,3\}$, the $50\%$ percentile is $2$, because $\dfrac{\#\{1\}}3\le50\%$ while $\dfrac{\#\{1,2\}}3>50\%$: the "jump" occurs at $2$, and this is where the $\text{cdf}$ meets the $50\%$ horizontal.
If you don't adopt such a definition, neither the centiles nor the median would exist!
The median, the second quartile and the $50\%$ centile are exact synonyms.
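The empirical-cdf definition above translates directly into code. This is only an illustrative Python sketch: the percentile is taken as the smallest sample value whose empirical cdf reaches $p$.

```python
def empirical_cdf(sample, x):
    """cdf_X(x) = #{k : x_k <= x} / n, as defined above."""
    return sum(1 for v in sample if v <= x) / len(sample)

def percentile(sample, p):
    """Smallest sample value whose empirical CDF reaches p."""
    for v in sorted(sample):
        if empirical_cdf(sample, v) >= p:
            return v

X = [1, 2, 3]
print(percentile(X, 0.5))  # 2: cdf(1) = 1/3 < 0.5 but cdf(2) = 2/3 >= 0.5
```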