Your optimal strategy is to offer $0$, i.e. not to buy the car. Intuitively, you can see this because if X accepts your offer of $p$, the car's value $v$ is equally likely to be anywhere below $p$, so there's a 50% chance it is worth less than half of $p$, but Y will only pay you $1.5v$. You would need Y to pay you at least $2v$ to break even on average.
You can make this more formal by defining $V$ to be the value of the car, uniformly distributed between 0 and 1 (working in units of \$1000 is easier). Let $p$ be the price you offer.
Then (1) the probability of the price being more than the value is
$$P(p \ge V) = p$$
by the definition of the uniform distribution.
The profit $R$ in this case is the money Y pays you, $1.5v$, less the price $p$ you paid. If we let $Q$ be a random variable for the value of the car given that we know only that the value is $\le p$, then $Q$ is uniformly distributed on $(0,p)$ and its expectation is therefore $p/2$. Hence the expected profit in this case is $1.5p/2 - p$.
In case (2), the probability of the price being less than the value is
$$P(p < V) = 1-p$$
and the profit $R$ in this case has an expectation of 0 because we're neither buying nor selling the car.
Hence, by the law of total expectation, the expected profit is the sum over the cases of the conditional expected profit times the probability of that case:
$$ \begin{align} E(R) &= E(R|\mathrm{bought})P(\mathrm{bought}) + E(R|\neg\mathrm{bought})P(\neg\mathrm{bought}) \\
&= (1.5p/2 - p) \times p \;+\; 0 \times (1-p) \\
&= -0.25p^2
\end{align}$$
This is negative so you wouldn't want to make an offer! With a price $d$ in your original units of currency your expected loss would be $0.25(d/1000)^2\times1000 = d^2/4000$.
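The derivation above can be sanity-checked by simulation. Here is a minimal Monte Carlo sketch (the offer $p=0.8$, the sample size, and the function name are illustrative choices, not from the original problem):

```python
import random

# Monte Carlo check of the expected profit E(R) = -0.25 p^2,
# working in units of $1000 as in the derivation above.
def expected_profit(p, trials=500_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = rng.random()           # car's value V ~ Uniform(0, 1)
        if v <= p:                 # X accepts your offer of p
            total += 1.5 * v - p   # Y pays 1.5v; you paid p
        # otherwise no trade: zero profit
    return total / trials

print(expected_profit(0.8))   # theory predicts -0.25 * 0.8**2 = -0.16
```

Any offer $p > 0$ should come out negative, matching $-0.25p^2$.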
Re your update, the probabilities look correct. However, it is the amount of money you expect to make that matters, not the probability of making it. For example, given these two games, which one would you play?
1. You have a 90% chance of winning \$1, and a 10% chance of losing \$1 million.
2. You have a 10% chance of winning \$1 million, and a 90% chance of losing \$1.
I hope you can see from this that the probability of ending up with more money isn't the most important thing: it's how much more money you end up with on average (the expectation) that's important.
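The two games above make this concrete; a quick probability-weighted-payoff computation (the variable names are mine):

```python
# Expected value of each game: sum of probability * payoff.
# Game 1: 90% chance of winning $1, 10% chance of losing $1 million.
# Game 2: 10% chance of winning $1 million, 90% chance of losing $1.
game_1 = 0.9 * 1 + 0.1 * (-1_000_000)
game_2 = 0.1 * 1_000_000 + 0.9 * (-1)

print(game_1)   # about -99999.1: a heavy loss on average
print(game_2)   # about +99999.1: a heavy gain on average
```

Game 1 wins far more often, yet game 2 is the one worth playing.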
In your case, the probability associated with cases 1 and 3 together looks relatively high because you've included case 1, where no money changes hands. If you assumed that the payoffs for winning and losing were similar, and wanted to look at probabilities alone, you would compare case 2 (buy the car and get less) with case 3 (buy the car and get more), and notice that $$P(\text{case 2, lose}) = m/1500 > P(\text{case 3, win}) = m/3000$$
We have two bidders who play a second-price auction (or Vickrey auction if you prefer). Let the reserve price be $r$. Each bidder knows his own valuation but sees the rival's valuation as uncertain, distributed uniformly on the unit interval.
Conjecture: bidders with valuations below $r$ bid zero and bidders with valuations above $r$ bid their valuation. The expected utility to bidder 1 of bidding below $r$ is zero, and the utility of bidding $b>r$ when his valuation is $v_1$ and bidder 2 follows this strategy is $$U_1(b|v_1)=\int_{0}^{r} (v_1-r)\, dy + \int_{r}^{b} (v_1-y)\, dy\,.$$ Differentiating with respect to $b$ gives $v_1-b$, so the best response of player 1 is to choose $b=v_1$ if and only if $v_1>r$. Symmetric reasoning establishes the same for player 2; hence the strategy described in the conjecture is a symmetric Bayesian-Nash equilibrium (there are others, but they are in weakly dominated strategies).
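One can check numerically that $U_1(b|v_1)$ peaks at $b=v_1$. A small grid-search sketch, using the closed form of the two integrals (the particular values $r=0.3$, $v_1=0.7$ and the grid resolution are illustrative assumptions):

```python
# U1(b | v1) = (v1 - r)*r + v1*(b - r) - (b**2 - r**2)/2,
# the closed form of the two integrals in the answer above.
r, v1 = 0.3, 0.7   # illustrative reserve and valuation, v1 > r

def U1(b):
    return (v1 - r) * r + v1 * (b - r) - (b**2 - r**2) / 2

bids = [r + (1 - r) * k / 10_000 for k in range(10_001)]   # grid on [r, 1]
best = max(bids, key=U1)
print(best)   # close to v1 = 0.7, as the first-order condition predicts
```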
The revenue is then: \begin{align*}R&=\underbrace{\int_0^r\int_0^r 0\, dx \,dy}_{\text{no one bids}}\quad+\underbrace{2\int_0^r\int_r^1 r\,dx\,dy}_{\text{one of the bidders stays out, winner pays }r}+\underbrace{\int_r^1\int_r^1 \min(x,y)\,dx\,dy}_{\text{both active}}\\&=0+2r(r-0)(1-r)+\int_r^1\int_r^y x \,dx\,dy+\int_r^1\int_y^1y\,dx\,dy=\\&=2r^2(1-r)+\int_r^1\left.\dfrac{x^2}{2}\right|_{x=r}^y\,dy+\int_r^1 y(1-y)dy=\\&=2r^2(1-r)+\int_r^1\dfrac{y^2-r^2}{2}\,dy+\int_r^1 y(1-y)dy=\\&=2r^2(1-r)-\frac{(1-r)r^2}{2}+\int_r^1 \left(y-\frac{y^2}{2}\right)\,dy\\&= \frac 32 r^2(1-r)+\left.\left(\frac {y^2}2-\frac {y^3}6\right)\right|_{y=r}^1=\frac 32 r^2(1-r)+\left(\frac 12-\frac 16\right)-\left(\frac {r^2}2-\frac {r^3}6\right)=\\ &=\frac 13 +r^2-\frac 43 r^3\end{align*}
$$ r=\frac 12\Longrightarrow R=\frac 5{12}\quad\blacksquare$$
Remark: $R$ is increasing for small $r>0$, since $R'(r)=2r-4r^2>0$ on $(0,\tfrac12)$, so a positive reserve price raises revenue (as in Myerson (1981)'s optimal auction, the optimal reserve price is positive).
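The revenue formula can be verified by simulating the equilibrium directly. A Monte Carlo sketch (sample size and seed are arbitrary choices):

```python
import random

# Monte Carlo check of R(r) = 1/3 + r^2 - (4/3) r^3 under the equilibrium
# above: bid zero below the reserve, bid truthfully above it; the winner
# pays max(reserve, rival's bid).
def revenue(r, trials=500_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()   # valuations ~ Uniform(0, 1)
        lo, hi = min(x, y), max(x, y)
        if hi < r:
            continue                        # no one bids: zero revenue
        total += lo if lo >= r else r       # both active: second price;
                                            # one active: winner pays r
    return total / trials

print(revenue(0.5))   # theory predicts 5/12, about 0.4167
```

At $r=0$ the same simulation should give $\tfrac13$, the classic two-bidder second-price revenue.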
If I offer the item at price $x$, then the fraction of the population that will accept the offer is
$$\frac{500-x}{500}$$
so I want to maximize the expected value of such an offer:
$$\max_x \ \ (\frac{500-x}{500})x + \frac{x}{500}\cdot200$$
because $\frac{500-x}{500}$ of the time you sell the item at price $x$, and $\frac{x}{500}$ of the time you keep the item, which is worth $200$ to you.
The maximum is achieved at $x=350$, so your second interpretation is the correct one!
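A quick grid search over whole-dollar prices confirms the maximizer (the function name is mine):

```python
# Maximize (500 - x)/500 * x + x/500 * 200 over prices x in [0, 500].
def expected_value(x):
    return (500 - x) / 500 * x + x / 500 * 200

best = max(range(501), key=expected_value)
print(best, expected_value(best))   # the maximizer is x = 350
```

Equivalently, the objective is $(700x - x^2)/500$, a downward parabola with vertex at $x = 350$.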