You want a lower bound of the probability of $[X\gt x]$ hence an upper bound of the probability of the event $A=[X\leqslant x]$. As explained by others there is little hope to achieve such a bound depending on $\mathrm E(X)$ only, which would be valid for every nonnegative random variable $X$.
However, for every positive, bounded, strictly decreasing function $u$, $A=[u(X)\geqslant u(x)]$, hence Markov's inequality yields
$$
\mathrm P(A)\leqslant u(x)^{-1}\mathrm E(u(X)).
$$
Two frequently used cases are $u(x)=\mathrm e^{-tx}$ and $u(x)=\frac1{1+tx}$ for some positive $t$, related to the Laplace and Stieltjes transforms, respectively. In both cases, one can choose the value of the parameter $t$ which yields an optimal, or nearly optimal, upper bound.
This yields
$$
\mathrm P(X\gt x)\geqslant 1-u(x)^{-1}\mathrm E(u(X)).
$$
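For a concrete illustration of the Laplace-transform case (taking, purely as an example, $X\sim\operatorname{Exp}(1)$, for which $\mathrm E(\mathrm e^{-tX})=1/(1+t)$), one can pick a near-optimal $t$ by a crude grid search:

```python
import math

# Illustrative sketch: X ~ Exp(1) (an assumed example), so E[exp(-t X)] = 1/(1+t).
# Lower bound: P(X > x) >= 1 - exp(t*x) * E[exp(-t X)], to be optimized over t > 0.
x = 0.5

def bound(t):
    laplace = 1.0 / (1.0 + t)          # E[exp(-t X)] for Exp(1)
    return 1.0 - math.exp(t * x) * laplace

# crude grid search for a (nearly) optimal t; the analytic optimum here is t = 1/x - 1
best_t = max((0.01 * k for k in range(1, 1000)), key=bound)
true_prob = math.exp(-x)               # P(X > x) = exp(-x) for Exp(1)

print(best_t, bound(best_t), true_prob)
assert 0 < bound(best_t) <= true_prob  # a valid, non-trivial lower bound
```

Here the optimized bound is roughly $0.176$ against a true probability $\mathrm e^{-1/2}\approx 0.607$: valid, though not tight.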
A simple consequence is that, for every positive $s$ (and for $s=0$ as well, provided $1/X$ is integrable),
$$
\mathrm P(X\gt x)\geqslant \mathrm E\left(\frac{X-x}{s+X}\right).
$$
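A quick Monte Carlo sanity check of this last inequality, with $X\sim\operatorname{Exp}(1)$ and the values of $x$ and $s$ chosen purely for illustration:

```python
import random

# Monte Carlo sketch of P(X > x) >= E[(X - x)/(s + X)] for a nonnegative X.
# The distribution (Exp(1)) and the parameters below are illustrative assumptions.
random.seed(0)
x, s, n = 0.2, 0.1, 200_000
samples = [random.expovariate(1.0) for _ in range(n)]

prob = sum(v > x for v in samples) / n                 # empirical P(X > x)
rhs = sum((v - x) / (s + v) for v in samples) / n      # empirical E[(X-x)/(s+X)]
print(prob, rhs)
assert prob >= rhs > 0                                 # non-trivial lower bound
```

With these parameters the bound is genuinely informative (roughly $0.40$ against a true probability $\mathrm e^{-0.2}\approx 0.82$); for larger $x$ or $s$ the right-hand side can turn negative and the bound becomes vacuous.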
If you do not have any further assumption on the values $X$ can take (e.g., is it bounded below a.s.?), then you cannot get any meaningful lower bound. For any $\varepsilon\in(0,1)$ (taking, without loss of generality, $\mu=0$), consider the random variable defined by
$$
X = \begin{cases}
-x\frac{1-\varepsilon}{\varepsilon} & \text{ w.p. } \varepsilon \\
x & \text{ w.p. } 1-\varepsilon
\end{cases}
$$
where $x = \sigma\sqrt{\frac{\varepsilon}{1-\varepsilon}}$.
You have $\mathbb{E} X = 0 = \mu$, and $\operatorname{Var} X = \sigma^2$; yet $\mathbb{P}\{ X < \mu\} = \varepsilon$ can be arbitrarily small.
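One can verify the moments of this two-point construction numerically; $\sigma$ and $\varepsilon$ below are arbitrary illustrative values:

```python
import math

# Numeric check of the two-point counterexample above (mu = 0).
# sigma and eps are illustrative choices.
sigma, eps = 2.0, 0.01
x = sigma * math.sqrt(eps / (1 - eps))
values = [(-x * (1 - eps) / eps, eps), (x, 1 - eps)]   # (value, probability) pairs

mean = sum(v * p for v, p in values)
var = sum(v * v * p for v, p in values) - mean ** 2
p_below = sum(p for v, p in values if v < 0)           # P(X < mu), here eps

print(mean, var, p_below)
assert abs(mean) < 1e-9 and abs(var - sigma ** 2) < 1e-9
```

Shrinking `eps` makes $\mathbb{P}\{X<\mu\}$ as small as desired while the mean and variance stay fixed.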
Now assume $X \geq 0$ a.s. (as suggested in a comment below). Even then, one cannot get a non-trivial bound. Namely:
Fix any $\mu> 0$ and $\sigma^2\geq 0$. For any $a\in[0,\mu)$, there exists a random variable $X\in L^2$ with $X\geq 0$ a.s., $\mathbb{E} X = \mu$, and $\operatorname{Var} X = \sigma^2$, satisfying $$\mathbb{P}\{ X \leq a\} = 0.$$
Note that up to renormalization by $\mu$ (of the standard deviation and of $a$), we can assume without loss of generality that $\mu = 1$. For fixed $\sigma$ and $a$ as above, define $\alpha \stackrel{\rm def}{=}\frac{a+1}{2}\in(a,1)$, and let $\beta$ be the solution of the equation $$
\sigma^2 + 1 = \alpha + \beta - \alpha\beta
$$
i.e. $\beta = 1+\frac{\sigma^2}{1-\alpha} > 1$.
Let $X$ be the random variable taking values in $\{\alpha,\beta\}$, defined as
$$
X = \begin{cases}
\alpha &\text{ w.p. } \frac{\beta-1}{\beta-\alpha} \\
\beta &\text{ w.p. } \frac{1-\alpha}{\beta-\alpha} \\
\end{cases}
$$
so that indeed
$$
\begin{align}
\mathbb{E} X &= 1 \\
\operatorname{Var} X &= \mathbb{E}[X^2]- (\mathbb{E} X)^2 \\
&= \frac{1}{\beta-\alpha}\left(\alpha^2(\beta-1) + \beta^2(1-\alpha)\right)- 1 = \alpha+\beta-\alpha\beta -1 \\
&= \sigma^2.
\end{align}
$$
$X$ satisfies all the assumptions, and yet
$$
\mathbb{P}\{X \leq a\} = \mathbb{P}\{X < \alpha\} = 0.
$$
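Again the construction is easy to verify numerically; the values of $\sigma$ and $a$ below are arbitrary illustrative choices (with $\mu$ normalized to $1$):

```python
# Numeric check of the nonnegative counterexample above (mu = 1).
# sigma and a are illustrative choices with 0 <= a < 1.
sigma, a = 1.5, 0.4
alpha = (a + 1) / 2                      # alpha in (a, 1)
beta = 1 + sigma ** 2 / (1 - alpha)      # solves sigma^2 + 1 = alpha + beta - alpha*beta
p_alpha = (beta - 1) / (beta - alpha)    # P(X = alpha)
p_beta = (1 - alpha) / (beta - alpha)    # P(X = beta)

mean = alpha * p_alpha + beta * p_beta
var = alpha ** 2 * p_alpha + beta ** 2 * p_beta - mean ** 2
print(mean, var)
assert abs(mean - 1) < 1e-9 and abs(var - sigma ** 2) < 1e-9
assert alpha > a                         # hence P(X <= a) = 0
```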
At this point, it looks to me that one would need the assumption that $X$ be bounded in order to get an interesting lower bound.
Best Answer
Consider a distribution with $$P(X=m\delta)=1-\delta,\qquad P\left(X=m\tfrac{1-\delta+\delta^2}{\delta}\right)=\delta;$$ then $E[X]=m$, but $P(X\gt a)$ can be made arbitrarily small by choosing $\delta \lt \frac{a}{m}$ and, if necessary, smaller still.
So to get something useful, you would need some constraint on $X$ such as a given maximum or a given variance.
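For what it's worth, the moments of this two-point distribution check out numerically; $m$, $a$, and $\delta$ below are illustrative values:

```python
# Numeric check of the distribution above; m, a, delta are illustrative,
# with delta chosen below a/m so that the lower atom sits below a.
m, a = 3.0, 1.0
delta = 0.1                                # delta < a/m = 1/3
lo = m * delta                             # value taken w.p. 1 - delta
hi = m * (1 - delta + delta ** 2) / delta  # value taken w.p. delta

mean = lo * (1 - delta) + hi * delta
p_above = (lo > a) * (1 - delta) + (hi > a) * delta   # P(X > a)

print(mean, p_above)
assert abs(mean - m) < 1e-9
```

Here $P(X>a)=\delta$, which can be driven to $0$ while the mean stays at $m$.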