If the prior distribution $\pi$ of $\theta$ and the conditional distributions $p(\cdot\mid\theta)$ of the observations are discrete, the posterior distribution $q(\cdot\mid x)$ of $\theta$ given some observations $x=(x_i)$ is
$$
q(\theta\mid x)\propto p(x\mid\theta)\pi(\theta)=\pi(\theta)\prod_ip(x_i\mid\theta),
$$
that is
$$
q(\theta\mid x)=\frac{\pi(\theta)}{z(x)}\prod_ip(x_i\mid\theta),\quad z(x)=\sum_\alpha\pi(\alpha)\prod_ip(x_i\mid\alpha).
$$
In your case, the sum over $\alpha$ has three terms for $\alpha$ in $\{4,5,6\}$ and the product over $i$ has three terms for $x_i$ in $\{0,4,3\}$.
Can you carry on from here? (Note that the loss function is irrelevant.)
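As a numerical illustration of the normalization $z(x)$ above, here is a short sketch with Poisson likelihoods, $\theta \in \{4,5,6\}$, and $x = (0,4,3)$. The prior is assumed uniform here purely for illustration, since the question's actual prior is not shown; substitute your own values.

```python
import math

# Sketch of the posterior computation above: Poisson likelihoods,
# theta in {4, 5, 6}, observations x = (0, 4, 3).
# The uniform prior is an assumption for illustration only.
thetas = [4, 5, 6]
prior = {t: 1 / 3 for t in thetas}
x = [0, 4, 3]

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

# unnormalized posterior: pi(theta) * prod_i p(x_i | theta)
unnorm = {t: prior[t] * math.prod(poisson_pmf(xi, t) for xi in x) for t in thetas}
z = sum(unnorm.values())                      # z(x), the normalizing constant
posterior = {t: u / z for t, u in unnorm.items()}
print(posterior)
```

The three unnormalized terms correspond exactly to the three-term sum over $\alpha$ described above.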
As already mentioned in the comments, you appear to have some fundamental misconceptions about Bayesian inference, and this is what I intend to address in my response, since you've already received enough computational guidance on your question.
If you were given a single observation $X_1 \mid \lambda \sim \operatorname{Poisson}(\lambda)$, and you wanted to use a Bayesian approach to make an inference about the posterior distribution of $\lambda$ based on this observation and your prior beliefs, then this is intuitively and obviously different from the situation in which you have many observations in your sample. Except in certain (unusual) circumstances, the more data you are able to observe from some parametric model, the "better" your understanding of what parameter(s) generated that data.
After all, this principle is something you should be familiar with from frequentist statistics: if you suppose that your observations are generated from a normal distribution with unknown mean $\mu$ and known standard deviation $\sigma = 1$, the sample mean is an intuitive and reasonable estimator for $\mu$; moreover, the larger your sample, the closer this estimator tends to be to the true value of $\mu$. The same principle applies in Bayesian inference.
With this in mind, it should become clear that the likelihood function for $\lambda$ given the sample should reflect the size of the sample in some meaningful way. Your version of the likelihood fails to do that. Now, you're not obligated to consider the entire sample (effectively, by using only a single observation, you're saying "I'm going to ignore the rest of the information available to me when calculating the posterior distribution"), but the posterior you calculate will not be as informative as one that uses all of the data. Another statistician, seeing your calculation, would not hesitate to improve upon it by using the other observations to produce a superior inference on $\lambda$.
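To make the point concrete, here is a small sketch using the standard Gamma-Poisson conjugate pair (the hyperparameters and data are hypothetical, not taken from the original question): the posterior based on the full sample is visibly more concentrated than the one based on a single observation.

```python
from math import sqrt

# With a Gamma(a, b) prior (rate parameterization) on lambda and
# Poisson data x_1..x_n, the posterior is Gamma(a + sum(x), b + n).
def posterior_params(a, b, xs):
    return a + sum(xs), b + len(xs)

a, b = 2.0, 1.0                        # hypothetical prior hyperparameters
data = [3, 2, 4, 1, 3, 5, 2, 3, 4, 3]  # hypothetical Poisson observations

a1, b1 = posterior_params(a, b, data[:1])  # using only one observation
an, bn = posterior_params(a, b, data)      # using the whole sample

def sd(a_, b_):                        # std dev of Gamma(a, rate b)
    return sqrt(a_) / b_

print(sd(a1, b1), sd(an, bn))          # the full-sample posterior is tighter
```

The shrinking posterior standard deviation is exactly the "more data, better understanding" principle stated above.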
Clearly, you have some mathematical competence when working with the calculation itself, but there's a deeper conceptual gap that I would invite you to try to close. Applying formulas is easy. Understanding what the formulas mean, and relating them to the underlying concepts, is the essence of statistical thinking.
Best Answer
To solve your problem, consider that the posterior is
$$\pi(\theta|\mathbf{x})\propto \pi(\theta)\cdot p(\mathbf{x}|\theta)$$
So first derive the likelihood $p(\mathbf{x}|\theta)$, then multiply your (given) prior by the likelihood; you will find (the kernel of) another Gamma density with different (posterior) parameters.
Hint: when doing your calculations, discard any quantity not depending on $\theta$; from a Bayesian point of view these are all constants and are absorbed into the normalizing constant.
(1) Likelihood
Simply multiply the $n$ densities, obtaining
$$ p(\mathbf{x}|\theta)\propto \theta^n e^{-\theta \Sigma_i|x_i|}$$
(note that I dropped the factor of $2$ in the denominator, since it does not depend on the parameter)
(2) Prior
$$\pi(\theta)\propto \theta^{\alpha-1} e^{-\beta\theta}$$
(same observation as in (1))
(3) Posterior
$$\pi(\theta|\mathbf{x})\propto \theta^{(\alpha+n)-1}\cdot e^{-\theta(\beta+\Sigma_i|x_i|)} $$
we immediately recognize that the posterior is still a Gamma distribution:
$$\theta\mid\mathbf{x}\sim \text{Gamma}\left(\alpha+n,\;\beta+\Sigma_i|x_i|\right)$$
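As a quick numerical sanity check of this conjugacy (with made-up data and hyperparameters, not the question's), the unnormalized posterior $\pi(\theta)\,p(\mathbf{x}|\theta)$ divided by the $\text{Gamma}(\alpha+n,\beta+\Sigma_i|x_i|)$ density should be constant in $\theta$:

```python
import math

# Hypothetical numbers: Gamma(alpha, beta) prior, Laplace-type
# likelihood proportional to theta^n * exp(-theta * sum|x_i|).
alpha, beta = 2.0, 1.0
x = [-0.5, 1.2, -2.0, 0.3]
n, s = len(x), sum(abs(v) for v in x)

def unnorm_post(t):
    prior = t ** (alpha - 1) * math.exp(-beta * t)
    lik = t ** n * math.exp(-t * s)
    return prior * lik

def gamma_pdf(t, a, b):  # rate parameterization
    return b ** a * t ** (a - 1) * math.exp(-b * t) / math.gamma(a)

# the ratio should not depend on theta if the posterior is
# Gamma(alpha + n, beta + s), as derived above
r = [unnorm_post(t) / gamma_pdf(t, alpha + n, beta + s) for t in (0.5, 1.0, 2.0)]
print(r)
```

Any constant ratio confirms that the kernel matches the claimed Gamma posterior; only the normalizing constant differs.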