This is not true.
(First, a notational gripe: to everyone else in the world, "$X_n \in L^1(\mu)$" means "for each $n$, $\int |X_n|\,d\mu < \infty$"; it says nothing about the supremum of the integrals. The right way to express $\sup_n \int|X_n|\,d\mu < \infty$ is to say that the $X_n$ are "bounded in $L^1$". And it's really weird to use $n$ as a parameter ranging over an arbitrary (compact) subset of $\mathbb{R}^k$, not necessarily consisting of integers; most people would denote such a set by a letter like $K$ instead of $\mathcal{N}$, which looks like Baire space.)
Let's first see why we should not expect it to be true. The assumption that $Y$ is continuous in its first argument says that if $t_n \to t$, then $Y(t_n, \cdot) \to Y(t,\cdot)$ pointwise. If we wanted to take advantage of something like "continuous image of compact set is compact", we would want instead something like $Y(t_n, \cdot) \to Y(t, \cdot)$ weakly in $L^1$, which is a very different assumption. In particular, thanks to Dunford-Pettis, we would want $\{Y(t_n, \cdot)\}$ to be uniformly integrable.
So for a counterexample, let's try to think of a family of functions, each locally bounded, that converges pointwise but is not uniformly integrable. If they were uniformly integrable then the Vitali convergence theorem would say they would also converge in $L^1$, so we want that to fail. There are two main ways that a family of functions can converge pointwise but not in $L^1$: they can squeeze mass into sets of tiny measure, or they can push mass off to infinity. Let's go with the first way, since then we can make an example that lives on a probability space (the second way can only happen in a space of infinite measure).
I am going to take $(\mathbb{X}, \mu)$ to be the unit interval $[0,1]$ with Lebesgue measure. Also, I will take $m=1$ and $K \subset \mathbb{R}^m$ to be the unit interval $[0,1]$ as well, so that I am really just looking for a one-parameter family of measurable functions on $[0,1]$.
Let $f : [0, \infty) \to [0,\infty)$ have the following properties: $f$ is continuous; $f(0) = 0$; $f$ vanishes outside $[0,1]$; and $\int_0^\infty f(x)\,dx = 1$. (For instance, you can just put an appropriate bump inside $(0,1)$.)
Set
$$Y(t,x) = \begin{cases} \frac{1}{t} f(\frac{x}{t}), & t > 0 \\
0, & t=0.
\end{cases}$$
It would be helpful to draw a sketch of $Y(t,x)$ as a function of $x$ for several values of $t$. The idea is that $Y(t, \cdot)$ looks like a tall bump of total area 1 supported on the interval $[0, t]$, and as $t \to 0$ the bump is squeezed to a point and winks out altogether.
Clearly, for each $t$, we have that $Y(t, \cdot)$ is a bounded function on $[0,1]$. A quick computation shows $\int_{\mathbb{X}} Y(t,x)\,\mu(dx) = \int_0^1 Y(t,x)\,dx = 1$ for every $t > 0$, and of course $\int_0^1 Y(0,x)\,dx = 0$. In particular $\sup_{t \in K} \int_{\mathbb{X}} |Y(t,x)|\,\mu(dx) = 1 < \infty$, so $\{Y(t, \cdot) : t \in K\}$ is bounded in $L^1$.
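To make this concrete, here is a quick numerical sanity check (a sketch only; the particular bump $f(x) = 6x(1-x)$ on $[0,1]$ is just one admissible choice of $f$, not forced by anything above):

```python
# Sanity check of the counterexample, with the concrete (made-up) bump
# f(x) = 6x(1-x) on [0,1]: continuous, f(0) = 0, integral 1.
def f(u):
    return 6.0 * u * (1.0 - u) if 0.0 <= u <= 1.0 else 0.0

def Y(t, x):
    # Y(t,x) = (1/t) f(x/t) for t > 0, and Y(0,x) = 0
    return f(x / t) / t if t > 0 else 0.0

def integral_Y(t, n=200_000):
    # midpoint rule on [0,1]
    h = 1.0 / n
    return sum(Y(t, (i + 0.5) * h) for i in range(n)) * h

for t in (1.0, 0.1, 0.01):
    print(t, integral_Y(t))   # each total integral stays (approximately) 1

print(Y(0.05, 0.5))           # for fixed x > 0, Y(t,x) = 0 once t < x
```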
Now I claim $Y$ is continuous in $t$; that is, for each $x$, the function $t \mapsto Y(t,x)$ is continuous. Since $f$ is continuous, it is clear that $Y(\cdot, x)$ is continuous at every $t > 0$. Moreover, for every $t < x$ we have $Y(t,x) = 0$, since $x/t > 1$ lies outside the support of $f$ (and when $x = 0$ we have $Y(t,0) = 0$ for every $t$, because $f(0) = 0$). So in fact $\lim_{t \to 0} Y(t,x) = 0 = Y(0,x)$, and $Y(\cdot, x)$ is continuous at $t=0$ as well.
So $Y$ satisfies the desired assumptions. But if $\{Y(t, \cdot) : t \in [0,1] \}$ were relatively weakly compact in $L^1$, it would be uniformly integrable (by Dunford-Pettis). Since $Y(t, \cdot) \to Y(0, \cdot)$ pointwise as $t \to 0$, the Vitali convergence theorem would then imply that $\int_0^1 Y(t,x)\,dx \to \int_0^1 Y(0,x)\,dx$ as $t\to 0$, which we know is false.
You can also see this directly: take $t_n = 1/n$, or any other sequence converging to $0$ none of whose terms equals $0$. Suppose there were a subsequence $t_{n_k}$ (which also converges to $0$) and an $X^* \in L^1([0,1], \mu)$ such that $\int_A Y(t_{n_k}, x)\,dx \to \int_A X^*(x)\,dx$ for every measurable $A \subset [0,1]$.
Taking $A = \{X^* < 0\}$, since $Y(t_{n_k}, \cdot) \ge 0$ everywhere we have $\int_A Y(t_{n_k}, x)\,dx \ge 0$ and thus $\int_A X^*(x)\,dx \ge 0$. We conclude that $X^* \ge 0$ almost everywhere.
Taking $A = [1/n, 1]$ and noting that $Y(t, \cdot) = 0$ on $A$ as soon as $t < 1/n$, we conclude that $\int_A Y(t_{n_k}, x)\,dx \to 0$ and therefore $\int_A X^*(x)\,dx = 0$; that is, $\int_0^1 1_{[1/n, 1]}(x) X^*(x)\,dx = 0$ for every $n$. Since $X^* \ge 0$ almost everywhere, monotone convergence gives $\int_0^1 1_{(0,1]}(x) X^*(x)\,dx = 0$. Combining this with the fact that $\{0\}$ is a null set, we conclude $X^* = 0$ almost everywhere.
But taking $A = [0,1]$ and using our assumption that none of the $t_{n_k}$ are zero, we have $\int_A Y(t_{n_k},x)\,dx = 1$ for all $k$. Thus $\int_A X^*(x)\,dx = 1$. This contradicts our conclusion that $X^* = 0$ almost everywhere.
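This direct argument can also be checked numerically (a sketch, using the made-up bump $f(x)=6x(1-x)$ as one concrete admissible choice of $f$): the mass on $[a,1]$ vanishes as soon as $t < a$, while the total mass stays $1$.

```python
# The candidate weak limit would have to be 0 on every [a,1], yet total mass 1.
def f(u):
    return 6.0 * u * (1.0 - u) if 0.0 <= u <= 1.0 else 0.0

def Y(t, x):
    return f(x / t) / t if t > 0 else 0.0

def integral(t, a, b, n=100_000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return sum(Y(t, a + (i + 0.5) * h) for i in range(n)) * h

a = 0.25
for t in (0.5, 0.2, 0.1):
    # mass on [a,1] dies once t < a, while the mass on [0,1] stays near 1
    print(t, integral(t, a, 1.0), integral(t, 0.0, 1.0))
```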
$$\sup_{n} E[|X_n| 1_{|X_n|>M} ] \leq \sup_{n} E[|X_n|] \leq \sup_{n} E[|X_n|^p],$$
using Jensen.
You didn't apply Jensen's inequality correctly; it should read
$$\sup_{n} E[|X_n| 1_{|X_n|>M} ] \leq \sup_{n} E[|X_n|] \leq \sup_{n} \left( E[|X_n|^p] \right)^{\color{red}{\frac{1}{p}}}.$$
[...] and the claim follows by letting $M \rightarrow \infty$.
No, it's not that simple. Letting $M \to \infty$ you get
$$\lim_{M \to \infty} \sup_n \mathbb{E}(|X_n| 1_{|X_n|>M}) \leq \sup_{n \in \mathbb{N}} \|X_n\|_p,$$
but that's not good enough; you have to show that the limit equals $0$. Hint for this problem: Use Markov's inequality, i.e.
$$\mathbb{E}(|X_n| 1_{|X_n|>M}) \leq \frac{1}{M^{p-1}} \mathbb{E}(|X_n|^p 1_{|X_n|>M}) \leq \frac{1}{M^{p-1}} \mathbb{E}(|X_n|^p).$$
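As a sanity check, this chain of inequalities can be verified on a toy discrete distribution (the values and probabilities below are made up purely for illustration):

```python
# Check E[|X| 1_{|X|>M}] <= E[|X|^p] / M^(p-1) for a small discrete X.
vals  = [0.5, 2.0, 5.0, 20.0]    # values of |X| (made up)
probs = [0.4, 0.3, 0.2, 0.1]     # their probabilities (made up)
p, M = 2.0, 4.0

# left side: truncated first moment above the threshold M
lhs = sum(v * q for v, q in zip(vals, probs) if v > M)
# right side: Markov-type bound via the p-th moment
rhs = sum(v**p * q for v, q in zip(vals, probs)) / M**(p - 1)
print(lhs, rhs)   # lhs <= rhs, and rhs -> 0 as M -> infinity for fixed E[|X|^p]
```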
Define $$M_0:=\max_{n \in \mathbb{N}} |X_n|.$$ Then we have $$E[|X_n| 1_{|X_n|>M_0}]= E[|X_n|\cdot 0 ] = 0,$$
No, this doesn't work, because $M_0$ depends on $\omega$. Unfortunately, this means that your approach fails. Hint for this one: using, e.g., the dominated convergence theorem, check first that the set $\{f\}$ consisting of a single integrable function is uniformly integrable. Then extend the approach to finitely many integrable random variables.
If $E[\sup_n |X_n|] < \infty$, then the sequence is uniformly integrable.
Hint: By assumption, $Y := \sup_n |X_n|$ is integrable and $|X_n| \leq Y$ for all $n \in \mathbb{N}$. Consequently,
$$\mathbb{E}(|X_n| 1_{|X_n|>M}) \leq \mathbb{E}(|Y| 1_{|Y|>M}) \qquad \text{for all $M>0$ and $n \in \mathbb{N}$.}$$
Now use the fact that $\{Y\}$ is uniformly integrable (see question nr. 2).
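The inequality in the hint holds pointwise: on the event $\{|X_n| > M\}$ we also have $Y > M$, and $|X_n| \le Y$ there. A toy discrete check (sample space and values made up for illustration):

```python
# If |X_n| <= Y pointwise, the tail of X_n is controlled by the tail of Y.
# Toy sample space with 5 equally likely outcomes (all numbers made up).
prob = 0.2
Yv = [1.0, 3.0, 7.0, 12.0, 30.0]   # dominating variable Y, outcome by outcome
Xv = [0.5, 3.0, 6.0, 11.0, 25.0]   # one X_n with |X_n| <= Y at every outcome

def tail(vals, M):
    # E[ V 1_{V > M} ] for the discrete variable with these values
    return sum(v * prob for v in vals if v > M)

for M in (5.0, 10.0, 20.0):
    print(M, tail(Xv, M), tail(Yv, M))   # E[|X_n|1_{|X_n|>M}] <= E[Y 1_{Y>M}]
```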
First of all, for the function $\varphi$ given in the hint to be well-defined we need $C_k\to\infty$: if some $x$ satisfied $x > C_k$ for infinitely many $k$, then $\varphi(x)$ would be infinite.
Moreover, observe that $$ \varphi\left(\lvert X_n\rvert\right)=\sum_{k=1}^\infty \left(\lvert X_n\rvert-C_k\right)\mathbf{1}_{\{\lvert X_n\rvert>C_k\}}\leqslant \sum_{k=1}^\infty \lvert X_n\rvert \mathbf{1}_{\{\lvert X_n\rvert>C_k\}} $$ hence it suffices to find $C_k$ such that $C_k\to\infty$ and for each $n$, $$ \mathbb E\left[ \lvert X_n\rvert \mathbf{1}_{\{\lvert X_n\rvert>C_k\}}\right]\leqslant 2^{-k}, $$ since then $\mathbb E\left[\varphi\left(\lvert X_n\rvert\right)\right]\leqslant \sum_{k=1}^\infty 2^{-k} = 1$ for every $n$.