[This answers the original edition of the question, which did not assume $X_1,X_2,X_3,\ldots$ are independent.]
No. Suppose $R\sim\mathrm{Uniform}(0,1)$ and, given $R$, the sequence $Y_1,Y_2,Y_3,\ldots$ is i.i.d. $\mathrm{Bernoulli}(R)$.
Then the strong law of large numbers implies that
$$
\Pr\left( \lim_{n\to\infty} \frac{Y_1+\cdots+Y_n} n= R \mid R\right) = 1.
$$
So
\begin{align}
& \Pr\left( \lim_{n\to\infty} \frac{Y_1+\cdots+Y_n} n= R \right) \\[10pt]
= {} & \operatorname{E} \left( \Pr\left( \lim_{n\to\infty} \frac{Y_1+\cdots+Y_n} n= R \mid R\right) \right) = \operatorname{E}(1) = 1.
\end{align}
Let $X_n= (Y_1+\cdots+Y_n)/n$. For any fixed $r\in(0,1)$, the event that $\lim_{n\to\infty} X_n = r$ is in the tail sigma-algebra of $X_1,X_2,X_3,\ldots$. But $X_1$ is not independent of that event, since $\Pr(X_1=1\mid R=r)=r$.
If you want a tail event whose probability is positive, observe that $\lim\limits_{n\to\infty} X_n \overset{\text{a.s.}}= R \sim\mathrm{Uniform}(0,1)$, so $\Pr(X_1=1) = \operatorname{E}(\Pr(X_1=1\mid \lim\limits_{n\to\infty} X_n)) = \operatorname{E}(\lim\limits_{n\to\infty} X_n) = 1/2$, and find $\Pr(X_1=1\mid \lim\limits_{n\to\infty} X_n>1/2)$.
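As a quick sanity check (an illustrative sketch, not part of the argument), one can simulate this mixture with Python's standard library. Since $\lim_n X_n = R$ almost surely, conditioning on $\lim_n X_n>1/2$ is the same as conditioning on $R>1/2$, and the computation above suggests $\Pr(X_1=1\mid \lim_n X_n>1/2)=\operatorname{E}(R\mid R>1/2)=3/4$:

```python
import random

# Monte Carlo sketch: R ~ Uniform(0,1), then Y_1 | R ~ Bernoulli(R).
# The event {lim X_n > 1/2} coincides almost surely with {R > 1/2},
# so we condition on R > 1/2 directly and average Y_1.
random.seed(0)
trials = 200_000
hits = cond = 0
for _ in range(trials):
    r = random.random()          # R ~ Uniform(0,1)
    y1 = random.random() < r     # Y_1 given R = r is Bernoulli(r)
    if r > 0.5:                  # a.s. the same event as {lim X_n > 1/2}
        cond += 1
        hits += y1
print(hits / cond)               # close to 3/4
```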
1)
If by $\Omega_X$ you mean $\mathbb R$ and by $P_X$ you mean the probability measure prescribed by $B\mapsto P(X\in B)$ then: "yes, we usually go for $\mathcal F_X=\mathcal B(\mathbb R)$".
By definition, $X$ is a random variable if it is a real-valued function that is measurable with respect to the Borel $\sigma$-algebra $\mathcal B(\mathbb R)$.
2)
Looking at $X_1$, the preimage of $B\in\mathcal B(\mathbb R)$ under $X_1$ is the set: $$X_1^{-1}(B)=\{(i,j)\in\{1,2,3,4,5,6\}^2\mid \max(i,j)\in B\}$$
If you take the singleton $B=\{x\}$ then we get: $$X_1^{-1}(B)=\{(i,j)\in\{1,2,3,4,5,6\}^2\mid \max(i,j)=x\}$$
For $X_2$ you will get similar expressions where $\max(i,j)$ is replaced by $i+j$.
You cannot speak of "the preimages of $X_1^{-1}(B)$".
The correct wording is that "$X_1^{-1}(B)$ is the preimage of $B$ under (or with respect to) $X_1$".
The $\sigma$-algebra generated by the events $A_k$ is, formally, the smallest $\sigma$-algebra on $\Omega$ that contains these sets.
Now observe that every preimage $X_1^{-1}(B)$ can be written as a union of these sets. This tells us that $X_1$ is measurable with respect to this $\sigma$-algebra, so $X_1$ qualifies as a random variable when $\Omega$ is equipped with it. In fact, the $\sigma$-algebra generated by the $A_k$ can be shown to be the collection $$X_1^{-1}(\mathcal B(\mathbb R)):=\{X_1^{-1}(B)\mid B\in\mathcal B(\mathbb R)\}=$$$$\{\{(i,j)\in\Omega\mid \max(i,j)\in B\}\mid B\in\mathcal B(\mathbb R)\}\tag1$$
However, it is not possible to write e.g. $X_2^{-1}(\{4\})=\{(i,j)\mid i+j=4\}$ as an element of this $\sigma$-algebra.
That is why $X_2$ cannot be classified as a random variable on this measurable space.
edit:
Working $(1)$ out we find that for every $B\in\mathcal B(\mathbb R)$ we can find a set $I=I_B\subseteq\{1,2,3,4,5,6\}$ such that: $$X_1^{-1}(B)=\bigcup_{i\in I}A_i$$
So actually: $$X_1^{-1}(\mathcal B(\mathbb R))=\left\{\bigcup_{i\in I}A_i\mid I\subseteq\{1,2,3,4,5,6\}\right\}$$
For the preimage $X_2^{-1}(\{4\})=\{(1,3),(2,2),(3,1)\}$, for example, we cannot find such a set $I$.
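The dice example is small enough to check by brute force. The sketch below (stdlib Python) enumerates $\Omega=\{1,\ldots,6\}^2$ and the atoms $A_k=\{\max(i,j)=k\}$, confirms that a preimage of $X_1$ is a union of atoms, and confirms that no union of atoms equals $X_2^{-1}(\{4\})$:

```python
from itertools import product

# Omega = {1,...,6}^2 and the atoms A_k = {(i,j) : max(i,j) = k}.
omega = list(product(range(1, 7), repeat=2))
A = {k: {(i, j) for (i, j) in omega if max(i, j) == k} for k in range(1, 7)}

# X_1 = max is constant on each A_k, so every preimage of X_1 is a
# union of atoms; e.g. X_1^{-1}({3,5}) = A_3 ∪ A_5:
preimage_X1 = {w for w in omega if max(w) in {3, 5}}
assert preimage_X1 == A[3] | A[5]

# X_2^{-1}({4}) = {(1,3),(2,2),(3,1)}: no atom A_k is contained in it,
# so no union of atoms can equal it.
preimage_X2 = {w for w in omega if sum(w) == 4}
covers = [k for k in range(1, 7) if A[k] <= preimage_X2]
print(preimage_X2, covers)  # covers is empty: no A_k fits inside
```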
Best Answer
I will first answer your last question: given $X$ and $Y$ (as a vector), the random variable $XY$ is completely determined. So you can easily find three sets $A$, $B$ and $C$ such that $\mathbf{P}(XY \in A \mid X \in B, Y \in C) \neq \mathbf{P}(XY \in A).$
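As an assumed illustration (the original $X$ and $Y$ are not specified here), take $X,Y$ independent fair coins in $\{0,1\}$ and $A=B=C=\{1\}$: then $\mathbf P(XY\in A)=1/4$ while $\mathbf P(XY\in A\mid X\in B, Y\in C)=1$. A stdlib check:

```python
from itertools import product
from fractions import Fraction

# Assumed example: X, Y independent fair coins in {0,1}, A = B = C = {1}.
outcomes = list(product([0, 1], repeat=2))  # each outcome has probability 1/4
p_a = Fraction(sum(1 for x, y in outcomes if x * y == 1), len(outcomes))
conditioned = [(x, y) for x, y in outcomes if x == 1 and y == 1]
p_a_given_bc = Fraction(sum(1 for x, y in conditioned if x * y == 1),
                        len(conditioned))
print(p_a, p_a_given_bc)  # 1/4 vs 1
```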
For your first question, I will give you some results; I am not sure which one is the one you are missing.
Suppose $X$ and $Y$ are two discrete random variables and let $\Sigma(X)$ and $\Sigma(Y)$ denote the corresponding $\sigma$-algebras they generate.
For a function $f$ to be measurable from the measurable space $(X, \mathbf{X})$ to the measurable space $(Y, \mathbf{Y})$, it is necessary and sufficient that the set $f^{-1}(\mathbf{Y}) := \{f^{-1}(B) \mid B \in \mathbf{Y}\}$ be contained in $\mathbf{X}$; this just restates the definition of measurability. Note that $f^{-1}(\mathbf{Y})$ is a $\sigma$-algebra, because preimages respect set operations.

If $\mathbf{Y}$ is the $\sigma$-algebra generated by some family $\mathsf{Y} \subset 2^Y$, then $f^{-1}(\mathbf{Y})$ is the $\sigma$-algebra generated by $f^{-1}(\mathsf{Y})$. (Since the Borel $\sigma$-algebra is generated by the intervals of the form $(-\infty, a]$, you can apply this with $\mathbf{Y} = \text{the Borel $\sigma$-algebra}$ and $\mathsf{Y} = \text{the intervals of the form } (-\infty, a]$.) To see this: since $\mathsf{Y} \subset \mathbf{Y}$, it is true that $f^{-1}(\mathsf{Y}) \subset f^{-1}(\mathbf{Y})$, and hence, as the right-hand side is a $\sigma$-algebra, the $\sigma$-algebra generated by the left-hand side is contained in $f^{-1}(\mathbf{Y})$. Let $\mathcal{Y}$ be the $\sigma$-algebra generated by $f^{-1}(\mathsf{Y})$; I have just established that $\mathcal{Y} \subset f^{-1}(\mathbf{Y})$. To prove the reverse inclusion, consider the set $\bar{\mathsf{Y}}$ of subsets $B$ of $Y$ such that $f^{-1}(B) \in \mathcal{Y}$; then $\mathsf{Y} \subset \bar{\mathsf{Y}}$. Since preimage behaves well with set operations, $\bar{\mathsf{Y}}$ is a $\sigma$-algebra that contains $\mathsf{Y}$, thus it contains the $\sigma$-algebra generated by the latter set, that is, $\mathbf{Y} \subset \bar{\mathsf{Y}}$. Therefore, for every $B \in \mathbf{Y}$, $f^{-1}(B) \in \mathcal{Y}$, and thus $\mathcal{Y} = f^{-1}(\mathbf{Y}).$
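The identity $\sigma(f^{-1}(\mathsf Y)) = f^{-1}(\sigma(\mathsf Y))$ can be checked by brute force on a finite example. The sketch below (assumed toy spaces; stdlib Python) computes a generated $\sigma$-algebra on a finite set by closing under complements and unions:

```python
def generated(space, gens):
    """Smallest sigma-algebra on a finite `space` containing `gens`:
    close under complement and pairwise union until a fixed point."""
    algebra = {frozenset(), frozenset(space)} | {frozenset(g) for g in gens}
    while True:
        new = {frozenset(space) - a for a in algebra}
        new |= {a | b for a in algebra for b in algebra}
        if new <= algebra:
            return algebra
        algebra |= new

# Toy example (assumed): f maps X = {1,2,3,4} into Y = {'a','b','c'},
# and G is a generating family on Y.
X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}
G = [{'a'}]
preimage = lambda B: frozenset(x for x in X if f[x] in B)

lhs = generated(X, [preimage(g) for g in G])   # sigma(f^{-1}(G))
rhs = {preimage(B) for B in generated(Y, G)}   # f^{-1}(sigma(G))
print(lhs == rhs)  # True
```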