The sample space consists of seven possible outcomes: "1" through "5" on the die, "6" and "tails", and "6" and "heads." Let's abbreviate these as $\Omega=\{1,2,3,4,5,6T,6H\}$.
The events will be generated by the atoms $\{1\}, \{2\}, \ldots, \{6H\}$ and therefore all subsets of $\Omega$ are measurable.
The probability measure $\mathbb{P}$ is determined by its values on these atoms. The information in the question, together with the (reasonable) assumption that the coin toss is independent of the die throw, tells us those probabilities are as given in this table:
$$\begin{array}{lc}
\text{Outcome} & \text{Probability} \\
1 & \frac{1}{6} \\
2 & \frac{1}{6} \\
3 & \frac{1}{6} \\
4 & \frac{1}{6} \\
5 & \frac{1}{6} \\
\text{6T} & \frac{1-p}{6} \\
\text{6H} & \frac{p}{6}
\end{array}$$
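(As a quick check, the table's entries sum to $5 \cdot \tfrac{1}{6} + \tfrac{1-p}{6} + \tfrac{p}{6} = 1$.) The following minimal simulation sketch draws repeatedly from this distribution and compares empirical frequencies with the table; the value $p=0.5$ and the helper name `draw_outcome` are assumptions made purely for illustration:

```python
import random
from collections import Counter

def draw_outcome(p=0.5):
    """One realization: roll a fair die; if it shows 6, also flip a coin with P(heads) = p."""
    roll = random.randint(1, 6)
    if roll < 6:
        return str(roll)
    return "6H" if random.random() < p else "6T"

samples = Counter(draw_outcome() for _ in range(600_000))
for outcome in ["1", "2", "3", "4", "5", "6T", "6H"]:
    # frequencies should be near 1/6 for "1"-"5", (1-p)/6 for "6T", and p/6 for "6H"
    print(outcome, samples[outcome] / 600_000)
```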
A sequence of independent realizations of $X$ is a sequence $(\omega_1, \omega_2, \ldots, \omega_n, \ldots)$ all of whose elements are in $\Omega$. Let's call the set of all such sequences $\Omega^\infty$. The basic problem here lies in dealing with infinite sequences. The motivating idea behind the following solution is to keep simplifying the probability calculation until it can be reduced to computing the probability of a finite event. This is done in stages.
First, in order to discuss probabilities at all, we need to define a measure on $\Omega^\infty$ that makes events like "$6H$ occurs infinitely often" into measurable sets. This can be done in terms of "basic" sets that don't involve an infinite specification of values. Since we know how to define probabilities $\mathbb{P}_n$ on the set of finite sequences of length $n$, $\Omega^n$, let's define the "extension" of any measurable $E \subset \Omega^n$ to consist of all infinite sequences $\omega\in\Omega^\infty$ that have some element of $E$ as their prefix:
$$E^\infty = \{(\omega_i)\in\Omega^\infty\,|\, (\omega_1,\ldots,\omega_n)\in E\}.$$
The smallest sigma-algebra on $\Omega^\infty$ that contains all such sets is the one we will work with.
The probability measure $\mathbb{P}_\infty$ on $\Omega^\infty$ is determined by the finite probabilities $\mathbb{P}_n$. That is, for all $n$ and all $E\subset \Omega^n$,
$$\mathbb{P}_\infty(E^\infty) = \mathbb{P}_n(E).$$
(The preceding statements about the sigma-algebra on $\Omega^\infty$ and the measure $\mathbb{P}_\infty$ are elegant ways to carry out what will amount to limiting arguments.)
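For instance, taking $n=1$ and $E=\{6H\} \subset \Omega^1$, the extension $E^\infty$ consists of all infinite sequences whose first term is $6H$, and
$$\mathbb{P}_\infty(E^\infty) = \mathbb{P}_1(\{6H\}) = \frac{p}{6}.$$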
Having managed these formalities, we can do the calculations. To get started, we need to establish that it even makes sense to discuss the "probability" of $6H$ occurring infinitely often. This event can be constructed as the intersection of events of the type "$6H$ occurs at least $n$ times", for $n=1, 2, \ldots$. Because it is a countable intersection of measurable sets, it is measurable, so its probability exists.
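In symbols, let $A_N \subset \Omega^\infty$ denote the event that the $N^\text{th}$ trial yields $6H$; it is measurable, being the extension of a finite event. Then
$$\{6H \text{ occurs infinitely often}\} \;=\; \bigcap_{n \ge 1}\,\bigcup_{N \ge n} A_N,$$
a countable intersection of countable unions of measurable sets. (Equivalently, "$6H$ occurs at least $n$ times" is the union over all $N$ of the events "$6H$ has occurred at least $n$ times by roll $N$," each of which is the extension of a finite event.)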
Second, we need to compute this probability of $6H$ occurring infinitely often. One way is to compute the probability of the complementary event: what is the chance that $6H$ occurs only finitely many times? This event $E$ will be measurable, because it's the complement of a measurable set, as we have already established. $E$ can be partitioned into events $E_n$ of the form "$6H$ occurs exactly $n$ times", for $n=0, 1, 2, \ldots$. Because there are only countably many of these, the probability of $E$ will be the (countable) sum of the probabilities of the $E_n$. What are these probabilities?
Once more we can do a partition: $E_n$ breaks into events $E_{n,N}$ of the form "the $n^\text{th}$ occurrence of $6H$ is at roll $N$, and $6H$ never occurs again." These events are disjoint and countable in number, so all we have to do (again!) is to compute their chances and add them up. But finally we have reduced the problem to a finite calculation: $\mathbb{P}_\infty(E_{n,N})$ is no greater than the chance of any finite event of the form "the $n^\text{th}$ occurrence of $6H$ is at roll $N$ and $6H$ does not occur between rolls $N$ and $M \gt N$." The calculation is easy because we don't really need to know the details: each time $M$ increases by $1$, the chance, whatever it may be, is further multiplied by the chance that $6H$ is not rolled, which is $1-p/6$. We thereby obtain a geometric sequence with common ratio $r = 1-p/6 \lt 1$. Regardless of the starting value, this bound becomes arbitrarily small as $M$ grows.
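Explicitly, for every $M \gt N$ (and $n \ge 1$),
$$\mathbb{P}_\infty(E_{n,N}) \;\le\; \mathbb{P}_\infty\big(\text{the } n^\text{th}\ 6H \text{ is at roll } N \text{ and no } 6H \text{ occurs at rolls } N+1, \ldots, M\big) \;=\; c_N\, r^{M-N},$$
where $c_N \le 1$ is the chance that the $n^\text{th}$ occurrence of $6H$ is at roll $N$ and $r = 1-p/6$; the right-hand side can be made as small as we please by taking $M$ large. (The case $n=0$ is even simpler: $E_0$ is contained in the event "$6H$ does not occur in the first $M$ rolls," whose probability is $r^M$.)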
(Notice that we did not need to take a limit of probabilities: we only needed to show that the probability of $E_{n,N}$ is bounded above by numbers that converge to zero.)
Consequently $\mathbb{P}_\infty(E_{n,N})$ cannot have any value greater than $0$, whence it must equal $0$. Accordingly,
$$\mathbb{P}_\infty(E_n) = \sum_{N=0}^\infty \mathbb{P}_\infty(E_{n,N}) = 0.$$
Where are we? We have just established that for any $n \ge 0$, the chance of observing exactly $n$ outcomes of $6H$ is nil. By adding up all these zeros, we conclude that $$\mathbb{P}_\infty(E) = \sum_{n=0}^\infty \mathbb{P}_\infty(E_n) = 0.$$ This is the chance that $6H$ occurs only finitely many times. Consequently, the chance that $6H$ occurs infinitely many times is $1-0 = 1$, QED.
Every statement in the preceding paragraph is so obvious as to be intuitively trivial. The exercise of demonstrating its conclusions with some rigor, using the definitions of sigma algebras and probability measures, helps show that these definitions are the right ones for working with probabilities, even when infinite sequences are involved.
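As an informal complement to the rigorous argument (a simulation cannot, of course, verify an "infinitely often" statement), the following minimal sketch shows the count of $6H$ outcomes climbing without any sign of stopping, at roughly the expected rate $np/6$; the value $p=0.5$ and the helper name are assumptions made for illustration:

```python
import random

def draw_outcome(p=0.5):
    """Roll a fair die; if it shows 6, also flip a coin with P(heads) = p (p assumed for illustration)."""
    roll = random.randint(1, 6)
    if roll < 6:
        return str(roll)
    return "6H" if random.random() < p else "6T"

count_6H = 0
for n in range(1, 1_000_001):
    count_6H += draw_outcome() == "6H"
    if n % 200_000 == 0:
        # the running count keeps growing, roughly like n * p / 6
        print(f"after {n:>9,} trials: {count_6H:>6,} occurrences of 6H (expected about {n * 0.5 / 6:,.0f})")
```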
You are making this problem a lot harder than it needs to be because the
random variables in question are two-valued, and the problem can be
treated as one of independence of events rather than independence of
random variables. In what follows, I will treat the independence of
events even though the events will be stated in terms of random variables.
"Let $Z_0,Z_1,Z_2,\cdots$ be independent random variables $\ldots$"
I will take this as the assertion that the countably infinite collection of events $A_i = \{Z_i = +1\}$ is a collection of independent events. Now, a countable collection of events is said to be a collection of
independent events if each finite subset (of cardinality $2$ or
more) is a collection of independent events. Recall that
$n\geq 2$ events $B_0, B_1, \cdots, B_{n-1}$ are said to be independent events
if
$$P(B_0\cap B_1\cap \cdots \cap B_{n-1})
= P(B_0)P(B_1) \cdots P(B_{n-1})$$
and every proper subset consisting of two or more of these events is a
collection of independent events. Alternatively,
$B_0, B_1, \cdots, B_{n-1}$ are said to be independent events
if the following $2^n$ equations hold:
$$P(B_0^*\cap B_1^*\cap \cdots \cap B_{n-1}^*)
= P(B_0^*)P(B_1^*)\cdots P(B_{n-1}^*)\tag{1}$$
Note that in $(1)$, $B_i^*$ stands for $B_i$ or $B_i^c$
(same on both sides of $(1)$) and the $2^n$ choices
($B_i$ or $B_i^c$) give us the $2^n$ equations.
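For example, with $n=2$ the four equations in $(1)$ are
$$\begin{align}
P(B_0\cap B_1) &= P(B_0)P(B_1), & P(B_0\cap B_1^c) &= P(B_0)P(B_1^c), \\
P(B_0^c\cap B_1) &= P(B_0^c)P(B_1), & P(B_0^c\cap B_1^c) &= P(B_0^c)P(B_1^c),
\end{align}$$
and in this case any one of the four implies the other three.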
For our application, $A_i = \{Z_i = +1\}$ and $A_i^c = \{Z_i=-1\}$,
and so checking whether the $2^n$ equations
$$P(A_0^*\cap A_1^*\cap \cdots \cap A_{n-1}^*)
= P(A_0^*)P(A_1^*)\cdots P(A_{n-1}^*)\tag{2}$$
hold or not is equivalent to checking that the
joint probability mass function (pmf) of $Z_0, Z_1, \cdots, Z_{n-1}$
factors into the product of the $n$ marginal pmfs at each and
every one of the points $(\pm 1, \pm 1, \cdots, \pm 1)$, which is
what you would be doing if you had never heard of independent
events but only of independent random variables.
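In other words, the $2^n$ equations $(2)$ say precisely that
$$P(Z_0 = z_0, Z_1 = z_1, \cdots, Z_{n-1} = z_{n-1}) = \prod_{i=0}^{n-1} P(Z_i = z_i) \quad \text{for all } (z_0, z_1, \cdots, z_{n-1}) \in \{+1,-1\}^n.$$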
Thus, the statement "Let $Z_0,Z_1,Z_2,\cdots$ be independent random variables $\ldots$"
does mean, among other things, that $Z_0,Z_1,Z_2,\cdots, Z_{n-1}$
is a finite collection of independent random variables. But,
does the assertion
"For all $n \geq 2$, $\{Z_0,Z_1,Z_2,\cdots, Z_{n-1}\}$ is a set of $n$ independent random variables"
imply that the
countably infinite set $\{Z_0,Z_1,Z_2,\cdots \}$ is a
collection of independent random variables?
The answer is Yes, because we know by hypothesis
that some specific finite
subsets of $\{Z_0,Z_1,Z_2,\cdots \}$ are independent random
variables, while any other finite subset, say $\{Z_2, Z_5, Z_{313}\}$,
is a subset of $\{Z_0, Z_1, \cdots, Z_{313}\}$ which are independent
per the hypothesis and so the subset is also a set of independent
random variables.
In your question, with each $a_i \in \{+1, -1\}$ and
defining $b_i = \prod_{j=0}^i a_j$, which is also in $\{+1,-1\}$,
\begin{align}
P(X_0 = a_0, X_1 = a_1, \cdots, X_n = a_n)
&= P(Z_0 = a_0, Z_1 = a_0a_1, Z_2 = a_0a_1a_2, \cdots, Z_n = a_0a_1\cdots a_n)\\
&= P(Z_0=b_0, Z_1 = b_1, \cdots, Z_n = b_n)\\
&= \prod_{i=0}^n P(Z_i = b_i)\\
&= 2^{-(n+1)}\\
&= \prod_{i=0}^n P(X_i = a_i),
\end{align}
that is, all $2^{n+1}$ equations of the form $(2)$ hold.
Thus, for each $n \geq 1$, $X_0, X_1, \cdots, X_n$ are
independent random variables, and therefore the
countably infinite collection $\{X_0, X_1, \cdots\}$
of random variables is a collection of independent
random variables.
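As a sanity check (under the assumption, suggested by the displayed computation, that the two sequences are related by $Z_i = X_0 X_1 \cdots X_i$, i.e. $X_i = Z_{i-1} Z_i$ with $Z_{-1} := 1$, and that each $Z_i$ equals $\pm 1$ with probability $\tfrac12$), a minimal sketch can enumerate all $2^{n+1}$ equally likely sign patterns of $(Z_0, \ldots, Z_n)$ and confirm that every joint outcome of $(X_0, \ldots, X_n)$ has probability $2^{-(n+1)}$, the product of the marginal probabilities:

```python
from itertools import product

n = 4  # check X_0, ..., X_n for a small n
joint = {}  # joint pmf of (X_0, ..., X_n)

# Each of the 2^(n+1) sign patterns of (Z_0, ..., Z_n) has probability 2^(-(n+1)).
for zs in product([-1, 1], repeat=n + 1):
    xs = []
    prev = 1  # Z_{-1} := 1, so X_0 = Z_0
    for z in zs:
        xs.append(prev * z)  # X_i = Z_{i-1} * Z_i  (assumed relation)
        prev = z
    joint[tuple(xs)] = joint.get(tuple(xs), 0) + 2 ** -(n + 1)

# Every one of the 2^(n+1) outcomes of (X_0, ..., X_n) should have probability
# 2^(-(n+1)), which is the product of the marginal probabilities P(X_i = a_i) = 1/2.
assert len(joint) == 2 ** (n + 1)
assert all(abs(prob - 2 ** -(n + 1)) < 1e-12 for prob in joint.values())
print("factorization (2) holds for all", 2 ** (n + 1), "sign patterns")
```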
After reading over my revised answer, perhaps it is I who is
making the problem much harder than necessary. My apologies.
Yes, this is a Markov chain. In general, in order to show that the process is a Markov chain, you will need to show that the transition probability depends on the "history" of the chain only through its most recent value. In the present case it is relatively simple to derive the exact form for the transition probabilities, so this can be done directly.
Your question does not specify how many sides your die has, so I am going to proceed for the general case of a fair die with $k \in \mathbb{N}$ sides. The outcomes of the rolls are represented by the sequence $X_1,X_2,X_3,... \sim \text{IID U} \{ 1,...,k \}$ and $Z_n \equiv \max (X_1,...,X_n)$ is the maximum outcome in the first $n$ rolls. We can write the latter quantity in recursive form as $Z_{n+1} = \max (Z_n, X_{n+1})$, which gives the inverse relationship:
$$\begin{matrix} X_{n+1} \leqslant Z_n \quad & & & \text{if } Z_{n+1} = Z_n, \\[6pt] X_{n+1} = Z_{n+1} & & & \text{if } Z_{n+1} > Z_n. \\[6pt] \end{matrix}$$
Using the independence of the underlying sequence of uniform values, it is simple to establish that $X_{n+1} \ \bot \ (Z_1,...,Z_n)$, so for all $z=1,...,k$ we have the following transition probabilities:
$$\begin{align} T_{n+1}(z|\mathbf{z}_n) &\equiv \mathbb{P}(Z_{n+1}=z| Z_1 = z_1,...,Z_n = z_n) \\[14pt] &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] \mathbb{P}(X_{n+1} \leqslant z| Z_1 = z_1,...,Z_n = z_n) & & & \text{if } z = z_n \\[10pt] \mathbb{P}(X_{n+1} = z| Z_1 = z_1,...,Z_n = z_n) & & & \text{if } z > z_n \\[10pt] \end{cases} \\[14pt] &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] \mathbb{P}(X_{n+1} \leqslant z) & & & \text{if } z = z_n \\[10pt] \mathbb{P}(X_{n+1} = z) & & & \text{if } z > z_n \\[10pt] \end{cases} \\[14pt] &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] z/k & & & \text{if } z = z_n \\[10pt] 1/k & & & \text{if } z > z_n \\[10pt] \end{cases} \\[14pt] &= \frac{1}{k} \cdot \mathbb{I}(z > z_n) + \frac{z}{k} \cdot \mathbb{I}(z = z_n). \\[6pt] \end{align}$$
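For instance, with a $k=3$ sided die the one-step transition matrix (rows indexed by $z_n$, columns by $z$) is
$$T = \begin{pmatrix} \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\ 0 & \tfrac{2}{3} & \tfrac{1}{3} \\ 0 & 0 & 1 \end{pmatrix},$$
so the chain can only move upward and is absorbed at the maximum face $k$.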
Since this transition probability depends on the history $\mathbf{z}_n$ only through the value $z_n$, you have a Markov chain.
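As an informal check of the formula (not needed for the proof), here is a minimal simulation sketch; the values $k=6$ and $n=5$ and the variable names are illustrative assumptions. It estimates $\mathbb{P}(Z_{n+1}=z \mid Z_n = z_n)$ from simulated rolls and compares it with $\tfrac{1}{k}\,\mathbb{I}(z > z_n) + \tfrac{z}{k}\,\mathbb{I}(z = z_n)$:

```python
import random
from collections import defaultdict

k, n, trials = 6, 5, 200_000
counts = defaultdict(lambda: defaultdict(int))  # counts[z_n][z_{n+1}]

for _ in range(trials):
    rolls = [random.randint(1, k) for _ in range(n + 1)]
    z_n = max(rolls[:n])           # Z_n = max of the first n rolls
    z_next = max(z_n, rolls[n])    # Z_{n+1} = max(Z_n, X_{n+1})
    counts[z_n][z_next] += 1

for z_n in sorted(counts):
    total = sum(counts[z_n].values())
    for z in sorted(counts[z_n]):
        theory = z / k if z == z_n else (1 / k if z > z_n else 0)
        print(f"P(Z_{n+1}={z} | Z_{n}={z_n}) ~ {counts[z_n][z] / total:.3f}  (theory {theory:.3f})")
```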