[Tex/LaTex] How to write a pseudo-algorithm with algorithm2e package inside a tcolorbox environment

Tags: algorithm2e, tcolorbox

I recently saw, in the second edition of Reinforcement Learning: An Introduction by Sutton and Barto, an appealing way to display pseudo-algorithms. An example image is shown below.

[Image: boxed pseudocode example from Sutton and Barto]

I think the box is made with the tcolorbox package, and I should be able to produce something similar. However, I would like to keep my pseudo-algorithm in an algorithm environment from the algorithm2e package. Is there a way to combine the two? Ideally, the background color, the box, and the caption would look like the image, while the internal structure would still come from algorithm2e's algorithm environment.

This is the code I have so far:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[ruled,longend]{algorithm2e}
\usepackage{textgreek}
\usepackage{amssymb}

\begin{document}
\begin{algorithm}
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\caption{Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$}
\end{algorithm}
\end{document}  

This is the result:

[Image: the plain ruled algorithm produced by the code above]

Basically, this gives only the algorithm part. I don't know where to start in changing its appearance: the algorithm2e documentation offers very few commands for adjusting the visual style, and none of them seems helpful here.

Best Answer

I had missed the part of the algorithm2e manual that states that the H option makes the environment non-floating, so it can be placed inside a tcolorbox environment.

This is the updated code:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[longend]{algorithm2e}
\usepackage{textgreek}
\usepackage{amssymb}
\usepackage{tcolorbox}

\begin{document}
\begin{tcolorbox}[fonttitle=\bfseries, title=Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$]
\begin{algorithm}[H]
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\end{algorithm}
\end{tcolorbox}
\end{document}

This is the visual result:

[Image: the algorithm rendered inside a tcolorbox with a bold title]
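
To get closer to the grey, sharp-cornered look of the box in the book, a few tcolorbox keys can be added to the same environment. The following is only a sketch: the specific colors, rule width, and corner style are my own guesses at the book's appearance, not values taken from it.

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[longend]{algorithm2e}
\usepackage{textgreek}
\usepackage{amssymb}
\usepackage{tcolorbox}

\begin{document}
\begin{tcolorbox}[
    colback=gray!15,      % light grey body, approximating the book's shading (guessed value)
    colframe=black,       % black frame
    colbacktitle=gray!15, % title strip in the same grey (guessed value)
    coltitle=black,       % black title text
    fonttitle=\bfseries,
    boxrule=0.5pt,        % thin frame (guessed value)
    sharp corners,
    title=Q-learning (off-policy TD control) for estimating $\pi \approx \pi_*$]
\begin{algorithm}[H]
Algorithm parameters: step size $\alpha \in (0, 1]$, small $\epsilon > 0$\;
Initialize $Q(s, a)$, for all $s \in \mathcal{S}^+, a \in \mathcal{A}(s)$, arbitrarily except that $Q(\mathrm{terminal}, \cdot) = 0$\;

\ForEach{episode}{
    Initialize $S$\;
    \ForEach{step of episode}{
        Choose $A$ from $S$ using policy derived from $Q$ (e.g., \textepsilon-greedy)\;
        Take action $A$, observe $R$, $S'$\;
        $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$\;
        $S \leftarrow S'$\;
    }
}
\end{algorithm}
\end{tcolorbox}
\end{document}

Since tcolorbox draws the frame, the ruled option of algorithm2e is no longer needed; only the H option matters so that the algorithm does not float.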


Usually I don't post self-answered questions. This was a genuine question, but I found the answer five minutes after posting it. Since it could be helpful to others, I'll leave it here.