The standard form is sometimes defined with inequalities ($\leq$) and sometimes with equalities. If you want to start from an algorithm, it is useful to have equalities.
If all variables have to be greater than or equal to $2016$, then you have to introduce slack (surplus) variables as you proposed.
$x_1\geq 2016, x_2\geq 2016, x_3\geq 2016, \ldots, x_n\geq 2016$
becomes
$x_1-s_1= 2016, x_2-s_2= 2016, x_3-s_3= 2016, \ldots, x_n-s_n= 2016$
with $x_1, x_2,\dots, x_n, s_1, s_2, s_3, \ldots, s_n \geq 0$
Here you get $n$ additional constraints. To get an initial basic feasible solution, artificial variables have to be introduced.
$x_1-s_1+a_1= 2016, x_2-s_2+a_2= 2016, x_3-s_3+a_3= 2016, \ldots, x_n-s_n+a_n= 2016$
with $x_1, x_2,\dots, x_n, s_1, s_2, s_3, \ldots, s_n, a_1, a_2,\dots, a_n \geq 0$
The initial solution is $x_1=x_2=x_3=\ldots=x_n=s_1=s_2=s_3=\ldots=s_n=0$ and $a_1=a_2=a_3=\ldots=a_n=2016$.
The decision variables are all non-negative.
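As a quick sanity check, the Phase-I starting point described above can be verified numerically. This is a minimal Python sketch with an illustrative $n=3$; the variable names are mine, not part of the original formulation.

```python
# Phase-I starting point for the constraints x_i - s_i + a_i = 2016
# (illustrative n = 3): original and surplus variables at 0,
# artificial variables at 2016.
n = 3
x = [0] * n        # original decision variables
s = [0] * n        # slack/surplus variables
a = [2016] * n     # artificial variables

# Every constraint x_i - s_i + a_i = 2016 holds at this point ...
residuals = [x[i] - s[i] + a[i] - 2016 for i in range(n)]
assert residuals == [0] * n
# ... and all variables are non-negative, so it is a feasible start.
assert all(v >= 0 for v in x + s + a)
```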
Introducing slack variables ($s_i$) is a good idea. The two constraints can then be written as two matrix equalities. I assume $d$ is a $1\times n$ vector.
$$\underbrace{\begin{pmatrix} a_{11}&a_{12}&\ldots & a_{1n} & -s_{1} & 0 & \ldots & 0 \\ \vdots & \vdots &\vdots &\vdots & \vdots &\vdots &\vdots &\vdots \\ a_{n1}&a_{n2}&\ldots & a_{nn} & 0 & 0 & \ldots & -s_{n} \end{pmatrix}}_{n\times 2n=\color{blue}A}\cdot \underbrace{\begin{pmatrix} x_1\\ x_2\\\vdots \\ x_n \\1 \\1 \\ \vdots \\ 1\end{pmatrix}}_{2n \times 1=\color{blue}B}=\begin{pmatrix} b_1\\ b_2\\\vdots \\ b_n\end{pmatrix} $$
$$\begin{pmatrix}d_{n+1} & d_{n+2} & \ldots & d_{2n-1} & d_{2n} \end{pmatrix}=\color{blue}C $$ $$\begin{pmatrix}x_{n+1} & 0 &\ldots& 0 \\ 0&x_{n+2} &\ldots& 0 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0& \ldots & x_{2n}\end{pmatrix}=\color{blue}D$$
$x_{i}, s_{i} \in \mathbb N_0$
Maybe a block matrix would do the job. I use the blue letters $A$, $B$, $C$, and $D$ from above; $O$ denotes zero vectors or zero matrices. The combined system is
$$\begin{pmatrix} A & O\\ O & C \end{pmatrix}\cdot \begin{pmatrix} B & O\\ O & D \end{pmatrix}=\ldots$$
Best Answer
For part b), you can introduce another binary variable $x_{i,j}$ to indicate the start, together with constraints: \begin{align} x_{i,j} &\le y_{i,k} &&\text{for $k\in\{j,\dots,j+\overline{t}-1\}$} \tag1\\ x_{i,j} &\le 1 - y_{i,j-1} \tag2\\ (1 - y_{i,j-1}) + y_{i,j} - 1 &\le x_{i,j} \tag3 \end{align} Constraint $(1)$ enforces $x_{i,j} = 1 \implies \bigwedge_{k=j}^{j+\overline{t}-1} (y_{i,k} = 1)$. Constraint $(2)$ enforces $x_{i,j} = 1 \implies y_{i,j-1} = 0$. Constraint $(3)$ enforces $(y_{i,j-1} = 0 \land y_{i,j} = 1) \implies x_{i,j} = 1$.
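Because the constraints are small, the logic of $(1)$–$(3)$ can be checked by brute force. A Python sketch for a single machine $i$ with illustrative values $T=6$, $\overline{t}=2$ (the helper names are mine): the run-start indicator implied by $(2)$ and $(3)$ satisfies all three constraint families exactly when every run of ones in $y$ has length at least $\overline{t}$.

```python
from itertools import product

T, tbar = 6, 2  # illustrative horizon and minimum run length

def run_starts(y):
    # x_j = 1 exactly when a run of ones starts at j (y_{j-1}=0, y_j=1);
    # out-of-range y values are treated as 0.
    get = lambda j: y[j] if 0 <= j < T else 0
    return [int(get(j) == 1 and get(j - 1) == 0) for j in range(T)]

def satisfies_123(y, x):
    get = lambda j: y[j] if 0 <= j < T else 0
    for j in range(T):
        # (1): x_j <= y_k for k = j .. j+tbar-1
        if any(x[j] > get(k) for k in range(j, j + tbar)):
            return False
        # (2): x_j <= 1 - y_{j-1}
        if x[j] > 1 - get(j - 1):
            return False
        # (3): (1 - y_{j-1}) + y_j - 1 <= x_j
        if (1 - get(j - 1)) + get(j) - 1 > x[j]:
            return False
    return True

def min_run_ok(y):
    # True iff every maximal run of ones in y has length >= tbar.
    runs, current = [], 0
    for v in list(y) + [0]:
        if v:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    return all(r >= tbar for r in runs)

# Feasibility of (1)-(3) coincides with the minimum-run-length condition.
for y in product([0, 1], repeat=T):
    assert satisfies_123(y, run_starts(y)) == min_run_ok(y)
```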
Alternatively, you can omit $x$ and just impose $$(1 - y_{i,j-1}) + y_{i,j} - 1 \le y_{i,k} \quad\text{for $k\in\{j+1,\dots,j+\overline{t}-1\}$},$$ equivalently, $$y_{i,j} - y_{i,j-1} \le y_{i,k} \quad\text{for $k\in\{j+1,\dots,j+\overline{t}-1\}$}, \tag4$$ (where $y_{i,j}$ is treated as $0$ if $j<0$ or $j>T$), which enforces $$(y_{i,j-1} = 0 \land y_{i,j} = 1) \implies \bigwedge_{k=j+1}^{j+\overline{t}-1} (y_{i,k} = 1).$$
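The same kind of exhaustive check works for the $x$-free formulation $(4)$. A short Python sketch with illustrative values $T=6$, $\overline{t}=2$: over all binary schedules, $(4)$ holds exactly for those whose runs of ones all have length at least $\overline{t}$.

```python
from itertools import product

T, tbar = 6, 2  # illustrative horizon and minimum run length

def satisfies_4(y):
    # (4): y_j - y_{j-1} <= y_k for k = j+1 .. j+tbar-1,
    # with y treated as 0 outside the horizon, as in the answer.
    get = lambda j: y[j] if 0 <= j < T else 0
    return all(get(j) - get(j - 1) <= get(k)
               for j in range(T)
               for k in range(j + 1, j + tbar))

def min_run_ok(y):
    # True iff every maximal run of ones in y has length >= tbar.
    runs, current = [], 0
    for v in list(y) + [0]:
        if v:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    return all(r >= tbar for r in runs)

# (4) cuts off exactly the schedules with a run shorter than tbar.
for y in product([0, 1], repeat=T):
    assert satisfies_4(y) == min_run_ok(y)
```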
Yet a third approach is to enforce $$\left(y_{i,j-1} \land \lnot \bigwedge_{k=1}^{\overline{t}} y_{i,j-k}\right) \implies y_{i,j},$$ equivalently, $$\left(\lnot y_{i,j-1} \lor \bigwedge_{k=1}^{\overline{t}} y_{i,j-k}\right) \lor y_{i,j},$$ which has conjunctive normal form $$\bigwedge_{k=1}^{\overline{t}} \left(\lnot y_{i,j-1} \lor y_{i,j-k} \lor y_{i,j}\right),$$ yielding linear constraints $$1 - y_{i,j-1} + y_{i,j-k} + y_{i,j} \ge 1 \quad \text{for $k \in \{1,\dots,\overline{t}\}$}.$$ Rearranging this, we obtain $$y_{i,j} \ge y_{i,j-1} - y_{i,j-k} \quad \text{for $k \in \{1,\dots,\overline{t}\}$}. \tag5$$ Now we can recover @toronto hrb's formulation by aggregation: $$y_{i,j} \ge y_{i,j-1} - \frac{1}{\overline{t}}\sum_{k=1}^{\overline{t}} y_{i,j-k} \tag6$$ So $(6)$ is correct but weaker than $(5)$.
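The weakness claim can be seen at a concrete fractional point. With $\overline{t}=2$ and the illustrative values $y_{i,j-2}=0$, $y_{i,j-1}=1$, $y_{i,j}=0.5$ (my own numbers, valid only in the LP relaxation), the point satisfies the aggregated constraint $(6)$ but violates the disaggregated family $(5)$, so $(5)$ cuts off more of the relaxation:

```python
# Fractional point showing that (6) is weaker than (5) for tbar = 2.
tbar = 2
y_prev = [0.0, 1.0]   # y_{j-2}, y_{j-1}
y_j = 0.5

# (5): y_j >= y_{j-1} - y_{j-k} for each k = 1..tbar  (k = 2 fails: 0.5 < 1)
disaggregated = all(y_j >= y_prev[-1] - y_prev[-k] for k in range(1, tbar + 1))

# (6): y_j >= y_{j-1} - (1/tbar) * sum_k y_{j-k}  (holds: 0.5 >= 0.5)
aggregated = y_j >= y_prev[-1] - sum(y_prev) / tbar

assert aggregated and not disaggregated
```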