I think the better way to derive this is to first observe the Biot-Savart law,
$$
\mathbf B(\mathbf r)=\frac{\mu_0}{4\pi}\int\mathbf J(\mathbf r')\times\frac{\hat{r}}{r^2}\,\mathrm dV'\tag{1}
$$
where $r=|\mathbf r-\mathbf r'|$ is the distance from the source point to the field point and $\hat r$ the corresponding unit vector. Since
$$
\frac{\hat r}{r^2}=-\nabla_r\left(\frac1r\right)
$$
(your text may derive this; if not, you can prove it by expanding the right-hand side), we can write (1) as
$$
\mathbf B(\mathbf r)=-\frac{\mu_0}{4\pi}\int\mathbf J(\mathbf r')\times\nabla_r\left(\frac{1}{r}\right)\,\mathrm dV'\tag{2}
$$
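The gradient identity used above can be spot-checked symbolically. The following sketch (using sympy; not part of the original derivation) verifies component-by-component that $\nabla(1/r)=-\hat r/r^2=-\mathbf r/r^3$:

```python
# Symbolic check (sympy) that grad(1/r) = -r_hat/r^2, where r = |r_vec|.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Component-wise gradient of 1/r
grad = [sp.diff(1/r, v) for v in (x, y, z)]

# Expected: -r_hat/r^2 = -(x, y, z)/r^3
expected = [-x/r**3, -y/r**3, -z/r**3]

assert all(sp.simplify(g - e) == 0 for g, e in zip(grad, expected))
print("grad(1/r) = -r_hat/r^2 verified")
```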
Since $\mathbf J$ is a function of $\mathbf r'$ and not $\mathbf r$, we can move it inside the derivative and swap the order of the cross product at the cost of a sign (for constant $\mathbf J$, $\nabla\times(\mathbf J f)=\nabla f\times\mathbf J=-\mathbf J\times\nabla f$),
$$
\mathbf B(\mathbf r)=\frac{\mu_0}{4\pi}\int\nabla_r\times\frac{\mathbf J(\mathbf r')}{r}\,\mathrm dV'=\nabla_r\times\frac{\mu_0}{4\pi}\int\frac{\mathbf J(\mathbf r')}{r}\,\mathrm dV'\tag{3}
$$
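The cross-product swap used to get (3) can also be checked directly. Below is a small sympy sketch (mine, not the answer's) confirming that for a constant vector $\mathbf J$, $\nabla\times(\mathbf J/r)=-\mathbf J\times\nabla(1/r)$:

```python
# Symbolic check (sympy) that for a constant vector J,
# curl(J/r) = -J x grad(1/r), the swap used to go from (2) to (3).
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
J = sp.Matrix([1, 2, 3])             # arbitrary constant "current" vector
r = sp.sqrt(x**2 + y**2 + z**2)
F = J / r                            # the integrand J/r at a fixed source point

def curl(V):
    return sp.Matrix([
        sp.diff(V[2], y) - sp.diff(V[1], z),
        sp.diff(V[0], z) - sp.diff(V[2], x),
        sp.diff(V[1], x) - sp.diff(V[0], y),
    ])

grad_inv_r = sp.Matrix([sp.diff(1/r, v) for v in (x, y, z)])
lhs = curl(F)
rhs = -J.cross(grad_inv_r)

assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(3, 1)
print("curl(J/r) = -J x grad(1/r) verified")
```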
Then we can define the vector potential as
$$
\mathbf A(\mathbf r)=\frac{\mu_0}{4\pi}\int\frac{\mathbf J(\mathbf r')}{r}\,\mathrm dV'
$$
to get
$$
\mathbf B(\mathbf r)=\nabla\times\mathbf A(\mathbf r)\tag{4}
$$
where we drop the subscript $r$ because it's implied that it's over $\mathbf r$.
With that established, we can take the divergence of (4):
$$
\nabla\cdot\mathbf B=\nabla\cdot\nabla\times\mathbf A\equiv0
$$
by the fact that the divergence of every curl is identically zero (worth the effort to prove this).
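If you want to see this identity verified rather than prove it by hand, here is a short sympy sketch (added for illustration) that computes $\nabla\cdot(\nabla\times\mathbf A)$ for an arbitrary smooth vector field and confirms it vanishes:

```python
# Symbolic verification (sympy) that div(curl A) = 0 for an arbitrary
# smooth vector field A; the cancellation relies on equality of mixed partials.
import sympy as sp

x, y, z = sp.symbols('x y z')
Ax, Ay, Az = [f(x, y, z) for f in sp.symbols('A_x A_y A_z', cls=sp.Function)]

curl = (sp.diff(Az, y) - sp.diff(Ay, z),
        sp.diff(Ax, z) - sp.diff(Az, x),
        sp.diff(Ay, x) - sp.diff(Ax, y))

div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
assert sp.simplify(div_curl) == 0
print("div(curl A) == 0 identically")
```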
You seem to be starting from the integral formulation of Faraday's law; i.e., your hypothesis is that for every "nice" 2-dimensional surface $S$ (i.e., a compact oriented smooth $2$-dimensional submanifold of $\Bbb{R}^3$), we have
\begin{align}
\int_{\partial S}\mathbf{E}(\mathbf{r},t)\cdot d\mathbf{l}&=-\frac{d}{dt}\int_S\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
I'm not sure why your professor is trying to invoke time-varying surfaces. Trying to bring in time-varying surfaces only obscures the matter (because if we consider a smoothly varying family of surfaces $\Sigma_t$, then $\frac{d}{dt}\int_{\Sigma_t}\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}$ will be more complicated and have extra terms).
So, fix such a nice surface $S$ for the rest of the discussion. Then Leibniz's integral rule (one of the simplest versions suffices here) tells us that the $\frac{d}{dt}$ can be brought inside the integral and converted into $\frac{\partial}{\partial t}$, i.e.,
\begin{align}
\int_{\partial S}\mathbf{E}(\mathbf{r},t)\cdot d\mathbf{l}&=\int_{S}-\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
Using Stokes' theorem, and rearranging, we get
\begin{align}
\int_S\left(\nabla\times\mathbf{E}+\frac{\partial \mathbf{B}}{\partial t}\right)\cdot d\mathbf{S}&= 0
\end{align}
Note that an integral of a function over a single surface being zero tells us nothing about the function by itself. However, in our case, the integral is zero for EVERY possible surface $S$. Therefore the integrand must vanish identically, and thus
\begin{align}
\nabla \times \mathbf{E}&=-\frac{\partial \mathbf{B}}{\partial t}.
\end{align}
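As a concrete sanity check of this differential form (the plane-wave example here is my own, not part of the derivation), one can verify symbolically that a standard EM plane wave with $\mathbf E=(0,E_0\cos(kx-\omega t),0)$ and $\mathbf B=(0,0,\frac{E_0 k}{\omega}\cos(kx-\omega t))$ satisfies $\nabla\times\mathbf E=-\partial_t\mathbf B$:

```python
# Check (sympy) that an EM plane wave satisfies curl E = -dB/dt.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
k, w, E0 = sp.symbols('k omega E_0', positive=True)

phase = k * x - w * t
E = sp.Matrix([0, E0 * sp.cos(phase), 0])
B = sp.Matrix([0, 0, (E0 * k / w) * sp.cos(phase)])

curl_E = sp.Matrix([
    sp.diff(E[2], y) - sp.diff(E[1], z),
    sp.diff(E[0], z) - sp.diff(E[2], x),
    sp.diff(E[1], x) - sp.diff(E[0], y),
])

assert (curl_E + sp.diff(B, t)).applyfunc(sp.simplify) == sp.zeros(3, 1)
print("plane wave satisfies curl E = -dB/dt")
```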
As for why we can swap the $\frac{d}{dt}$ with the integral to get a partial derivative, just observe the following. Define
\begin{align}
\mathbf{b}(t)&=\int_S\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
Then, writing out a difference quotient, and using linearity of integrals, we have
\begin{align}
\frac{\mathbf{b}(t+h)-\mathbf{b}(t)}{h}&=\int_S\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}
\end{align}
Thus, by taking the limits $h\to 0$, we get
\begin{align}
\mathbf{b}'(t)&=\lim\limits_{h\to 0}\frac{\mathbf{b}(t+h)-\mathbf{b}(t)}{h}\\
&=\lim\limits_{h\to 0}\int_S\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}\\
&=\int_S\lim_{h\to 0}\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}\tag{$*$}\\
&=\int_S\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}
\end{align}
In the step $(*)$, we of course have to justify why we are allowed to interchange the limit with the integral. For this one has to make some "regularity assumptions" on $\mathbf{B}$, which for all intents and purposes you can assume are satisfied in any application to physics. The most common way to justify the exchange is the dominated convergence theorem, but this is a fairly advanced theorem of analysis. Having said that, if we assume $\mathbf{B}$ is continuously differentiable, then one can give a 'simple' proof, assuming one knows the basic $\epsilon,\delta$ definitions of limits.
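For a concrete instance of this interchange (the sample field and flat surface below are chosen by me purely for illustration), we can let sympy compute both sides exactly and confirm they agree:

```python
# Check (sympy) that d/dt of the flux equals the flux of dB/dt, for a
# sample smooth B_z(x, y, t) through a flat unit square in the z = 0 plane.
import sympy as sp

x, y, t = sp.symbols('x y t')
Bz = (x**2 + y) * sp.exp(-t)         # hypothetical smooth field component

# Integrate first over the unit square, then differentiate in t
b = sp.integrate(Bz, (x, 0, 1), (y, 0, 1))
lhs = sp.diff(b, t)

# Differentiate first, then integrate
rhs = sp.integrate(sp.diff(Bz, t), (x, 0, 1), (y, 0, 1))

assert sp.simplify(lhs - rhs) == 0
print("d/dt of flux equals flux of dB/dt")
```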
The first equation holds only for all closed loops, not for all contours. That's why you can't conclude $\vec{E}=0$. The only thing you can conclude is that $\vec{E}=-\operatorname{grad}(V)$ for some scalar function $V$ (the minus sign is just the physics convention).
If instead you had a vector function $F:U\subset \Bbb{R}^n\to\Bbb{R}^n$ (where $U\subset\Bbb{R}^n$ is open) such that for all (smooth enough) contours $\Gamma$ lying in $U$, $\int_{\Gamma}F\cdot dl=0$, then you can conclude that $F=0$ identically in $U$.
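To see the distinction in action, here is a small sympy sketch (the potential $V=x^2y$ is a made-up example) showing that a gradient field integrates to zero around a closed loop, while its integral along an open contour is generally nonzero:

```python
# Illustration (sympy): a gradient field has zero line integral around
# closed loops, but generally nonzero integral along open contours.
import sympy as sp

x, y, s = sp.symbols('x y s')
V = x**2 * y                         # arbitrary example potential
E = sp.Matrix([sp.diff(V, x), sp.diff(V, y)])

def line_integral(path, s0, s1):
    dpath = path.diff(s)
    integrand = E.subs({x: path[0], y: path[1]}).dot(dpath)
    return sp.integrate(integrand, (s, s0, s1))

# Closed loop (unit circle): integral is 0
circle = sp.Matrix([sp.cos(s), sp.sin(s)])
assert sp.simplify(line_integral(circle, 0, 2 * sp.pi)) == 0

# Open contour from (0,0) to (1,1): integral is V(1,1) - V(0,0) = 1, not 0
segment = sp.Matrix([s, s])
assert line_integral(segment, 0, 1) == 1
print("closed-loop integral 0; open-contour integral 1")
```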
Even if not directly obvious, all such statements are going to be minor modifications of the fundamental lemma of calculus of variations.
To understand this geometrically, imagine the following scenarios: