Visualization: The hyperplane $H_m^n$ is parallel to one facet of the simplex, and a simplex is a cone over each of its facets (with apex at the opposite vertex); their intersection is therefore a scaled-down copy of that facet. The vertex of the intersection corresponding to $e_1$ is closer to $e_1$ than the remaining vertices, so a sphere of suitable radius centred at $e_1$ will have one vertex inside and the rest outside, and hence will meet the intersection in more than one point.
Computation: Let $e_i$ denote the standard basis vectors. The set $\triangle^{n-1}\cap H_m^n$ is convex and contains the points
$$ \{ me_i + (1-m)e_n : i=1,\dotsc,n-1 \} $$
(In fact it is exactly the convex hull of these points, but we won't need that fact.) Let $v_i = me_i + (1-m)e_n$. We have
$$ \|v_i-e_1\| = \begin{cases}
\sqrt2(1-m) &\text{if $i=1$,} \\
\sqrt2\sqrt{(1-m)^2+m} &\text{if $i\ne 1$.}
\end{cases} $$
So if we take $k\in[0,1]$ such that
$$ \sqrt2(1-m) < k < \sqrt2\sqrt{(1-m)^2+m} $$
(and $m\in(1-\frac1{\sqrt2},1]$ so that such a $k$ exists), then by continuity the sphere will meet $\triangle^{n-1}\cap H_m^n$ at points of the form $(1-\lambda_i)v_1+\lambda_iv_i$ for each $i=2,\dotsc,n-1$, for suitable $\lambda_i\in[0,1]$.
Unpacking that last part a bit as requested: Define $f\colon[0,1]\to\mathbb R$ by $f(\lambda) = \|(1-\lambda)v_1+\lambda v_i-e_1\|$. Then
$$ f(0) = \sqrt2(1-m) < k < \sqrt2\sqrt{(1-m)^2+m} = f(1) $$
By the intermediate value theorem, there exists $\lambda_i\in[0,1]$ such that $f(\lambda_i)=k$, which means $(1-\lambda_i)v_1+\lambda_i v_i\in S_k^n$.
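To make the intermediate-value argument concrete, here is a quick numerical sketch that finds a $\lambda_i$ by bisection. The values $n=4$, $m=\tfrac12$, $k=1$ are illustrative choices of mine, not part of the argument above:

```python
import math

n, m = 4, 0.5                       # illustrative; any m in (1 - 1/sqrt(2), 1) works
e1 = [1.0, 0.0, 0.0, 0.0]
v1 = [m, 0.0, 0.0, 1 - m]           # v_1 = m*e_1 + (1-m)*e_n
v2 = [0.0, m, 0.0, 1 - m]           # v_2 = m*e_2 + (1-m)*e_n

def f(lam):
    """Distance from e_1 to the point (1-lam)*v1 + lam*v2."""
    p = [(1 - lam) * a + lam * b for a, b in zip(v1, v2)]
    return math.sqrt(sum((pi - ei) ** 2 for pi, ei in zip(p, e1)))

lo_val = math.sqrt(2) * (1 - m)                      # f(0)
hi_val = math.sqrt(2) * math.sqrt((1 - m)**2 + m)    # f(1)
k = 1.0                                              # any k strictly between the two
assert lo_val < k < hi_val

# Bisection: f is continuous (and here monotone), so some lam in [0,1] has f(lam) = k.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < k:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(lam)  # ≈ 0.618034, the positive root of lam^2 + lam - 1 = 0 for these values
```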
When one searches the internet for "surface of a simplex", this site tends to come up first, but no complete answer is given here, so I will post the three solutions I came up with, based on rotation, reflection, and projection, respectively.
As already mentioned, the surface area of the simplex is the sum of the surface areas of its facets. The surface area of a facet is the volume of the simplex it represents in (N-1)-dimensional space. The formula for a simplex volume can be found in many places (e.g. https://en.wikipedia.org/wiki/Simplex#Volume) and is assumed known. The facets of a simplex are obtained simply by enumerating the sets of simplex vertices, dropping one vertex at a time.
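As a minimal sketch (the helper name is my own), enumerating the facets by dropping one vertex at a time looks like this:

```python
def facets(vertices):
    """All facets of a simplex: each facet keeps all vertices but one."""
    return [[v for j, v in enumerate(vertices) if j != i]
            for i in range(len(vertices))]

# Example: the standard 2-simplex in R^3 has three facets (its edges).
tri = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for f in facets(tri):
    print(f)
```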
The difficulty then lies in reducing the dimension of the facet vertex coordinates by one, so we can apply the (N-1)D volume formula. I came up with three options. All of them rely on having the normal vector to the hyperplane the facet lies in. Note that by definition the facet always lies in a hyperplane (e.g. 3 points in 3D space). Getting the normal vector to the hyperplane these vertices define is discussed elsewhere and need not be repeated here (see "generalized cross product", which involves calculating N determinants of N (N-1)x(N-1) matrices, i.e. one det() per coordinate axis). I will also assume the normal vector is normalized (length 1).
The first option for dropping a dimension is to find the rotation that maps the normal vector onto one of the coordinate axes. Applying that rotation to the facet coordinates rotates the facet into a hyperplane perpendicular to the chosen axis, i.e. all facet vertices end up with the same coordinate value along that axis, so we can drop that dimension. The difficulty is in finding the rotation needed to bring the normal vector onto the coordinate axis: N-dimensional rotations are hard to construct, and typically require first increasing the number of dimensions by one. I chose not to implement this.
A second option is to reflect the facet across a well-chosen mirror hyperplane so that it maps onto a hyperplane perpendicular to a particular coordinate axis; for implementation simplicity I typically choose the last axis. Given the normal and the coordinate axis unit vector, finding the reflection hyperplane is easy: its normal is essentially just the (normalized) difference of these two vectors. From this reflection normal one can construct a reflection matrix, a so-called Householder matrix; details can be found on the Wikipedia page on the Householder transformation (https://en.wikipedia.org/wiki/Householder_transformation). Using this matrix, one can reflect the facet onto a hyperplane perpendicular to the chosen coordinate axis, allowing us to drop that coordinate and reduce to an (N-1)-dimensional space. It is easy to implement with some matrix math. I had one difficulty with my own implementation: when I swap vertices, the sign of the surface area should invert; this did not happen for me, so I probably missed a $\pm1$ factor somewhere. In the end, I do not use this method in production either.
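A minimal pure-Python sketch of the reflection step (the function name is my own; the Householder reflection is applied vector-by-vector instead of as an explicit matrix):

```python
import math

def householder_flatten(vertices, normal):
    """Reflect the facet so its (unit) normal maps onto the last coordinate
    axis, then drop the now-constant last coordinate."""
    dim = len(normal)
    axis = [0.0] * dim
    # Pick the opposite sign, so u = normal - axis never nearly cancels.
    axis[-1] = -1.0 if normal[-1] >= 0 else 1.0
    u = [a - b for a, b in zip(normal, axis)]
    norm_u = math.sqrt(sum(x * x for x in u))
    u = [x / norm_u for x in u]
    def reflect(v):                       # Householder: v - 2*(u.v)*u
        d = sum(a * b for a, b in zip(u, v))
        return [a - 2 * d * b for a, b in zip(v, u)]
    return [reflect(v)[:-1] for v in vertices]

# Example: the facet {(1,0,0), (0,1,0), (0,0,1)} with unit normal (1,1,1)/sqrt(3).
s = 1 / math.sqrt(3)
flat = householder_flatten(
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [s, s, s])
# Reflections preserve distances, so the 2D shoelace area equals the true
# facet area sqrt(3)/2.
(x0, y0), (x1, y1), (x2, y2) = flat
area = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2
print(area)  # ≈ 0.866025
```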
The third and simplest option I could come up with is to project the facet onto one of the coordinate hyperplanes. This requires no matrix math other than calculating the normal vector of the facet and the volume of the (N-1)-simplex. First one chooses which coordinate hyperplane to project onto. The normal vector is "most (anti)parallel" to one of the coordinate axes: the axis along which its component is largest in absolute value. Since a unit normal in N dimensions has a largest component of at least $1/\sqrt N$ in absolute value, the angle to that axis is at most $\arccos(1/\sqrt N)$ (which is $\pi/4$ for N = 2). Hence, choosing the coordinate hyperplane perpendicular to this axis as the plane of projection gives the smallest projection distortion of all coordinate hyperplanes, which reduces round-off errors and avoids catastrophic edge cases (e.g. projection onto a line). The actual projection onto that hyperplane is easy: simply drop that coordinate from all facet vertex coordinate vectors. Obviously, unless the original normal is exactly (anti)parallel to the chosen axis, the projected facet is distorted; more specifically, it is scaled down along the direction of the projected normal by a factor that depends on the angle between the original normal and the projected normal. Calculate the volume of the (N-1)D simplex; this gives the surface area of the (scaled) projected facet. Then we scale the area back up to the size of the original facet. The length of the projected normal (i.e. the length of the normal vector with the chosen axis coordinate dropped) divided by the length of the original normal (1 here, as we use a normalized normal) is the sine of the angle between the normal vector and the projected normal vector; the cosine of that angle gives the distortion of the facet along the direction of the projected normal. Hence the projected facet scaling is cos(arcsin(length_projected_normal)), always a value between $1/\sqrt N$ and 1.
We scale the projected surface area back up by dividing the calculated (scaled) surface area by this number. Our choice of projection hyperplane ensures the round-off errors are minimal, i.e. it avoids projecting the facet at a very oblique angle, or even onto a single line. This third method is my preferred one, as it is easy to implement and computationally the lightest.
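Putting the projection method together, a minimal pure-Python sketch (function names and the Gaussian-elimination determinant are my own choices; a real implementation would use a linear-algebra library):

```python
import math

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def facet_area(vertices):
    """Area of an (N-1)-facet in R^N by the projection method."""
    n = len(vertices[0])
    edges = [[b - a for a, b in zip(vertices[0], v)] for v in vertices[1:]]
    # Generalized cross product: component k is (-1)^k times the minor
    # obtained by dropping column k.
    normal = [(-1) ** k * det([row[:k] + row[k + 1:] for row in edges])
              for k in range(n)]
    length = math.sqrt(sum(c * c for c in normal))
    normal = [c / length for c in normal]
    k = max(range(n), key=lambda i: abs(normal[i]))   # most (anti)parallel axis
    projected = [v[:k] + v[k + 1:] for v in vertices]  # drop that coordinate
    pedges = [[b - a for a, b in zip(projected[0], v)] for v in projected[1:]]
    proj_area = abs(det(pedges)) / math.factorial(n - 1)
    return proj_area / abs(normal[k])   # undo the cos(angle) distortion

# Example: facet {(1,0,0), (0,1,0), (0,0,1)} has area sqrt(3)/2 ≈ 0.8660.
print(facet_area([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]))
```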
Best Answer
Let $\Delta^n\subseteq \mathbb{R}^{n+1}$ be the standard $n$-simplex: $$\Delta^n:=\big\{(x_0,x_1,x_2,\ldots,x_n)\in\mathbb{R}_{\geq 0}^{n+1}\,\big|\,x_0+x_1+x_2+\ldots+x_n=1\big\}.$$ An arbitrary function $f:\Delta^n\to \mathbb{R}$ can be considered as a function in $n$ free variables. That is, there exists a unique function $F:\Sigma^n\to \mathbb{R}$ such that $$f(x_0,x_1,x_2,\ldots,x_n)=F(x_1,x_2,\ldots,x_n)$$ for all $(x_0,x_1,x_2,\ldots,x_n)\in\Delta^n$, where $$\Sigma^n:=\big\{(x_1,x_2,\ldots,x_n)\in\mathbb{R}_{\geq 0}^n\,\big|\,x_1+x_2+\ldots+x_n\leq1\big\}\subseteq \mathbb{R}^n\,.$$ In other words, $F$ is given by $$F(x_1,x_2,\ldots,x_n):=f\left(1-x_1-x_2-\ldots-x_n,x_1,x_2,\ldots,x_n\right)\,,$$ for all $(x_1,x_2,\ldots,x_n)\in\Sigma^n$. Now, if $\nu_n$ is the volume measure on $\Delta^n$ and $\lambda_n$ is the Lebesgue measure on $\mathbb{R}^n$, then $$\int_{\Delta^n}\,f\,\text{d}\nu_n=\sqrt{n+1}\,\int_{\Sigma^n}\,F\,\text{d}\lambda_n\,.$$ This is because $$\text{d}\nu_n(x_0,x_1,x_2,\ldots,x_n)=\sqrt{n+1}\,\text{d}\lambda_n(x_1,x_2,\ldots,x_n)$$ for all $(x_0,x_1,x_2,\ldots,x_n)\in\Delta^n$ (this can be proven using the Jacobian determinant). In particular, if $f\equiv 1$ so that $F\equiv 1$, then we get $$\text{vol}_n\left(\Delta^n\right)=\sqrt{n+1}\,\text{vol}_n\left(\Sigma^n\right)=\frac{\sqrt{n+1}}{n!}\,,$$ where $\text{vol}_n$ is the $n$-dimensional volume.
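A quick numerical sanity check of the volume formula (Monte Carlo with a fixed seed; $n=3$ and the sample count are arbitrary choices of mine):

```python
import math
import random

# Estimate vol(Sigma^n) = 1/n! by sampling the unit cube and counting points
# with coordinate sum at most 1; then vol(Delta^n) = sqrt(n+1) * vol(Sigma^n).
random.seed(0)
n, samples = 3, 200_000
hits = sum(1 for _ in range(samples)
           if sum(random.random() for _ in range(n)) <= 1.0)
est = hits / samples
print(est)                       # ≈ 1/3! ≈ 0.1667
print(math.sqrt(n + 1) * est)    # ≈ vol(Delta^3) = sqrt(4)/3! ≈ 0.3333
```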
To clarify some points, first note that there exists an isometry from the affine hyperplane $$H^n:=\big\{(x_0,x_1,x_2,\ldots,x_n)\in\mathbb{R}^{n+1}\,\big|\,x_0+x_1+x_2+\ldots+x_n=1\big\}$$ to $\mathbb{R}^n$. We can take the isometry to be the unique affine map $\varphi:H^n\to\mathbb{R}^{n}$ which sends $e_0,e_1,e_2,\ldots,e_n\in\mathbb{R}^{n+1}$ to $$0,E_1+\alpha_n\, E,E_2+\alpha_n\, E,\ldots,E_n+\alpha_n\, E\in\mathbb{R}^n\,,$$ respectively, where $e_0,e_1,e_2,\ldots,e_n$ are standard basis vectors of $\mathbb{R}^{n+1}$, $E_1,E_2,\ldots,E_n$ are standard basis vectors of $\mathbb{R}^n$, $E:=E_1+E_2+\ldots+E_n$, and $$\alpha_n:=\frac{\sqrt{n+1}-1}{n}\,.$$ Write $E_0:=-\alpha_n\,E$. Let $T:\mathbb{R}^n\to\mathbb{R}^n$ be the unique linear transformation that sends $$E_1,E_2,\ldots,E_n\text{ to }E_1-E_0,E_2-E_0,\ldots,E_n-E_0\,,$$ respectively. One can check (e.g. via the rank-one determinant identity) that $$\det(T)=1+n\,\alpha_n=\sqrt{n+1}\,.$$ The volume measure $\nu_n$ on $\Delta^n$ is inherited from the volume measure on $H^n$, which is the pullback $\varphi^*\lambda_n$ of $\lambda_n$. However, since $T$ maps the extreme points $0,E_1,E_2,\ldots,E_n$ of $\Sigma^n$ to the extreme points $0,E_1-E_0,E_2-E_0,\ldots,E_n-E_0$ of $\varphi(\Delta^n)$, it is simpler to take the integral on $\Sigma^n$, using the Change-of-Variables Theorem. That is, $$\begin{align}\text{d}\nu_n(x_0,x_1,\ldots,x_n)&=\text{d}(\varphi^*\lambda_n)\left(x_0,x_1,\ldots,x_n\right)\\&=\text{d}\lambda_n\big(\varphi\left(x_0,x_1,\ldots,x_n\right)\big)\\ &=\text{d}\lambda_n\left(x_1+\alpha_n\,\sum_{i=1}^n\,x_i,x_2+\alpha_n\,\sum_{i=1}^n\,x_i,\ldots,x_n+\alpha_n\,\sum_{i=1}^n\,x_i\right) \\&=\text{d}\lambda_n\big(T(x_1,x_2,\ldots,x_n)\big)\\&=\text{d}(T^*\lambda_n)(x_1,x_2,\ldots,x_n)\\&=\det(T)\,\text{d}\lambda_n(x_1,x_2,\ldots,x_n)\\&=\sqrt{n+1}\,\text{d}\lambda_n(x_1,x_2,\ldots,x_n)\end{align}$$ for all $(x_0,x_1,\ldots,x_n)\in\Delta^n$.
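The claimed properties of $\varphi$ and $T$ can be checked numerically. The following sketch verifies, for several $n$, that the images of $e_0,e_1,\ldots,e_n$ under $\varphi$ are pairwise at distance $\sqrt2$ (as the $e_i$ are in $\mathbb{R}^{n+1}$, so $\varphi$ is an isometry on these points) and that the scalar identity $1+n\,\alpha_n=\sqrt{n+1}$ holds:

```python
import math

for n in range(1, 8):
    alpha = (math.sqrt(n + 1) - 1) / n
    # Images of e_0, e_1, ..., e_n under phi: 0 and E_i + alpha*E.
    images = [[0.0] * n] + [
        [(1.0 if j == i else 0.0) + alpha for j in range(n)] for i in range(n)
    ]
    # All pairwise distances must equal sqrt(2).
    for a in range(n + 1):
        for b in range(a + 1, n + 1):
            d = math.sqrt(sum((x - y) ** 2
                              for x, y in zip(images[a], images[b])))
            assert abs(d - math.sqrt(2)) < 1e-9
    # det(T) = 1 + n*alpha for T = I + alpha*J (rank-one update of the identity).
    assert abs(1 + n * alpha - math.sqrt(n + 1)) < 1e-9
print("all checks passed")
```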