Ok, if you are interested in slice/coslice categories, then there are two obviously interesting examples:
It's pretty trivial that the category $\mathbf{Top}_\ast$ of pointed topological spaces is just $(\bullet\downarrow\mathbf{Top})$, where $\bullet$ is the one-point space.
As another example, let's find out what $(R\downarrow\mathbf{CRing})$ looks like when $R$ is some commutative unital ring. We see that, almost by definition, we can think of the objects of $(R\downarrow\mathbf{CRing})$ as being commutative unital rings $A$ with a distinguished ring homomorphism $f:R\to A$. Ok, there's no more "simplification" that can be done there. So, what do the arrows in this comma category look like? Well, if we have two objects $f:R\to A$ and $g:R\to B$, we see that an arrow between them is an arrow $h:A\to B$ (of course, technically it's a pair of arrows, but when the left entry of $(-\downarrow-)$ is a single object the first component is always the identity arrow, and so unimportant) such that $g=h\circ f$. But this is precisely the definition of a morphism of commutative algebras over $R$. In other words, we have commutative unital rings $A$ with specified $\mathbf{CRing}$-arrows $R\to A$, and a morphism between two such objects $(R,A,f)$ and $(R,B,g)$ is just a $\mathbf{CRing}$-arrow $A\to B$ which respects $f$ and $g$. Thus, we see that $\left(R\downarrow\mathbf{CRing}\right)\cong R\text{-}\mathbf{CAlg}$.
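As a finite sanity check (not part of the categorical argument), we can model a coslice object as a structure map and verify the commutativity condition $g = h\circ f$ on elements. The choices $R = \mathbb{Z}$, $A = \mathbb{Z}/4$, $B = \mathbb{Z}/2$ below are just illustrative toy data:

```python
# Hedged sketch: objects of (R ↓ CRing) as structure maps, arrows as ring
# maps h satisfying g = h ∘ f.  Toy data: R = Z, A = Z/4, B = Z/2.

def f(x):          # structure map Z -> Z/4
    return x % 4

def g(x):          # structure map Z -> Z/2
    return x % 2

def h(a):          # candidate arrow Z/4 -> Z/2 (reduction mod 2)
    return a % 2

# h is an arrow (Z, A, f) -> (Z, B, g) in the coslice iff g = h ∘ f,
# i.e. iff h is a map of Z-algebras:
assert all(g(x) == h(f(x)) for x in range(-10, 10))
```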
There are (at least) two different sorts of things you can do, roughly corresponding to syntax and semantics.
You can study the internal logic of a category, which lets you use a logical language as a convenient tool for doing calculations in a category. e.g. objects are types, propositions are subobjects, et cetera.
Conversely, given a suitable language, we can construct its syntactic category, consisting of all the types you can construct in the language and the functions you can define between them, in which all of the relations you can prove hold between them are satisfied. And thus we can use category theory to help us think about type theories.
On the semantic side we can think of a category $T$ (possibly with extra structure; e.g. a sketch) as being a theory. Then models of this theory in a category $C$ will be functors $F : T \to C$ of a certain type.
Of course, these aren't unrelated; if we have the theory of a group, we might then construct $T$ to be its syntactic category, and require the functors defining models of $T$ to be of the sort that preserves the logic used.
Conversely, if we're in the habit of studying functors out of some category $T$, it may be worth considering the theory given by the internal language of $T$ and viewing those functors as models of that theory.
A common instance of this construction comes from universal algebra (e.g. the theory of groups, given as operations and equations). If we have some variety of universal algebra, we can take $T$ to be the opposite of the category of finitely presented algebras, and then models will be functors out of $T$ that preserve finite limits.
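To make "operations and equations" concrete, here is a minimal sketch (names and the choice of carrier are my own, nothing standard) checking that a candidate carrier with operations really is a model of the theory of groups, by verifying the defining equations pointwise on a finite carrier:

```python
from itertools import product

# Toy model of the theory of groups: carrier Z/3 with addition mod 3.
# The "theory" is the operations (·, e, inv) plus the group equations.
carrier = range(3)
mul = lambda a, b: (a + b) % 3
e = 0
inv = lambda a: (-a) % 3

# The defining equations of the theory, checked on all elements:
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))      # associativity
           for a, b, c in product(carrier, repeat=3))
assert all(mul(e, a) == a == mul(a, e) for a in carrier)  # unit laws
assert all(mul(a, inv(a)) == e for a in carrier)          # inverses
```

A model in $\mathbf{Set}$ is exactly such a carrier-plus-operations package satisfying the equations; the functorial description packages the same data without choosing a presentation.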
One thing to mention is that the individual slice categories can be assembled into one structure, the arrow category $\mathbf{Set}^\to$. Every object of $\mathbf{Set}/I$ is an object of $\mathbf{Set}^\to$, and every map in $\mathbf{Set}/I$ gives a map in $\mathbf{Set}^\to$: maps in the arrow category are commutative squares, and a map in $\mathbf{Set}/I$ forms a commutative square when paired with the identity function on $I$. But the arrow category also has maps that mediate between objects living in different slice categories, because a square can pair a reindexing map from $I$ to $J$ with a map between the indexed families that respects the reindexing.
This 'whole structure' view forms part of the "codomain fibration", where we have a functor $\mathbf{Set}^\to \to \mathbf{Set}$ sending each arrow to its codomain, or each indexed family to its indexing set. Then the $f^*$ and $f_!$ functors can be talked about with respect to this structure, and it forms a more general setting for modelling (among other things) dependent types. You can consider fibrations that aren't particular to $\mathbf{Set}$ for instance, which gives one way of making sense of dependent types in more general categories.
The answer to your first question is that there is no specific encoding, I think. There are multiple ways of thinking about each type, and they are useful for thinking about different aspects of those types. For instance, your first example is vectors indexed by their size. In type theory notation, this might be:
$$n : ℕ ⊢ \mathsf{Vec} \; n \;\; \mathsf{type}$$
(I'm assuming there's some particular type, say integers, for the elements of the list). One way to think about this is to just take this to be the map:
$$length : List → ℕ$$
where $List$ is the set of lists of integers, which makes up the 'total space' representing the values of our type family $\mathsf{Vec}\; n$. However, we can also think of things in an indexed way, where we have something like:
$$Vec : ℕ → \mathcal{P}(List)$$
where $Vec$ is a function that maps natural numbers to the subset of lists of integers with that length. This is how we often think of things internally to type theory. Now, given this function, we can reconstruct the total space like so:
$$\int Vec = \{(n:ℕ, l) \mid l \in Vec(n)\} $$
and there is a map:
$$ π_1 : \int Vec \to ℕ$$
which just projects out the first part of the pair.
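Restricting to lists over a two-element alphabet with bounded length (an artificial cutoff just to keep the sets finite), the $\int Vec$ construction and its projection can be written out directly:

```python
from itertools import product

# Finite approximation: lists with entries in {0,1} and length <= 3.
lists = [list(t) for n in range(4) for t in product([0, 1], repeat=n)]

Vec = lambda n: [l for l in lists if len(l) == n]   # the indexed view

# Total space: ∫Vec = {(n, l) | l ∈ Vec(n)}, with the first projection.
total = [(n, l) for n in range(4) for l in Vec(n)]
pi1 = lambda pair: pair[0]

# The projection recovers the length, so (∫Vec, π₁) is (lists, length).
assert all(pi1((n, l)) == len(l) for (n, l) in total)
```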
So, the reason to think in these two different ways is that they each make one of the $f$ operations kind of trivial, and the other more complicated. In the first case, $f_!$ is just post-composition, changing the reported index of the values of the total space. The total space itself doesn't change at all. However, $f^*$ is pullback, which is kind of non-obvious. It produces a new total space with a mapping to $J$ that forms a pullback square with $List$, $length$ and $f : J \to ℕ$.
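In the same finite setting, the pullback can be computed explicitly: the new total space over $J$ consists of pairs $(j, l)$ with $f(j) = length(l)$. The index set $J$ and the map $f$ below are arbitrary illustrative choices:

```python
from itertools import product

# Total-space view: the family is (lists, length), lists bounded for finiteness.
lists = [list(t) for n in range(4) for t in product([0, 1], repeat=n)]
length = len

# A reindexing map f : J -> N (toy choices).
J = ['short', 'long']
f = {'short': 1, 'long': 3}

# f* is pullback: pair each j with the lists sitting over f(j).
pulled = [(j, l) for j in J for l in lists if f[j] == length(l)]

# Each fiber over j is exactly Vec(f(j)):
assert [l for (j, l) in pulled if j == 'short'] == [l for l in lists if len(l) == 1]
```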
In the other case, $f^*$ seems more trivial. It is just composing $f$ with $Vec$ to create a new mapping function. On the other hand, $f_!$ is complicated, and similar to the definition of the total space. If we have $f : ℕ \to J$, then:
$$(f_!Vec)\, j = \{ l \mid \exists n \in ℕ,\ f(n) = j,\ l \in Vec(n)\}$$
Note that for $! : ℕ \to 1$, $!_! = \int$.
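In the indexed view this is just a union over the fiber of $f$; a minimal sketch with a toy $f$ collapsing lengths into even/odd (my own illustrative choice):

```python
from itertools import product

lists = [list(t) for n in range(4) for t in product([0, 1], repeat=n)]
Vec = lambda n: [l for l in lists if len(l) == n]

# A toy f : {0,..,3} -> J collapsing lengths into parities.
J = ['even', 'odd']
f = lambda n: 'even' if n % 2 == 0 else 'odd'

# (f_! Vec) j = { l | f(n) = j, l ∈ Vec(n) }: union over the fiber of f.
f_shriek = {j: [l for n in range(4) if f(n) == j for l in Vec(n)] for j in J}

# For the unique map ! : N -> 1 this union is the whole total space,
# matching !_! = ∫ up to forgetting the index.
assert sorted(map(tuple, f_shriek['even'] + f_shriek['odd'])) \
       == sorted(map(tuple, lists))
```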
This latter view is fairly similar to how things are presented in type theory, though, so it's a pretty good guide for how things map from the categorical/fibered presentation to type theory syntax. An object in a slice category $\mathbf{Set}/I$ is a type in a context:
$$i : I ⊢ T \; \mathsf{type}$$
"Change of base" corresponds to 'precomposition' with a family, which involves substitution in the notation I'm using:
$$\frac{i : I ⊢ T \; \mathsf{type}}{j : J ⊢ T[i := f(j)] \mathsf{type}}$$
The $f_!$ operation corresponds to the $Σ$ type, although the categorical version is generalized in an inconvenient way. The simple correspondence to type theory is to consider maps like $fst : Γ × J \to Γ$. Then $fst_! : \mathbf{Set}/(Γ×J) \to \mathbf{Set}/Γ$ is written as:
$$\frac{Γ, j : J ⊢ T \; \mathsf{type}}{Γ ⊢ \Sigma_{j:J} T \; \mathsf{type}}$$
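The $fst_!$ reading of $\Sigma$ can be sketched on finite sets: a family $T$ indexed over $Γ × J$ gets summed over $J$ to give a family over $Γ$ alone, with each element tagged by the $j$ it came from. The sets $Γ$, $J$, and the values of $T$ below are arbitrary toy data:

```python
# Sketch of fst_! as Σ: a family T over Γ × J, summed over J.
Gamma = ['g0', 'g1']
J = [0, 1]

# T as an indexed family over Γ × J (values chosen arbitrarily).
T = {('g0', 0): ['a'], ('g0', 1): ['b', 'c'],
     ('g1', 0): [],    ('g1', 1): ['d']}

# Σ_{j:J} T, indexed over Γ alone: pair each element with its j.
Sigma = {g: [(j, t) for j in J for t in T[(g, j)]] for g in Gamma}

assert Sigma['g0'] == [(0, 'a'), (1, 'b'), (1, 'c')]
assert Sigma['g1'] == [(1, 'd')]
```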
Making sense of the categorical version involves constructing inverse images, which is sort of the inverse of the $\int$ construction, although it can be done.
I think the last thing to cover is the $a \leq b$ example, but that admits similar ways of thinking. The only 'complicated' part is thinking about the 'proofs'. The $Vec$-like view says that the proofs $p$ belong to a family ${\leq} : ℕ × ℕ \to \mathcal{P}(1)$, where the fiber over $(m, n)$ is empty if $m$ is greater than $n$ and a singleton otherwise (that description is potentially overly simple in general). The opposite view is that the total space of the family is the subset ${\leq}\ \subset ℕ × ℕ$ of pairs whose first element is less than or equal to the second, and the object of $\mathbf{Set}/(ℕ×ℕ)$ is just the inclusion map. Then the 'triples' are obtained by composing this with the terminal map $! : ℕ×ℕ \to 1$ (or doing it in two steps if you prefer).
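Both views of the $\leq$ family can be checked against each other on a finite fragment of $ℕ × ℕ$ (the bound 4 below is an arbitrary cutoff):

```python
# The ≤ family both ways, over {0,..,3} × {0,..,3}.
N = range(4)

# Indexed view: each fiber is empty or a singleton ("proof irrelevance").
leq = lambda m, n: [()] if m <= n else []

# Total-space view: the subset of N × N, with the inclusion as the map.
total = [(m, n) for m in N for n in N if m <= n]

# The two views agree: (m, n) is in the subset iff its fiber is inhabited.
assert all(((m, n) in total) == (len(leq(m, n)) == 1) for m in N for n in N)
```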
Hopefully this was clear enough. I can try to fix it if something isn't.