With practice, one learns to recognize the sort of things that may go wrong with potential "vector spaces", and quickly zoom in on those. But, the thing is, it takes practice to figure this out.
Often, if one thing goes wrong, lots of things will go wrong; sometimes, it is one and only one thing that goes wrong (and it may be hard to spot). At this stage, it might actually be a good idea for you to check each axiom and see whether it is met or not met, because it will afford you a lot of practice. Even though it's enough to find one axiom that fails for something to not be a vector space, finding all the ways in which things go wrong is likely good practice at this stage.
For example, you don't say which problem "says the answer is Axiom 4", and in fact I see no problem, among the ones listed, in which $4x+1$ is even a vector! It's not a $4\times 6$ matrix, it's not a $1\times 1$ matrix, it's not a degree 3 polynomial, it's not a degree 5 polynomial, it's not a first degree polynomial whose graph passes through the origin, and it's not a quadratic function whose graph passes through the origin...
Since user6312 already got you started with Question 15, let's continue: you know it fails Axiom 1. It is not hard to verify that it satisfies Axioms 2 and 3. Axiom 4 fails because the zero vector (the polynomial $0$) is not in your set.
Axiom 5 is a bit tricky: strictly speaking, it does not even make sense if Axiom 4 fails, because there is no $\mathbf{0}$ in the set in the first place; I would certainly score "Axiom 5 fails" as a correct answer. On the other hand, if you have a polynomial of degree exactly 3, $ax^3+bx^2+cx+d$ with $a\neq 0$, then you can find a polynomial of degree exactly 3 that, added to it, gives you the zero polynomial (which is not in the set). So you might also say Axiom 5 is "sort of" satisfied.
Axiom 6 fails: for example, $x^3$ is in your set, $c=0$ is a scalar, but $0(x^3)$ is not in your set.
It's not hard to verify that Axioms 7, 8, 9, and 10 do hold.
So for 15, the axioms that fail are Axioms 1, 4, 6, and possibly 5 (depending how you interpret it).
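If you want to sanity-check two of these failures numerically, here is a small sketch (an illustration only, using NumPy's polynomial class to stand in for "polynomials of degree exactly 3"):

```python
import numpy as np

# Two polynomials of degree exactly 3, coefficients listed lowest-degree-first.
p = np.polynomial.Polynomial([0, 0, 0, 1])    # x^3
q = np.polynomial.Polynomial([0, 1, 0, -1])   # -x^3 + x

s = p + q                     # the cubic terms cancel, so the sum drops to degree 1
print(s.degree())             # 1 -> Axiom 1 (closure under addition) fails

z = 0 * p                     # scalar multiple by 0
print(np.all(z.coef == 0))    # True -> the zero polynomial, not degree 3, so Axiom 6 fails
```

Of course, a numerical check is no substitute for the general argument; it just makes the counterexamples concrete.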
You'll find similar problems with 16. There's a bit more to do with 17, because you also have the condition "and passes through the origin"; be sure to take that into account. Similar with 18. As for 13 and 14, I'll spill the beans and tell you that they are vector spaces: you should verify that all the axioms hold, one by one. Be sure to not verify them "by example": it's not enough to show that for particular $4\times 6$ matrices $\mathbf{u}$ and $\mathbf{v}$ you have $\mathbf{u}+\mathbf{v}=\mathbf{v}+\mathbf{u}$: you must verify it works for all possible choices of $\mathbf{u}$ and $\mathbf{v}$. If you find yourself saying "since, for example..." chances are you're doing it wrong.
You want to see whether the sets are subspaces of the given vector spaces.
The first necessary condition to check is whether the zero vector belongs to the set: if not, we're done because the set is not a subspace.
Note that this is not sufficient, so if the zero vector is in the set we need to do other checks.
The zero vector belongs neither to the set in number 1 nor to the set in number 4.
For numbers 2 and 3, it's easier if you recall that the null space of a linear map is a subspace; for 2 consider
$$
f\colon P_4[x]\to \mathbb{R},\qquad f(p)=p(0)+p(1)
$$
For 3 consider
$$
g\colon M_{3\times 3}\to M_{3\times 3},\qquad g(A)=A-A^t
$$
Are these maps linear?
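For reference, linearity can be checked directly from the definitions. For $f$, for instance,
$$
f(p+q)=(p+q)(0)+(p+q)(1)=\bigl(p(0)+p(1)\bigr)+\bigl(q(0)+q(1)\bigr)=f(p)+f(q),
$$
$$
f(cp)=cp(0)+cp(1)=c\bigl(p(0)+p(1)\bigr)=c\,f(p),
$$
and similarly $g(A+B)=(A+B)-(A+B)^t=(A-A^t)+(B-B^t)=g(A)+g(B)$ and $g(cA)=c\,g(A)$. Each set, being the null space of a linear map, is therefore a subspace.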
You have a lot more than two questions :)
Colorspaces are esoteric; you might find other insights on the photography or video Stack Exchange sites. Nevertheless, color science is one of my specialties, so I'll address some of these questions.
COLORSPACES - The Primal Frontier
I am going to assume you are referring to sRGB, the standard colorspace for computer monitors and the web. It is closely related to Rec709, the colorspace for HDTV. Rec709 and sRGB have identical red, green, and blue primaries and an identical white point, but they have different TRCs (transfer curves, sometimes referred to as gamma). Neither sRGB nor Rec709 is linear, as both are encoded with a piecewise TRC; the Rec709 TRC is roughly equivalent to a power function with a 1/2.0 exponent, and sRGB's is roughly 1/2.2.
The monitor/TVset has an inverse TRC when displaying the signal.
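As a concrete sketch of that piecewise TRC, here is the standard sRGB encode/decode pair (values normalized 0 to 1; the constants are those published in the sRGB spec, IEC 61966-2-1):

```python
def srgb_encode(linear: float) -> float:
    """Linear light (0-1) -> sRGB-encoded value (piecewise curve, ~1/2.2 overall)."""
    if linear <= 0.0031308:
        return 12.92 * linear                    # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """sRGB-encoded value -> linear light; the display applies this inverse."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(round(srgb_encode(0.18), 3))   # 0.461: 18% linear "mid grey" encodes near half scale
```

Note that the overall exponent of 2.4 combined with the linear toe is what makes the curve only *roughly* a 1/2.2 power function.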
sRGB is normally sent to the monitor in the form of 3 channels: Red, Green, and Blue. These are independent, but you can form a cube with them and use cartesian coordinates, though in practice that is not necessarily useful with sRGB.
Rec709 is commonly encoded as "4:2:2" and I'll explain that in a moment, but first:
Modeling Light and Perception
CIEXYZ 1931
XYZ is another colorspace, the "granddaddy" of colorspaces you might say. It also uses "red green blue" primaries; however, they are imaginary and do not exist in reality. XYZ is device independent: it does not relate to a monitor or a camera.
XYZ is based on a series of experiments in color perception carried out between 1924 and 1931. XYZ uses experimental data from a small group of people, mapping the gamut of human vision into a cartesian space.
Instead of being based on a device like a camera or printer or monitor, it is based around the aggregation of the experimental data, creating a "standard observer".
$Y$ is luminance, which is the light/dark of a color irrespective of hue or saturation. Luminance may be denoted $L$ when it is an absolute measure of light in cd/m²; luminance $Y$ is a normalized relative value from 0 to 1 (sometimes scaled 0 to 100).
Luminance is a linear measure of light, and it is not perceptually uniform relative to human visual perception of lightness/darkness/brightness. Luminance is, however, spectrally weighted based on our perception of different wavelengths of visible light. The lowercase $x$ and $y$ provide the coordinates of the chromaticity diagram, whose outer boundary is the spectral locus; together with $Y$ they form $xyY$:
(Figure: CIE 1931 chromaticity diagram, $xyY$, $Y$ not shown.)
Human perception is non-linear. Human vision between 8 cd/m² and 520 cd/m² follows a power curve with an exponent of roughly 0.42.
So, if modeling the behavior of light, then use luminance and linear math (to triple the quantity of light, multiply by 3, etc.). But if modeling the human perception of changes in light quantity, luminance needs to be transformed to a value that is linear to perception instead of linear to physical light.
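As a sketch of that transform, here is the standard CIE lightness function (the $L^*$ of CIELAB, discussed next), which maps linear relative luminance to a roughly perception-linear 0-100 scale:

```python
def lstar_from_y(y: float) -> float:
    """Relative luminance Y (0-1) -> CIE L* lightness (0-100)."""
    if y > 0.008856:                  # threshold is (6/29)**3, per the CIE definition
        return 116 * y ** (1 / 3) - 16
    return 903.3 * y                  # linear segment near black

# Doubling the light does not double perceived lightness:
print(round(lstar_from_y(0.18), 1))  # 49.5
print(round(lstar_from_y(0.36), 1))  # 66.5
```

The cube root here plays the same role as the ~0.42 power curve mentioned above: it compresses physical light into perceptual steps.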
CIELAB $L^*a^*b^*$
(Not to be confused with SeaLab...)
Another colorspace, CIELAB (1976), is derived from the 1931 XYZ, but $L^*a^*b^*$ is intended to be perceptually uniform, so that a perceptual change in lightness or color can be measured as the Euclidean distance from another color, and that distance is roughly the same for a given amount of perceived change, at least for small color differences. (It is not that accurate for some larger distances.)
The simple difference is: $$ \Delta E = \sqrt{\left( L_1^* - L_2^* \right)^2 + \left( a_1^* - a_2^* \right)^2 + \left( b_1^* - b_2^* \right)^2} $$
Here, $L^*$ is perceptual lightness, relative to the way we perceive light. The $a^*$ and $b^*$ are based on the opponent/unique colors of Red/Green ($+a^*$ is Magenta, $-a^*$ is Green) and Yellow/Blue ($+b^*$ is Yellow, $-b^*$ is Blue).
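That distance is straightforward to compute; a minimal sketch of the CIE76 $\Delta E$:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two L*a*b* triples."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two colors differing only along b* (the yellow-blue axis):
print(delta_e_76((50, 10, 20), (50, 10, 25)))   # 5.0, a clearly visible difference
```

Later formulas (CIE94, CIEDE2000) add weighting terms to patch the non-uniformity at larger distances, but the CIE76 form above is the one the equation describes.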
$L^*a^*b^*$ is based on the opponent color aspect of human vision. While you can measure the difference via the straight line between two points, the average of two colors does not necessarily lie on that line. This brings us to another space also from 1976, CIELUV.
CIELUV $L^*u^*v^*$
$L^*u^*v^*$ uses the same $L^*$ as LAB, but the $u^*v^*$ are based on coordinates of the 1976 UCS chromaticity diagram, which is a "slightly more uniform" projection of the 1931 diagram.
(Figure: CIE 1976 UCS diagram, $u'v'$.)
An advantage with $L^*u^*v^*$ is that the average of two lights lies on the line between the two points in space representing those lights, per Grassmann's laws of light additivity.
The color difference equation for LUV is the same as for LAB, though LAB is much better than LUV for surface colors; LUV is mainly useful for emitters of light.
Polar Colors
Both LAB and LUV have an extended version using polar coordinates, $L^*C^*h$, which is the same $L^*$ lightness, but with $C^*$ chroma (for a given chroma, colorfulness changes as lightness changes) and $h$ hue (as an angle).
LUV, but not LAB, also has $S_{uv}$, a correlate for saturation (for a given saturation, colorfulness is maintained relative to lightness). Saturation is a useful correlate for data visualization, where you'd want constant colorfulness while lightness changes.
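The polar conversion itself is just rectangular-to-polar coordinates; a sketch for LAB to LCh:

```python
import math

def lch_from_lab(L, a, b):
    """L*a*b* -> L*C*h: C* (chroma) is the radius, h (hue) is the angle in degrees."""
    c = math.hypot(a, b)                      # distance from the neutral axis
    h = math.degrees(math.atan2(b, a)) % 360  # hue angle, 0-360
    return L, c, h

L, C, h = lch_from_lab(50, 30, 40)
print(C, round(h, 2))   # 50.0 53.13
```

Lightness is untouched; only the $a^*b^*$ (or $u^*v^*$) plane is re-expressed as an angle and a radius.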
Newer spaces such as CIECAM02, CAM16, $J_za_zb_z$, ZCAM, and others have better uniformity and accuracy than both LAB and LUV, though LAB is still very much in use.
BACK TO THE 4:2:2
So most television uses various forms of encoding to maximize usable bandwidth. The linear $Y$ luminance is encoded with a gamma curve to create $Y'$ (Y prime), also known as luma. The chroma is sampled at half the rate and encoded relative to the $Y'$ luma, as CbCr.
How can we get away with a lower sampling for color information? As it happens our brain's visual processing looks at chroma at about a third the resolution of luminance. Fine details are carried in luminance.
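A hedged sketch of what 4:2:2 means per scanline (an illustration only: real codecs also filter the chroma before decimating it):

```python
def subsample_422(y_row, cb_row, cr_row):
    """Keep full-rate luma but only every second chroma sample (half horizontal rate)."""
    return y_row, cb_row[::2], cr_row[::2]

y  = [16, 32, 48, 64]          # four Y' (luma) samples
cb = [100, 110, 120, 130]      # four Cb samples before subsampling
cr = [90, 95, 100, 105]        # four Cr samples before subsampling
y2, cb2, cr2 = subsample_422(y, cb, cr)
print(len(y2), len(cb2), len(cr2))   # 4 2 2: four luma samples carry two chroma pairs
```

The "4:2:2" name reflects exactly this ratio of luma to chroma samples across a four-pixel group.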
Bound or Unbound
Now, clearly, if a colorspace is device-referred, such as sRGB, it is going to be bound by the limits of that device. But a colorspace does not have to be bound, and can even have imaginary primaries that cannot be created as real colors.
You can work with a linearized RGB, where there is no gamma (i.e. gamma is 1.0). We commonly work this way in Visual Effects, in a 32 bit floating point mode, instead of the gamma encoded 8 bit integer mode of sRGB. This way, while "max red" for the monitor might be 1.0, we can far exceed that with, say, red 12.0 — the value for VFX is that we can do a lot of additive image transforms, combining images in a natural way using linear math (because light in the real world is also linear).
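A small demonstration of why linear math matters when combining images (using the standard sRGB curve; an illustration, not production compositing code):

```python
def decode(s):   # sRGB-encoded value -> linear light
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def encode(l):   # linear light -> sRGB-encoded value
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

a = b = 0.5                               # two equal gamma-encoded values
correct = encode(decode(a) + decode(b))   # add the *light*, then re-encode
naive = a + b                             # adding encoded values overshoots badly
print(round(correct, 3), naive)           # 0.686 1.0
```

Summing the gamma-encoded values claims the result is at full scale; summing in linear light gives the physically correct answer, which is why VFX compositing works in a linear floating-point space.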
Why $Y$ vs $Y'$
The reason for the little trip down colorspace lane is to show why gamma-encoded RGB spaces are probably not ideal for vector math: Euclidean distances in them won't be uniform relative to either light or perception.
Linearized RGB and CIEXYZ would be vector spaces, but with values relative to light in the scene or real world, and thus not uniform relative to human perception.
Colorspaces like LAB, LUV, etc. are substantially more usable as a vector space if you want distances to be relative to perception, i.e. perceptually uniform. But even so, these attempts at uniformity do not take many of the factors of human visual perception into account.
There are more complex models, such as the Hunt model, CIECAM02, CAM16, iCAM, and others, that add features to predict visual perception more completely.
But I think an upshot here is that there is not a single, simple 3D vector space that accurately predicts human visual perception. It needs $n$ dimensions (depending on application) to take into account the many factors of our complex visual system, with $n$ large enough to make an all-encompassing lookup table impractical. Other relevant aspects of visual perception of light and color include stimulus size, spatial frequency, light adaptation, local adaptation, contrast adaptation, surround effects, the Helmholtz-Kohlrausch effect, etc.
A demonstration of this complexity is the following illusion: the yellow dots are the exact same yellow, and the squares they sit on are the same grey, in terms of absolute sRGB values. Yet they look different due to the very context-sensitive, psychophysical aspects of vision.
Your Second Question
As for what the values are per system: sRGB is the standard for most computers and the internet, but there are certainly others. Several spaces share the same blue and/or red primary, while green is the most likely to differ.
Here is an example (rescaled for sRGB for viewing).
I hope that answers your questions, let me know if you have followups.