*tl;dr* – Yes: any function, including a chaotic one, can be decomposed into an infinite linear system; therefore infinite linear systems can be chaotic.

A trivial example would be
$$
f\left(x\right)
~=~
\sum_{i}{
a_i \, \delta\left(x-p_i\right)
}
\,,$$
where $\delta\left(x\right)$ is the Dirac delta function and there is one amplitude/position tuple, $\left<a_i,\,p_i\right>,$ for each discriminable position $x_i.$ In other words, just think of an infinite-resolution piecewise function.

Then $f\left(x\right)$ is non-chaotic only in the special case that $\require{cancel} \left| a_i - a_{i-1} \right| ~\cancel{\!\! \gg \!\!}~ \left| p_i - p_{i-1} \right| ~~ \forall i \,;$ in general, $f\left(x\right)$ is almost certainly chaotic. In other words, an infinite-resolution piecewise function is non-chaotic only if we select the values at neighboring points to differ by not much more than the distance between those points; since this is an extremely contrived special case, almost all infinite-resolution piecewise functions are chaotic.

Since this is a linear combination of trivial atoms (I selected the Dirac delta because it's easy to reproduce in most numeric systems, which helps keep this explanation general), yes: linear systems of infinitely many components can be chaotic.
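As a minimal numeric sketch of this decomposition (a finite grid stands in for the infinite-resolution delta comb; all names and the choice of $f$ here are hypothetical):

```python
import numpy as np

# Finite stand-in for sum_i a_i * delta(x - p_i): sample f on a grid of
# positions p_i with amplitudes a_i = f(p_i), then reconstruct f(x) by
# returning the amplitude of the nearest sampled position.
p = np.linspace(0.0, 2.0 * np.pi, 1001)   # positions p_i
a = np.sin(3.0 * p)                       # amplitudes a_i = f(p_i)

def f_piecewise(x):
    """Nearest-neighbour lookup: the finite analogue of the delta-comb sum."""
    return a[np.argmin(np.abs(p - x))]

# At the sampled positions the reconstruction is exact; between them the
# error shrinks as the grid is refined toward "infinite resolution".
print(f_piecewise(p[500]) == a[500])
```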

Notes:

Any function $f\left(x\right)$ can be decomposed into such an infinite-resolution piece-wise function.

Chaotic behavior is observer-subjective. This appears above, where we define chaoticity in terms of $\require{cancel} \cancel{\!\! \gg \!\!},$ which is a fuzzy qualifier.

A linear system is a physical system that responds to an external stimulus in a manner proportional to the amplitude of that stimulus.

Stated otherwise, linear-systems theory studies the class of systems whose behavior can be modeled by a linear function:

$$f(x) = k \cdot x.$$

Graphically, this means that if one plots how such a system $f$ responds to variations of a variable $x$, the resulting plot of $(x, f(x))$ is a straight line.

One example is a spring: the variable $x$ is the amount of deformation of the system, and $f(x)$ represents the corresponding increase in the force exerted by the spring as it is compressed by an amount $x$.

To put it yet another way, a system is said to be linear if the variation of its output is proportional to the corresponding variation of its input:

$$f(N \cdot x) = N \cdot f(x) \tag{homogeneity}$$
$$f(x+y) = f(x)+f(y)\tag{additivity}$$

properties known respectively as *homogeneity* and *additivity*. When a system displays both of these properties, it is considered a *linear* system, or, more formally, it is said to satisfy the *superposition principle*:
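These two properties can be checked numerically for the toy map $f(x) = k \cdot x$ (the constants below are arbitrary, chosen to be exactly representable in binary floating point):

```python
# Check homogeneity and additivity for the linear map f(x) = k*x.
k = 2.0

def f(x):
    return k * x

x, y, N = 3.0, 1.0, 4.0
print(f(N * x) == N * f(x))      # homogeneity:  True
print(f(x + y) == f(x) + f(y))   # additivity:   True
```

For general floating-point inputs an exact `==` comparison can fail due to rounding, so a tolerance-based check (e.g. `math.isclose`) is the safer habit.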

"The superposition principle, also known as superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually."

https://en.wikipedia.org/wiki/Superposition_principle

For instance, if you go on a journey and travel 10 km every day, then the total distance covered is a linear function of the amount of time which has elapsed.
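That example, as a trivial sketch:

```python
# Distance covered is a linear function of elapsed time: d(t) = 10 * t.
def distance_km(days):
    return 10.0 * days

print(distance_km(3))                          # 30.0
print(distance_km(6) == 2 * distance_km(3))    # doubling the time doubles the distance
```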

Why is that important?

A whole class of physical systems can be regarded as linear to a first approximation: pendulums, springs, etc., but also the propagation of waves in a medium.

This in turn is handy because linear systems translate to linear equations, which are simple to solve analytically. This means that if a system is linear, at least to a first-order approximation, one can solve analytically the equations which govern its evolution, and therefore one can tell a lot about a system if one knows it behaves linearly with respect to some variables.
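A quick illustration of how directly linear equations yield to solution (a hypothetical 2×2 system $A\mathbf{x} = \mathbf{b}$, solved with NumPy's direct solver, no iteration involved):

```python
import numpy as np

# A x = b has the closed-form solution x = A^{-1} b; np.linalg.solve
# computes it directly via LU factorization rather than iterating.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)
print(x)   # [2. 3.]  (check: 3*2 + 1*3 = 9 and 1*2 + 2*3 = 8)
```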

Examples of linear systems are:

- the response of a spring to stress;
- the oscillations of a pendulum;
- vibrations in an elastic medium (propagation of waves).

Historically, it is the realization that the oscillations of a pendulum depend solely on the length of said pendulum, and not on its weight, that allowed clockmakers to build reliable clocks with a rather simple technology. The same applied to oscillations of springs: this led to the conception of the first watches and chronometers, which in turn allowed a whole new era of sea travel, relying on the use of a sextant and a chronometer, to become possible.

Another reason why linear systems play an important role in physics is Taylor's theorem, which implies that, to a first approximation, the response of most systems to a sufficiently small change in their parameters is linear (whether it is the vibrations of a guitar string or the response of the stock market to a small perturbation).
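For instance, near $x = 0$ the first-order Taylor expansion of $\sin$ is simply $\sin(x) \approx \sin(0) + \cos(0)\,x = x$, and the quality of that linear approximation improves rapidly as the input shrinks (a small sketch):

```python
import math

# sin(x) ≈ x for small x: the response is linear to first order,
# with the relative error shrinking roughly like x**2.
for x in (0.1, 0.01, 0.001):
    rel_err = abs(math.sin(x) - x) / abs(math.sin(x))
    print(f"x={x}: relative error of linear approximation = {rel_err:.1e}")
```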

On the other hand, it is also interesting to study how non-linear systems behave. They are more complex to model and their equations are harder to handle; they do not always have straightforward analytical solutions, in which case they can only be studied by computer simulation and by experimentation. However, non-linear phenomena are at the root of all complex systems: from life itself to climatic feedback loops, from modeling the chaotic behavior of heartbeats or of the stock exchange to modeling economic systems or the weather.

Last but not least, non-linear phenomena can be very deceptive, because they are inherently more complex and go against our intuition; a famous example of that is the formula for compound interest.
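A quick sketch of why compound interest defies linear intuition (the principal and rate below are arbitrary):

```python
# At 5% annual interest, compounding makes growth non-linear in time:
# waiting twice as long earns more than twice the gain.
principal, rate = 1000.0, 0.05

def balance(years):
    return principal * (1.0 + rate) ** years

gain_10 = balance(10) - principal
gain_20 = balance(20) - principal
print(gain_20 > 2 * gain_10)   # True: growth is super-linear in time
```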

Beyond oscillators and waves, linear algebra also plays an important role in computer science (ML and AI).

## Best Answer

You can write the differential equation as a system of first-order differential equations,

$$ \frac{d}{dt} \textbf y = A\ \textbf y, \tag{1} $$

where,

$$ \textbf y = \begin{bmatrix} \vec{x} \\ \dot{\vec{x}} \end{bmatrix}, \tag{2} $$

and

$$ A = \begin{bmatrix} 0 & I\\ -M^{-1}K & -M^{-1}C \end{bmatrix}. \tag{3} $$

The damped frequencies of this system can be calculated from the eigenvalues of $A$.
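As a sketch of that last step for a single-DOF damped oscillator $m\ddot{x} + c\dot{x} + kx = 0$ (so the blocks $M$, $C$, $K$ collapse to the scalars $m$, $c$, $k$; the constants below are arbitrary):

```python
import numpy as np

# Single-DOF damped oscillator m*x'' + c*x' + k*x = 0 written as y' = A y,
# with y = [x, x'] and the block matrix from Eq. (3) reduced to scalars.
m, c, k = 1.0, 0.2, 4.0
A = np.array([[0.0,     1.0],
              [-k / m, -c / m]])

eigvals = np.linalg.eigvals(A)
omega_d = abs(eigvals.imag).max()   # damped frequency from the eigenvalues

# Analytic check for the underdamped case: omega_d = sqrt(k/m - (c/(2m))^2)
omega_d_exact = np.sqrt(k / m - (c / (2.0 * m)) ** 2)
print(abs(omega_d - omega_d_exact) < 1e-9)   # True
```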