The short answer is to fit an AR(1) model & check its residuals. If what you're left with after that is pretty much white noise, you might well be safe to assume the data are AR(1) - if that's a reasonable model a priori, & depending on what you want to do with them.
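To make the "fit & check" step concrete, here's a minimal pure-Python sketch. Since the original series wasn't given, a simulated AR(1) stands in for the data; the fit is ordinary least squares on lag-1 pairs, and the white-noise check is whether the residual ACF stays inside the rough ±2/√n band:

```python
import random
import math

random.seed(42)

# Simulated AR(1) series x_t = phi * x_{t-1} + e_t (stand-in for your data)
phi_true = 0.7
x = [0.0]
for _ in range(199):
    x.append(phi_true * x[-1] + random.gauss(0, 1))

# Fit AR(1) by least squares on (x_{t-1}, x_t) pairs
xc = [v - sum(x) / len(x) for v in x]          # demean first
num = sum(xc[t - 1] * xc[t] for t in range(1, len(xc)))
den = sum(v * v for v in xc[:-1])
phi_hat = num / den

# Residuals; if AR(1) is adequate, these should look like white noise
resid = [xc[t] - phi_hat * xc[t - 1] for t in range(1, len(xc))]

def acf(series, lag):
    """Sample autocorrelation at a given lag."""
    m = sum(series) / len(series)
    c0 = sum((v - m) ** 2 for v in series)
    ck = sum((series[t] - m) * (series[t + lag] - m)
             for t in range(len(series) - lag))
    return ck / c0

# Rough white-noise check: residual ACF should sit inside +/- 2/sqrt(n)
band = 2 / math.sqrt(len(resid))
print(f"phi_hat = {phi_hat:.2f}, band = +/-{band:.2f}")
for k in range(1, 6):
    print(f"lag {k}: acf = {acf(resid, k):+.2f}")
```

In practice you'd use a library routine (e.g. `arima` in R or `statsmodels` in Python) rather than hand-rolling this, but the logic is the same: fit, then look at the residual correlogram.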
The ACF & PACF suggest, however, that there's perhaps more structure there than a simple AR(1) model would capture. You shouldn't necessarily be bothered about the fourth lag in the PACF being just over the 5% significance level (assuming that's what the blue line is - you didn't say) - there's no correction for multiple testing, so over 20-odd lags you'd expect about one such excursion by chance. But the wavy ACF could indicate you need either to difference or to put in at least an extra AR term. Given how slowly the ACF is decaying, most likely the former.
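The effect of differencing is easy to see in a small sketch. Here a simulated near-unit-root series stands in for a slowly-decaying ACF (again, the actual data weren't given): the levels are heavily autocorrelated at lag 1, while the first differences are not.

```python
import random

random.seed(1)

# A near-unit-root AR(1): its sample ACF decays very slowly
y = [0.0]
for _ in range(299):
    y.append(0.98 * y[-1] + random.gauss(0, 1))

# First difference: d_t = y_t - y_{t-1}
d = [y[t] - y[t - 1] for t in range(1, len(y))]

def acf1(s):
    """Sample lag-1 autocorrelation."""
    m = sum(s) / len(s)
    c0 = sum((v - m) ** 2 for v in s)
    return sum((s[t] - m) * (s[t + 1] - m) for t in range(len(s) - 1)) / c0

print(f"lag-1 ACF of levels:      {acf1(y):.2f}")   # close to 1
print(f"lag-1 ACF of differences: {acf1(d):.2f}")   # close to 0
```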
AIC is helpful, but if you're using it in an automatic fashion, you'll often find a number of models with not much difference in AIC (a difference of less than 2 is often taken as equivalent to "just as good").
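As a quick illustration of the "difference of less than 2" rule of thumb, here's the arithmetic with invented (purely illustrative) log-likelihoods for an AR(1) and an AR(2):

```python
def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2*log L (lower is better)."""
    return 2 * n_params - 2 * log_lik

# Hypothetical fitted log-likelihoods (made up for illustration)
aic_ar1 = aic(log_lik=-150.3, n_params=2)   # phi_1 + noise variance
aic_ar2 = aic(log_lik=-149.9, n_params=3)   # phi_1, phi_2 + noise variance

delta = abs(aic_ar1 - aic_ar2)
print(f"AIC(AR1) = {aic_ar1:.1f}, AIC(AR2) = {aic_ar2:.1f}, delta = {delta:.1f}")
# delta < 2: treat the two models as about equally well supported,
# so you'd normally keep the simpler AR(1)
```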
In response to the comments:
(1) Is the series stationary or not? It's hard to tell for a short, highly autocorrelated series like this. Unit root tests (KPSS & augmented Dickey-Fuller) might help (but in my experience they rarely tell you anything that isn't obvious from the correlograms & the time series plot itself). A random walk & an AR(1) model with a high AR parameter can both look plausible & pass any diagnostic tests you might perform. Only over the long term are you likely to be able to tell. NB You may have good a priori reasons to pick one or the other.
(2) If it's stationary, AR(1) or more complex model? The ACF hints at other possibilities that are worth testing, but doesn't rule out an AR(1) - remember that real ACFs from short series can look quite different from the theoretical ones. Most people would go for the simplest, at least for the time being, provided that it fits well enough (& see above about AICs). NB A priori considerations can be important here too.
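On point (1), a short simulation shows why the two cases are so hard to tell apart in practice. Over 100 observations, a random walk and a stationary AR(1) with a high parameter produce correlograms that look essentially the same:

```python
import random

random.seed(7)

n = 100  # a "short" series, as in the question

def simulate(phi):
    """AR(1) recursion; phi = 1 gives a random walk."""
    s = [0.0]
    for _ in range(n - 1):
        s.append(phi * s[-1] + random.gauss(0, 1))
    return s

def acf1(s):
    """Sample lag-1 autocorrelation."""
    m = sum(s) / len(s)
    c0 = sum((v - m) ** 2 for v in s)
    return sum((s[t] - m) * (s[t + 1] - m) for t in range(len(s) - 1)) / c0

rw  = simulate(1.00)   # random walk (unit root, non-stationary)
ar1 = simulate(0.95)   # stationary, but highly persistent

print(f"lag-1 ACF, random walk:      {acf1(rw):.2f}")
print(f"lag-1 ACF, AR(1) phi=0.95:   {acf1(ar1):.2f}")
# Both near 1 -- over this sample size the correlograms are
# hard to distinguish, which is why unit root tests have low power here
```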
The real distinction between kinds of data is whether or not there exists a natural ordering of the observations that corresponds to real-world structure and is relevant to the issue at hand.
Of course, the clearest (and indisputable) "natural ordering" is that of time, hence the usual dichotomy "cross-sectional / time series". But as pointed out in the comments, we may have non-time-series data that nevertheless possess a natural spatial ordering. In such a case, all the concepts and tools developed in the context of time-series analysis apply equally well: you are supposed to realize that a meaningful spatial ordering exists, and not only preserve it, but also examine what it may imply for the series of the error term, among other aspects of the whole model (such as the existence of a trend, which would make the data non-stationary, for example).
For a (crude) example, assume that you collect data on the number of cars that have stopped at various stop-in establishments along a highway on a particular day (that's the dependent variable). Your regressors measure the facilities/services each stop-in offers, and perhaps other things such as distance from highway exits/entrances. These establishments are naturally ordered along the highway...
But does this matter? Should we maintain the ordering, and even wonder whether the error term is autocorrelated? Certainly. Assume that some facilities/services at establishment No 1 are in reality non-functional on this particular day (an event that would be captured by the error term). Cars intending to use those facilities/services will stop in anyway, because they do not know about the problem. But they will find out about it, and so they will also stop at the next establishment, No 2, where, if what they want is on offer, they will be served and won't stop at establishment No 3. There is a possibility, though, that establishment No 2 will turn out to be expensive, in which case they will, after all, try establishment No 3 too. This means that the dependent variables of the three establishments may not be independent, which is equivalent to saying that the three corresponding error terms may be correlated - and not "equally", but depending on their respective positions.
So the spatial ordering is to be preserved, and tests for autocorrelation must be executed - and they will be meaningful.
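One such test translates directly once the residuals are sorted by position. A minimal sketch of the Durbin-Watson statistic on hypothetical regression residuals (the values are invented for illustration), ordered as the establishments sit along the highway:

```python
# Hypothetical residuals from the stop-in regression, one per
# establishment, ordered by position along the highway (invented values)
resid = [1.2, 0.9, 0.7, -0.3, -0.8, -0.5, 0.1, 0.6, 0.4, -0.2]

# Durbin-Watson statistic: values near 2 suggest no lag-1 autocorrelation,
# values well below 2 suggest positive autocorrelation along the ordering
dw = (sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
      / sum(e * e for e in resid))
print(f"DW = {dw:.2f}")   # well below 2 here: neighbouring errors move together
```

The statistic only makes sense because the ordering is real; shuffle the establishments and the same numbers yield a meaningless answer - which is exactly the point above.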
If, on the other hand, no such "natural" and meaningful ordering appears to be present for a specific data set, then any possible correlation between observations should not be called "autocorrelation" - that would be misleading - and the tools specifically developed for ordered data are inapplicable. Correlation may very well still exist, but in such a case it is rather more difficult to detect and estimate.
Multicollinearity can't even be defined unless you have multiple explanatory ("X") variables.
Your explanation doesn't suggest you fully understand these two terms. Correlation between explanatory variables is what collinearity expresses; autocorrelation merely means that you will find significant correlation if you regress the dependent variable against itself at some lag.
Also, before you just start running tests, you should specify a model (and hypothesis). That is, ask yourself whether it is even reasonable to suggest that one explanatory variable is correlated with another, or that the predicted variable has some time interdependence with its own past values.
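The distinction is easy to see side by side. A small sketch with invented numbers (purely illustrative): multicollinearity is correlation *between* regressors, autocorrelation is correlation of a variable *with its own lagged values*.

```python
import math

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical regressors and response (values invented for illustration)
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.1, 2.9, 4.2, 4.8, 6.1]   # nearly a copy of x1
y  = [0.5, 0.7, 0.6, 0.9, 1.1, 1.0]

# Multicollinearity: correlation BETWEEN explanatory variables
print(f"corr(x1, x2)     = {corr(x1, x2):.2f}")    # near 1 -> collinear

# Autocorrelation: correlation of y WITH its own lag-1 values
print(f"corr(y_t, y_t-1) = {corr(y[1:], y[:-1]):.2f}")
```

The first number tells you the design matrix is nearly degenerate; the second tells you the response carries its own history. They are diagnosed, and dealt with, quite differently.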