If every observation is on the same scale (i.e. the same performance metric was used each day), then in general I would not recommend normalization, since the scores already have comparable location and spread information.
Since z-scoring a variable is a positive affine transformation (subtract the mean, divide by the positive standard deviation), and since Pearson correlation is invariant to positive affine transformations, z-transforming columns and then clustering the columns with correlation distance will be no different from using the raw scores (see http://www.math.uah.edu/stat/expect/Covariance.html just after #8). Z-normalizing rows (within employee), on the other hand, will wipe out each employee's average performance level, which may be highly inappropriate depending on your goals.
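A minimal sketch of that invariance (assuming NumPy; the variables `x`, `y` and the sample size are purely illustrative): z-scoring both columns leaves the Pearson correlation, and hence correlation distance, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)             # raw scores for one column
y = 0.5 * x + rng.normal(size=100)   # a correlated second column

def zscore(v):
    # Positive affine transformation: shift by the mean, scale by the sd.
    return (v - v.mean()) / v.std()

r_raw = np.corrcoef(x, y)[0, 1]
r_z = np.corrcoef(zscore(x), zscore(y))[0, 1]
print(r_raw, r_z)  # identical up to floating-point error
```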
If you are clustering days based on employee performance, presumably you would like a cluster of "high performing days" and a separate cluster of "low performing days." If this is the case, DO NOT use correlation distance, since it ignores mean differences. For example, correlation distance assigns a very low distance (= high similarity) between day 1, with scores (11,12,13,14,15), and day 2, with scores (2,2,3,4,5); and it assigns a much higher distance between day 1 and day 3, with scores (12,13,12,13,12). This is probably not the sort of result you want. You probably want something like Euclidean distance, as the check below illustrates.
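A quick numerical check of that example (assuming NumPy and SciPy; SciPy's `correlation` returns 1 − Pearson r):

```python
import numpy as np
from scipy.spatial.distance import correlation, euclidean

day1 = np.array([11, 12, 13, 14, 15])
day2 = np.array([2, 2, 3, 4, 5])
day3 = np.array([12, 13, 12, 13, 12])

print(correlation(day1, day2))  # ~0.03: "very similar" despite the level gap
print(correlation(day1, day3))  # 1.0: "dissimilar" despite similar levels
print(euclidean(day1, day2))    # ~21.9: far apart in raw scores
print(euclidean(day1, day3))    # ~3.6: close in raw scores
```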
It is imperative that you think carefully about your goals here and select normalization methods and distance metrics (not to mention clustering algorithms) accordingly.
ELKI includes a class called KMeansOutlierDetection (and many more). But of all the methods that I have tried, this one worked worst: even on easy, artificial data it does not perform well, except on the trivial outliers that literally any method will detect.
The problem with cluster-based outlier detection is that you need a really, really good clustering result for it to work, and k-means often does not deliver one. When k-means partitions the data badly, you get false outliers along the bad cuts that it made.
Even worse, k-means is sensitive to outliers. So when you have lots of outliers, it tends to produce really bad results. You will want to first remove outliers, then run k-means; not the other way round!
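A small sketch of that ordering problem (scikit-learn assumed; the crude distance-from-median filter is only an illustrative stand-in for a proper outlier-removal step, not a fixed recipe):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)),   # cluster around (0, 0)
               rng.normal(8, 1, (50, 2)),   # cluster around (8, 8)
               [[100.0, 100.0]]])           # one gross outlier

# Fit before removal: the extreme point tends to claim a centroid all by
# itself, merging the two real clusters under the other centroid.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).cluster_centers_)

# Crude robust filter first, then fit: centers land on the real clusters.
d = np.linalg.norm(X - np.median(X, axis=0), axis=1)
keep = d < 3 * np.median(d)
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[keep]).cluster_centers_)
```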
You will end up with lots of "outliers" at the borders between clusters. But if the clusters are not good, those borders may well run through the very middle of the data!
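For reference, the usual k-means outlier score is the distance from each point to its assigned centroid. A rough sketch of that scoring (scikit-learn here, not ELKI's KMeansOutlierDetection; the toy blob data is an assumption) shows the mechanism; when the partition is bad, the top-scored points sit along those bad cuts rather than at genuine anomalies:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Score: Euclidean distance from each point to its assigned centroid.
scores = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

top10 = np.argsort(scores)[-10:]  # the 10 most "outlying" points
print(X[top10])
```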
Choose any distance-based clustering algorithm.
Have a look at ELKI. First of all, it probably has the largest selection of clustering algorithms, and you can easily plug in arbitrary distance functions. It also provides Pearson correlation distance, along with various specialized time-series distances.
Depending on your domain knowledge, DBSCAN could be a good choice, if you can define a reasonable distance threshold and minimum cluster size.
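ELKI itself is Java; as a hedged stand-in for the same workflow, here is scikit-learn's DBSCAN run on a precomputed Pearson-correlation distance matrix. The `eps` and `min_samples` values are placeholders you would pick from domain knowledge, not recommendations, and the data is a toy example.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))  # e.g. 60 days x 5 employees (toy data)

D = squareform(pdist(X, metric="correlation"))  # 1 - Pearson r
labels = DBSCAN(eps=0.3, min_samples=5, metric="precomputed").fit_predict(D)
print(labels)  # -1 marks points DBSCAN leaves unclustered (noise/outliers)
```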