You may be looking for the two-sample Kolmogorov-Smirnov test, which assesses a measure of distance between the two samples' empirical cumulative distribution functions. As such, it can be used for samples of different sizes. In R, see ?ks.test.
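To make this concrete, here is a minimal sketch of a two-sample KS test using Python's scipy (the R route is ?ks.test, as above); the sample sizes and distributions below are invented purely for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Two samples of *different* sizes -- the two-sample KS test allows this.
x = rng.normal(loc=0.0, scale=1.0, size=3000)
y = rng.normal(loc=0.3, scale=1.0, size=5000)

# D is the maximum vertical distance between the two empirical CDFs.
stat, p = ks_2samp(x, y)
print(f"D = {stat:.4f}, p = {p:.2e}")
```

With a genuine 0.3-standard-deviation shift and thousands of observations, the test rejects easily.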
However, of course with datasets this large, even small deviations in the CDF will be detected as statistically significant. Whether these are clinically significant cannot be assessed by statistical tests - look at quantiles, density plots, histograms and so forth for this.
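As one way to eyeball practical significance, you can simply compare quantiles of the two samples side by side; the sketch below (made-up data, tiny shift) shows deviations that a formal test on samples this large would likely flag, yet are negligible in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.00, 1.0, 200_000)
y = rng.normal(0.02, 1.0, 200_000)  # tiny, practically negligible shift

# Compare the deciles of the two samples side by side.
qs = np.linspace(0.1, 0.9, 9)
qx, qy = np.quantile(x, qs), np.quantile(y, qs)
for q, a, b in zip(qs, qx, qy):
    print(f"{q:.1f}: {a: .3f} vs {b: .3f}  (diff {b - a: .3f})")
```

Density plots and histograms of the two samples overlaid on the same axes serve the same purpose.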
Plus, if you have a specific question you are most interested in, like whether the means differ, or the variances (assuming equal means or not), of course more specialized tests are likely available, or you might be able to perform a nonparametric test, e.g., a permutation test.
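For instance, a permutation test for a difference in means can be sketched in a few lines; the function name and data below are hypothetical, just to show the resampling logic:

```python
import numpy as np

def perm_test_mean_diff(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means:
    repeatedly shuffle the pooled observations across the two group
    labels and compare the observed mean difference against the
    resulting permutation distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])  # copy; x and y are untouched
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[: len(x)].mean() - pooled[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid p = 0

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 60)
y = rng.normal(0.8, 1.0, 80)
p = perm_test_mean_diff(x, y)
print(f"permutation p = {p:.4f}")
```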
If the assumption of normality for one-way ANOVA does not hold, you can turn to a nonparametric analog of the one-way ANOVA: the Kruskal-Wallis test (just as the rank sum test replaces the unpaired t test when its normality assumption may not be met). If one rejects the omnibus Kruskal-Wallis test's null hypothesis, one can then use Dunn's test, or the more powerful (but less well known) Conover-Iman test, to conduct post hoc pairwise tests.
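As a quick illustration in Python's scipy (the post hoc step would use, e.g., the dunn.test package in R), with three fabricated groups:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)
# Three groups; group c is shifted upward relative to a and b.
a = rng.normal(0.0, 1.0, 50)
b = rng.normal(0.0, 1.0, 50)
c = rng.normal(1.0, 1.0, 50)

h, p = kruskal(a, b, c)  # omnibus rank-based test across all groups
print(f"H = {h:.2f}, p = {p:.2e}")
# If p is small, follow up with pairwise post hoc tests (Dunn's or
# Conover-Iman) to learn *which* groups differ.
```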
In their most general form the nonparametric tests (Kruskal Wallis, rank sum, Dunn's, etc.) do not assume equal variances among groups. Instead, they test:
$$H_{0}:P(X_{A}>X_{B})=0.5$$
with
$$H_{a}:P(X_{A}>X_{B})\ne0.5$$
Or in words: the null hypothesis is that the probability that a randomly selected observation from group A is greater than a randomly selected observation from group B equals one half. The alternative is that the probability is not one half. For the Kruskal-Wallis test, the null hypothesis is that the probability that a randomly selected value from any group is greater than a randomly selected observation from any other group equals one half, with the alternative that there is at least one group for which the probability of being greater than a randomly selected value from another group is not equal to one half.
One can interpret these as tests of location shift, median difference, or mean difference if the variances of all groups are equal and the shapes of the distributions are the same (this is a pretty stringent requirement!), but the nonparametric tests themselves do not require such assumptions in order to be used.
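The quantity $P(X_{A}>X_{B})$ in the hypotheses above can be estimated directly from the Mann-Whitney U statistic as $U/(n_{A}n_{B})$; a sketch with made-up data, using Python's scipy:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
x_a = rng.normal(0.5, 1.0, 400)  # group A, shifted upward
x_b = rng.normal(0.0, 1.0, 300)  # group B

u, p = mannwhitneyu(x_a, x_b, alternative="two-sided")
# U / (n_A * n_B) estimates P(X_A > X_B), counting ties as 1/2.
p_greater = u / (len(x_a) * len(x_b))
print(f"estimated P(X_A > X_B) = {p_greater:.3f}, p = {p:.2e}")
```

A value near 0.5 is consistent with the null; here the shift pushes the estimate well above one half.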
I have published software packages to perform Dunn's test in R (dunn.test) and in Stata (dunntest), and to perform the Conover-Iman test in R (conover.test) and in Stata (conovertest). These packages correct for ties, and implement an array of familywise error rate and false discovery rate adjustments for multiple comparisons.
References
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
Conover, W. J. (1999). Practical Nonparametric Statistics. Wiley, Hoboken, NJ, 3rd edition.
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Best Answer
If two knife types have exactly the same mean but very different variances, you would still have information useful for classification: if you see a cut that lies far from the mean relative to the small variance, but is plausible under the large variance, then it seems much more likely to have come from the knife with the larger variance. So focusing on differences in means when there are other differences is probably not the best approach.
You should look into classification analysis, possibly K nearest neighbors methods, or a Bayesian approach (using either the distribution that you believe fits the data, or a smoothed approximation like a logspline estimate).
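To make the variance point concrete, here is a minimal K-nearest-neighbors sketch in Python/numpy; the function name, class labels, and "cut width" data are all hypothetical, constructed so the two classes share a mean but differ in variance:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=5):
    """Classify each query point by majority vote among its k nearest
    training points (1-D Euclidean distance)."""
    preds = []
    for q in np.atleast_1d(query):
        idx = np.argsort(np.abs(train_x - q))[:k]  # k nearest neighbors
        preds.append(np.bincount(train_y[idx]).argmax())
    return np.array(preds)

rng = np.random.default_rng(11)
# Two "knife" classes with the same mean cut width but different variances.
cuts_a = rng.normal(5.0, 0.2, 200)   # knife A: small variance
cuts_b = rng.normal(5.0, 1.5, 200)   # knife B: large variance
train_x = np.concatenate([cuts_a, cuts_b])
train_y = np.concatenate([np.zeros(200, int), np.ones(200, int)])

# A cut near the shared mean is attributed to the tight, small-variance
# knife; a cut far from the mean to the high-variance knife.
preds = knn_predict(train_x, train_y, [5.0, 8.0], k=15)
print(preds)
```

A mean-comparison test would see nothing here, yet the classifier separates the extreme observations cleanly, which is exactly the point above.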