I have conducted a clinical trial with two groups receiving two different treatment modalities. I have measured various physical and biochemical parameters (e.g. pulse, systolic blood pressure, serum sodium level, etc.) in each group at different time intervals (say pre-intervention, after 1 month and after 2 months). Now I want to measure the effect of each treatment modality on these parameters over time, and at the same time look for any significant difference between the two groups. Is this study design suitable for repeated measures ANOVA? If so, how do I do it in SPSS 19? Is there any other way to do the analysis?

# Solved – Repeated measures ANOVA for two different groups

clinical-trials, sequential-analysis, spss

#### Related Solutions

First off, I would suggest learning how to calculate power explicitly instead of using an online calculator. You would do this in a full statistical programming language, like R. That way, you can calculate power for a wide range of scenarios that aren't covered by the typical calculator (like a 3-arm study). The accepted answer provided here offers an excellent description of how to do this.

However, using an online calculator for a straightforward comparison of proportions seems reasonable. In a power calculation, you typically need to assume three quantities and calculate the fourth. You want to calculate sample size, so you need to assume an alpha level, a power, and an effect size. Alpha and power are usually set at 0.05 and 0.80, respectively. That just leaves effect size. For proportions, your effect size is given by the two proportions in the control and supplement arms. For your first comparison, you want power to detect the difference between proportions of 0.45 and 0.025. Using the calculator you linked, I get a total sample size of 29 (without continuity correction). To see how much the assumptions matter, you can try 0.45 and 0.40, which brings it up to a whopping 3067 (detecting small differences between proportions near 0.50 is difficult). From here, you can change your assumptions as you see fit, such as the allocation ratio between groups or the required power.

Ideally, of course, you should specify both your unadjusted and adjusted models beforehand. Sound scientific practice requires that the variables for the adjusted model be decided a priori, based on prior knowledge of variables that correlate strongly with the dependent (outcome) variable; variables should not be cherry-picked by analysing baseline data. Also beware of the risk of overfitting by throwing too many variables into the model and going back and forth putting variables in and leaving them out (there is a lot of literature on this topic). Regarding your outcome variable, however, baseline differences between groups should always be adjusted for and specified in a traditional ANCOVA (e.g. linear regression); see Altman and Vickers for the traditional points on adjusting for the baseline of the dependent variable, in the BMJ statistics series.
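To make the traditional ANCOVA concrete, here is a minimal sketch with simulated data: the follow-up value is regressed on treatment group plus the baseline value of the same outcome. All variable names and numbers are illustrative, not from the original question.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 100                                   # subjects per arm (illustrative)
baseline = rng.normal(50, 10, 2 * n)      # baseline values of the outcome
group = np.repeat([0, 1], n)              # 0 = control, 1 = treatment
true_effect = -3.0                        # simulated treatment effect
followup = (10 + 0.8 * baseline + true_effect * group
            + rng.normal(0, 5, 2 * n))    # follow-up depends on baseline

df = pd.DataFrame({"followup": followup, "baseline": baseline, "group": group})

# ANCOVA: follow-up adjusted for baseline, with group as the exposure
fit = smf.ols("followup ~ baseline + group", data=df).fit()
print(fit.params["group"])  # baseline-adjusted treatment effect estimate
```

The `group` coefficient is the baseline-adjusted between-group difference, which is the quantity Altman and Vickers recommend reporting for trials.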

However, how to specify the model and adjust for baseline differences of the dependent variable between groups is a hot topic when it comes to mixed models (see Twisk 2018, which contains some errors, and see the articles and a recently published book on analysing randomized trials with mixed models by the Japanese statistician Toshiro Tango).

So far I am inclined to follow Tango's suggestions, specifying the model along these lines (random intercept):

Y_ij = B0 + B1*X_i + B2*time_j + B3*(time_j * X_i) + b_i + e_ij

where:

- Y_ij is the outcome/dependent variable (walking time in your case), based on repeated measurements nested within each individual (i) at each time point (j).
- B0 is the regression coefficient for the control group, i.e. its mean at baseline.
- B1 is the baseline difference for the treatment group (X coded 0 for the control group and 1 for the treatment group).
- B2 is the effect of time for the control group, i.e. the post mean is B0 + B2.
- B3 is the time*group interaction, i.e. the difference in change over time between treatment and control. This is the coefficient you would normally use as the effect estimate of the treatment compared to control, and on which you would decide whether to reject H0 (that there is no difference between groups).
- b_i is the random intercept, capturing individual variation at baseline; by assumption it is normally distributed with mean 0.
- e_ij is the error term.

(This model is simplified a bit by leaving out the random slope, which is a debate of its own.)

This model is specified with time as pre/post only, but with more repeated measurements you just add time points and their interactions with group (e.g. B4*time2 + B5*(time2*X), B6*time3 + ... etc.). If you only have baseline and one follow-up measurement, then traditional ANCOVA (regression) might be a better choice than a mixed model. One of the great advantages of mixed models is the way they handle missing data without imputation, as long as you can assume the data are missing at random.
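The random-intercept specification above can be sketched in code. This is an illustration on simulated long-format data, not the questioner's data; all names and effect sizes are made up, and the model is fitted with statsmodels' mixed linear model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40                                     # subjects per arm (illustrative)
subjects = np.arange(2 * n)
b_i = rng.normal(0, 1.0, 2 * n)            # random intercepts, mean 0
group = np.repeat([0, 1], n)               # X: 0 = control, 1 = treatment

rows = []
for s in subjects:
    for t in (0, 1):                       # time: 0 = pre, 1 = post
        y = (20.0                          # B0: control mean at baseline
             + 0.5 * group[s]              # B1: baseline group difference
             + 1.0 * t                     # B2: time effect in controls
             + 2.0 * t * group[s]          # B3: time*group interaction
             + b_i[s]                      # random intercept
             + rng.normal(0, 0.5))         # residual error e_ij
        rows.append({"subject": s, "group": group[s], "time": t, "y": y})

df = pd.DataFrame(rows)

# Random intercept per subject; fixed effects B0..B3 via group * time
fit = smf.mixedlm("y ~ group * time", data=df, groups=df["subject"]).fit()
print(fit.params["group:time"])  # estimate of B3, the treatment effect
```

The `group:time` coefficient is the B3 of the model above, and the per-subject random intercept corresponds to b_i; subjects with missing visits could simply be left in the long-format data under a missing-at-random assumption.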

Since I do not use SPSS I cannot help you with the exact syntax for running the model above, but I bet others can. I hope this was a bit helpful, even though this topic can be more confusing than one would expect at first.

## Best Answer

You could do a repeated measures ANOVA for each of your DVs separately, with time (pre / 1 month / 2 months) as the within-subjects factor and treatment group as the between-subjects factor. I don't have a copy of SPSS at present, but this primer looks about right.
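Outside SPSS, the within-subjects (time) part of this can be sketched with statsmodels' `AnovaRM`; note that it requires balanced long-format data and does not implement between-subjects factors, so you would run it per group or fall back to a mixed model for the full design. All names and numbers below are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
subjects = np.arange(20)                               # one group of subjects
time_effects = {"pre": 0.0, "1mo": 1.0, "2mo": 2.0}    # simulated time trend

# Balanced long format: each subject measured once at each time point
rows = [{"subject": s, "time": t,
         "pulse": 70 + time_effects[t] + rng.normal(0, 1)}
        for s in subjects for t in time_effects]
df = pd.DataFrame(rows)

# One-way repeated measures ANOVA over time for a single DV in one group
res = AnovaRM(data=df, depvar="pulse", subject="subject",
              within=["time"]).fit()
print(res.anova_table)  # F test for the within-subject time effect
```

You would repeat this per DV (pulse, systolic BP, serum sodium, ...), keeping in mind the multiple-comparison burden that creates.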

If your dependent measures are moderately correlated (rs = .4 to .7), you may want to consider a MANOVA or creating some composite measures. Unfortunately, I do not know much about conducting a MANOVA in SPSS, so I leave that to better minds.