There are several statistics related to the chi-square that measure association in contingency tables.
For example, there's Cramér's $\phi$ (also written $\phi_C$ or Cramér's $V$), which in $2\times 2$ tables is also called the phi coefficient.
For an $r\times c$ table,
$V = \sqrt{ \frac{\chi^2/n}{\min(c - 1,r-1)}}$
In your case, if one of your variables is Y, which you state to be 0-1, then $\min(c-1, r-1) = 1$ and the formula reduces to:
$V = \sqrt{ \chi^2/n}$
where $n$ is the total number of observations.
Wikipedia:
Cramer's Phi/Cramer's V
Phi coefficient
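To make the formula concrete, here is a minimal sketch in R. The variables `Category` and `Medium` are simulated stand-ins, not your data; the point is just the arithmetic of $V$:

```r
## Hypothetical illustration of the formula above; Category and Medium
## are simulated stand-ins, not your data.
set.seed(1)
Category <- sample(c("H", "M", "L"), 200, replace = TRUE)
Medium   <- sample(c("Desk", "Phone"), 200, replace = TRUE)

tab  <- table(Category, Medium)       # r x c contingency table
chi2 <- chisq.test(tab)$statistic     # the chi-square statistic
n    <- sum(tab)                      # total number of observations
V    <- sqrt((chi2 / n) / min(ncol(tab) - 1, nrow(tab) - 1))
V    # here min(c-1, r-1) = 1, so this is just sqrt(chi2/n)
```

If you'd rather not compute it by hand, `assocstats()` in the vcd package reports Cramér's V (along with the phi coefficient) directly.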
There are a number of other ways of measuring association in contingency tables.
Let's take your first goal, which is to test for a difference in the rate of desk vs. non-desk mediums across H vs. non-H categories. If this is a valid rephrasing of your goal, then you can transform your variables accordingly and run a bivariate logistic regression. Your data are probably too sparse to run even an example model (and your code isn't copy-and-pastable), so I can't give you tested and complete syntax, but here's a dry run:
summary(mod <- glm( I(Medium=="Desk") ~ I(Category=="H"), binomial(), dat ))
predict(mod, data.frame(Category=c("H","NotH")), type="response")
The significance of the one predictor here will tell you whether the difference in rates of Desk is significant in category H compared to both other categories lumped together. The second line will give you the actual predicted probability of a Desk medium for an H category vs. either of the non-H ones.
If you want to know whether this category-H desk rate differs from the desk rate of one specific other category (say M), I would run the model on a subset of the data that excludes the third category (say L). Assuming your dataset is named dat:
summary(mod <- glm( I(Medium=="Desk") ~ I(Category=="H"), binomial(),
dat, subset= Category!="L"))
I'm addicted to regression (and some of your other goals here might call for multinomial logistic models, by the way), so I just default to this approach. That said, there is a chi-square solution to at least some of your research questions, especially if you transform the variables first and treat the trues and falses as categories. Proportionality tests, however, are not relevant here, assuming you're referring to the proportional-odds assumption, which doesn't apply to unordered or binary variables.
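For instance, the first research question could be handled with `chisq.test()` after collapsing each variable to a logical. Here `dat` is simulated for illustration, reusing the column names from the glm() snippets above:

```r
## Sketch of the chi-square route: collapse each variable to TRUE/FALSE
## and test the resulting 2x2 table. `dat` is simulated for illustration.
set.seed(1)
dat <- data.frame(Medium   = sample(c("Desk", "Field", "Phone"), 150, TRUE),
                  Category = sample(c("H", "M", "L"), 150, TRUE))

tab <- table(Desk = dat$Medium == "Desk", H = dat$Category == "H")
chisq.test(tab)   # tests Desk vs. non-Desk rates across H vs. non-H
```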
Best Answer
You can run a loglinear analysis. Check out Andy Field's Discovering Statistics Using R; he has an entire section on this in Chapter 18.
A chi-square test of independence can be derived from loglinear analysis: it tests for a two-way interaction between your two categorical variables. If the chi-square test is significant, that implies a significant two-way interaction, and therefore that the variables are not independent (that's how the chi-square test of "independence" gets its name).
Loglinear analysis extends to three variables, so you can test for a relationship among three categorical variables. You start with a saturated model that includes all 3 main effects, all 3 two-way interactions, and the single three-way interaction. You then remove the three-way interaction and compare the saturated model to the new model using a likelihood ratio test (basically comparing the deviance of the new model to the deviance of the saturated one). If the likelihood ratio test is significant, you can say there is a significant three-way interaction among your categorical variables. You can then stratify your data by the levels of one of your categorical variables (which one you choose depends on what you find interesting) and follow up with two separate chi-square tests of independence.
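The procedure above can be sketched in R with Poisson regressions fit to the cell counts; `A`, `B` and `C` below are invented binary factors standing in for your three categorical variables:

```r
## Sketch of the loglinear procedure via Poisson regression on cell counts.
## A, B and C are invented stand-ins for your categorical variables.
set.seed(1)
d <- data.frame(A = sample(c("a1", "a2"), 300, TRUE),
                B = sample(c("b1", "b2"), 300, TRUE),
                C = sample(c("c1", "c2"), 300, TRUE))
tab <- as.data.frame(table(d))                    # one row per cell, with Freq

sat <- glm(Freq ~ A * B * C, poisson(), tab)      # saturated model
no3 <- glm(Freq ~ (A + B + C)^2, poisson(), tab)  # three-way term removed
anova(no3, sat, test = "LRT")                     # LR test of the 3-way interaction
```

`loglm()` in the MASS package fits the same models with a more table-oriented interface. If the test is significant, subset the data by one factor's levels and run `chisq.test()` on each resulting two-way table.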
(I just realized this question is from a few years ago. Anyway, I hope this offers some clarification nevertheless.)