I agree with @andrea: it is common to see individuals matched on some independent variables, but not "matched independent variables".
If individuals are matched, you effectively have repeated measures. Zero within-pair correlation can happen but is rare, so the ordinary logistic regression you used may not be valid. Conditional logistic regression or GEE is robust to the correlation within matched sets.
The difference between conditional logistic regression and GEE lies in the interpretation: the former gives a subject-specific estimate, the latter a population-average estimate.
Your sense that you are limited by the number of cases is correct. The rule of thumb for standard multiple logistic regression is to have no more than 1 predictor variable per 15 cases of the least frequent class. With your 30 cases, that allows about 2 predictor variables. Even though you might get an apparently good fit with more predictors, such a model would be unlikely to generalize well.
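As a quick arithmetic check, the budget implied by that rule can be computed directly (the helper function and the control count of 170 below are hypothetical, for illustration only):

```python
# Rule of thumb from above: at most 1 predictor per 15 cases of the
# least frequent outcome class in a standard logistic regression.
def max_predictors(n_events, n_non_events, events_per_variable=15):
    """Return the suggested predictor budget for logistic regression."""
    least_frequent = min(n_events, n_non_events)
    return least_frequent // events_per_variable

# With 30 cases in the rarer class, the budget is 2 predictors.
print(max_predictors(30, 170))  # -> 2
```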
LASSO and other penalized methods like ridge regression let you use more predictors than that. The regression coefficients in penalized models are smaller in magnitude than they would be in a standard model with the same variables. This reduces the "optimism" that comes from fitting a small data set and makes the final model more likely to generalize, provided the penalty is chosen appropriately by, say, cross-validation.
Thus you can start with as many predictors as you wish with LASSO, ridge regression, or their hybrid, the elastic net. LASSO selects a subset of predictors and shrinks the coefficients of those it keeps; ridge regression keeps all predictors but shrinks all of their coefficients.
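A minimal sketch of that LASSO-vs-ridge contrast, using scikit-learn on simulated data (the sample sizes, penalty strength C, and coefficients are illustrative assumptions, not values from the question):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
# Only the first two predictors actually matter in this simulation.
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# L1 penalty (LASSO): drives some coefficients exactly to zero.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
# L2 penalty (ridge): shrinks all coefficients but keeps them nonzero.
ridge = LogisticRegression(penalty="l2", C=0.1).fit(X, y)

print("coefficients set to zero by LASSO:", int((lasso.coef_ == 0).sum()))
print("coefficients set to zero by ridge:", int((ridge.coef_ == 0).sum()))
```

In practice the penalty strength would be chosen by cross-validation (e.g. `LogisticRegressionCV`) rather than fixed in advance.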
There are at least two limitations to this approach. First, the particular variables selected by LASSO can differ substantially among data samples, even with a large data set, as you can verify by repeating your modeling on multiple bootstrapped samples. Second, with so few cases your coefficients will be heavily shrunk toward 0. Also, some care is needed with categorical predictors, as discussed on this page.
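That selection instability can be checked empirically along these lines (again simulated data and scikit-learn; the sample size and penalty are hypothetical choices for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 60, 8  # deliberately small sample to mimic the setting
X = rng.normal(size=(n, p))
logit = 1.0 * X[:, 0] - 1.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

selected = np.zeros(p)
n_boot = 100
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)  # bootstrap resample with replacement
    fit = LogisticRegression(penalty="l1", solver="liblinear",
                             C=0.5).fit(X[idx], y[idx])
    selected += (fit.coef_[0] != 0)

# Fraction of bootstrap fits in which each predictor was selected;
# unstable selection shows up as fractions far from both 0 and 1.
print(np.round(selected / n_boot, 2))
```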
Finally, coronary artery disease has been extensively studied in many large-scale data sets for many decades. Please think carefully about what you are likely to add to this body of knowledge with such a small data set.
Best Answer
R supports what are called generalized linear mixed-effects models. In these, the response variable can come from one of several families, including binomial (which, with a 0/1 coding and logit link, gives logistic regression).
The function is glmer() in the lme4 package. It lets you specify a family (e.g. binomial) and a link function (e.g. 'logit'), and it allows the specification of random effects and nesting, just as lmer() does for Gaussian responses. You can find more info on Doug Bates' slides, in particular the very last one, here. He wrote lme4, so I believe him when he says it works.
Keep in mind that you need a reasonable number of distinct 'subjects' (more than 6 or so) to estimate the random effects well.