Below is a point-by-point breakdown of the use and interpretation of conjoint analysis. I would suggest that you read carefully through the details to ensure the questions you are asking are relevant for your use case:
http://www.qualtrics.com/university/researchsuite/research-resources/data-analysis-guides/the-use-and-interpretation-of-conjoint-analysis/
Depending on the size of your sample, you may want to limit the number of attributes that respondents are asked to evaluate, in order to avoid the problems that can arise as the attribute matrix grows large.
As for the price elasticity of demand, keep in mind that it is only representative of the sample you are dealing with and is not necessarily an actual market price-sensitivity analysis (outlined by IBM here: http://www.ibm.com/developerworks/library/ba-price-sensitivity/).
A fair warning about the implications of assuming you are measuring price elasticity of demand directly can be found in this excerpt from the link below:
"In summary, the common practice of converting differences between attribute
levels to a monetary scale is potentially misleading. The value of product enhancements can be better assessed through competitive market simulations."
http://www.sawtoothsoftware.com/download/techpap/interpca.pdf
These resources should get you off on the right foot.
flexmix would do the job, but (as far as I remember) only if you model binary (Yes/No) or pairwise (A vs. B) choices. (Last time I checked, the authors were working on an extension to multinomial (MNL) choices.)
However, latent class logit (LCL) models are relatively easy to code, as they consist of a discrete mixture of standard MNL models (so if you know how to code an MNL model, you should be able to write your own LCL code).
Here is an example for an LCL with 2 classes, with the data stacked in long format:
X -> Matrix of independent variables (e.g., attribute levels)
Y -> Column vector of observed choices (0/1)
N -> Column vector of respondent IDs (e.g., 1 1 1 1 2 2 2 2 3 3 3 3 ...)
G -> Column vector of choice situation IDs (e.g., 1 1 2 2 3 3 4 4 5 5 6 6 ...)
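For concreteness, here is a small made-up illustration of that format (the attribute names and values are invented): two respondents, each facing two choice tasks with two alternatives per task.
X = data.frame(price  = c(10, 15, 12, 8, 10, 15, 12, 8),   # invented attribute
               brandA = c( 1,  0,  1, 0,  1,  0,  1, 0))   # invented attribute
Y = c(1, 0, 0, 1, 0, 1, 1, 0)   # the chosen alternative in each task gets a 1
N = c(1, 1, 1, 1, 2, 2, 2, 2)   # respondent ID, repeated over all of that respondent's rows
G = c(1, 1, 2, 2, 3, 3, 4, 4)   # choice task ID, unique across respondents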
In this code, the model specification is quite simple:
- Only 2 latent classes.
- Same set of predictors for the 2 classes (it is possible to add constraints).
- Constant only for class membership (it is possible to add covariates such as age, gender, etc.).
loglik.LCL = function(beta, X, Y, N, G){
  # Negative log-likelihood of a 2-class latent class logit.
  # Assumes rows are sorted by respondent (N) and choice situation (G),
  # with exactly one chosen alternative (Y == 1) per choice situation.
  K = ncol(X)
  ### Class 1: MNL choice probabilities with the first K coefficients
  num1  = exp(as.matrix(X) %*% as.vector(beta[1:K]))
  den1  = tapply(num1, G, sum)           # denominator per choice situation
  prb1  = num1[Y == 1] / den1            # probability of the chosen alternative
  sprb1 = tapply(prb1, N[Y == 1], prod)  # product over each respondent's choices
  ### Class 2: MNL choice probabilities with the second K coefficients
  num2  = exp(as.matrix(X) %*% as.vector(beta[(K + 1):(2 * K)]))
  den2  = tapply(num2, G, sum)
  prb2  = num2[Y == 1] / den2
  sprb2 = tapply(prb2, N[Y == 1], prod)
  ### Membership (constant only; class 1 is the reference)
  cla1 = exp(0)
  cla2 = exp(beta[2 * K + 1])
  CLA  = cla1 + cla2
  ### Negative log-likelihood (to be minimised by the optimiser)
  llik = -sum(log(cla1/CLA * sprb1 + cla2/CLA * sprb2))
  return(llik)
}
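Given data in the format above, the parameters can be estimated by minimising this function with a general-purpose optimiser; here is a minimal sketch using optim() (the starting values and optimiser settings are illustrative):
set.seed(123)
start = runif(2 * ncol(X) + 1, -0.1, 0.1)   # jittered starts: an all-zero start leaves the two classes identical
fit = optim(par = start, fn = loglik.LCL,
            X = X, Y = Y, N = N, G = G,
            method = "BFGS", control = list(maxit = 500))
fit$par     # class-1 betas, class-2 betas, class-membership constant
fit$value   # negative log-likelihood at the optimum
The implied share of class 2 is exp(c)/(1 + exp(c)), where c is the estimated membership constant, and approximate standard errors can be obtained from the inverse of the numerical Hessian (hessian = TRUE in optim).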
Remark: it is possible to write a more efficient version of this code if you have a complete (balanced) dataset, by replacing tapply() with matrix operations (reshaping, colSums(), etc.); see the sketch below.
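For instance, when every choice task contains the same number of alternatives J and the rows are sorted by G, the class-1 denominator inside the function could be computed without tapply() (J = 2 is an assumed constant here):
J    = 2                               # alternatives per choice task (assumed constant)
den1 = colSums(matrix(num1, nrow = J)) # one column per task, so colSums gives the per-task denominator
prb1 = num1[Y == 1] / den1             # probability of the chosen alternative, as before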
You can compare your results with the "lclogit" Stata command.
Best Answer
One option for reducing the number of presented profiles is to use a 2^k factorial design (or a fraction of it) to cut down the number of comparisons. In R you can use the conf.design library (there are many others: 'agricolae', 'AlgDesign'); see the sketch below. This is a basic approach, but if you want to go deeper into the topic of adaptivity (as in ACA), you can look at Fast Polyhedral Conjoint estimation.
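As a minimal sketch with the 'AlgDesign' package (the attribute names and level counts are invented): generate the full factorial of candidate profiles, then ask optFederov() for a smaller D-efficient fraction to present to respondents.
library(AlgDesign)
# Full factorial of candidate profiles: 3 x 3 x 2 = 18 combinations (invented attributes)
full = gen.factorial(levels = c(3, 3, 2), nVars = 3, factors = "all",
                     varNames = c("brand", "price", "warranty"))
# Select a D-efficient subset of, say, 9 profiles to show respondents
frac = optFederov(~ ., data = full, nTrials = 9)
frac$design   # the reduced set of profiles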
You said: "However, in order to do ACA, as far as I know of at this time, one must have the survey hosted by a very expensive (e.g. $10,000) conjoint-oriented survey host." So you can design design your conjoint analysis (using R ) and apply versatile questionnaires with open source tools (e.g Limesurvey.com)