As far as I know (and I've researched this issue deeply in the past), there are no predictive modeling techniques, besides tree-based methods (Random Forest, XGBoost, etc.), that are designed to handle both types of input at the same time without simply transforming the features' types.
Note that algorithms like Random Forest and XGBoost accept an input of mixed features, but they apply some internal logic to handle them when splitting a node.
Make sure you understand the logic "under the hood" and that you're OK with whatever is happening in the black-box.
However, distance/kernel-based models (e.g., k-NN, NN regression, support vector machines) can handle a mixed-type feature space by defining a "special" distance function that, for every feature, applies an appropriate distance metric (e.g., for a numeric feature we'll calculate the Euclidean distance of two numbers, while for a categorical feature we'll simply calculate the overlap distance of two string values).
So the distance/similarity between users $u_1$ and $u_2$ in feature $f_i$ is defined as follows:

$$d(u_1,u_2)_{f_i} = \begin{cases} \text{dist-categorical}(u_1,u_2)_{f_i} & \text{if feature } f_i \text{ is categorical} \\ \text{dist-numeric}(u_1,u_2)_{f_i} & \text{if feature } f_i \text{ is numeric} \\ 1 & \text{if feature } f_i \text{ is not defined in } u_1 \text{ or } u_2 \end{cases}$$
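As a minimal Python sketch of this per-feature mixed distance: I represent each user as a dict of feature name to value, use absolute difference for numeric features and overlap distance (0 if equal, 1 otherwise) for categorical ones. The dict encoding and the absolute-difference metric are my own illustrative assumptions, not from any particular library.

```python
def mixed_distance(u1, u2):
    """Sum of per-feature distances over a mixed-type feature space.

    Numeric features: absolute difference of the two values.
    Categorical features: overlap distance (0 if equal, 1 otherwise).
    A feature missing from either user contributes 1, as in the
    piecewise definition above.
    """
    total = 0.0
    for f in set(u1) | set(u2):
        if f not in u1 or f not in u2:
            total += 1.0  # feature undefined for one user -> maximal distance
        elif isinstance(u1[f], (int, float)) and isinstance(u2[f], (int, float)):
            total += abs(u1[f] - u2[f])  # numeric distance
        else:
            total += 0.0 if u1[f] == u2[f] else 1.0  # overlap distance

    return total

u1 = {"age": 30, "city": "Paris"}
u2 = {"age": 25, "city": "London", "job": "dev"}
mixed_distance(u1, u2)  # 5 (age) + 1 (city) + 1 (job undefined) = 7.0
```

In practice you'd normalize the numeric features first so that no single feature dominates the sum.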
Some known distance functions for categorical features:
Best Answer
ID3 is an algorithm for building a decision tree classifier based on maximizing information gain at each level of splitting across all available attributes. It's a precursor to the C4.5 algorithm.
With this data, the task is to correctly classify each instance as either benign or malignant. Since each attribute takes on whole integer values in the range 1-10, strictly speaking the values aren't continuous, in that they can't take decimal values. For each integer value of each attribute, you'll need to calculate which split provides the most homogeneous grouping of instances at each level of splitting. This is done by calculating the information gain for each possible split and selecting the greatest (ID3 is known as a greedy algorithm).
You can do this by hand, but it's obviously better to run the algorithm in a tool such as Weka or R. If you're creating your own implementation, then you'll need to test each possible split and select the one with the greatest information gain, assuming you don't already have a homogeneous group (in which case you'd assign the class attribute and change the node to a leaf).
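To make the split-selection step concrete, here is a minimal plain-Python sketch of the information-gain calculation for one attribute. The toy data and the integer thresholds 1-9 mirror the 1-10 attribute range described above; this is an illustration of the greedy criterion, not Weka's or R's implementation.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels, threshold):
    """Gain from splitting instances into value <= threshold vs. value > threshold."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

def best_split(values, labels):
    """Greedily pick the integer threshold (1..9) with the greatest gain."""
    return max(range(1, 10), key=lambda t: information_gain(values, labels, t))

# Toy attribute: low values benign, high values malignant.
values = [1, 2, 3, 8, 9, 10]
labels = ["benign", "benign", "benign", "malignant", "malignant", "malignant"]
best_split(values, labels)  # 3: splitting at <= 3 separates the classes perfectly
```

In a full implementation you would run this for every attribute, split on the attribute/threshold pair with the overall greatest gain, and recurse until a node's group is homogeneous.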