A Naive Bayes model is assessing the following sentence: Get meds from Canadian pharmacies discreetly shipped to your door no prescription needed! Which of the following statements is true?
a. The model will be rendered ineffective by the word "Canadian".
b. The model will assess each of the words in the sentence independently (i.e. regardless of what other words are in the sentence).
c. The model will look for suspicious multiple-word strings, such as "discreetly shipped" and "no prescription".
d. The model will make a classification estimate for this sentence as either spam or ham without factoring in the overall percentage of e-mails in the training set that are either spam or ham.

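The independence assumption in option (b) is what makes the model "naive": each word contributes its own likelihood term, regardless of which other words appear in the sentence. A minimal sketch, using invented word likelihoods and class priors (not from any real training set):

```python
from math import log

# Hypothetical word likelihoods P(word | spam) and P(word | ham).
# Values are illustrative only, not taken from real data.
p_word_given_spam = {"meds": 0.02, "pharmacies": 0.015, "prescription": 0.02}
p_word_given_ham = {"meds": 0.0005, "pharmacies": 0.0004, "prescription": 0.001}

p_spam, p_ham = 0.3, 0.7  # hypothetical class priors from the training set

def log_score(words, likelihoods, prior):
    # Each word adds its own log-likelihood term independently of the
    # other words -- the "naive" conditional-independence assumption.
    return log(prior) + sum(log(likelihoods[w]) for w in words)

words = ["meds", "pharmacies", "prescription"]
spam_score = log_score(words, p_word_given_spam, p_spam)
ham_score = log_score(words, p_word_given_ham, p_ham)
print("spam" if spam_score > ham_score else "ham")
```

Note that the priors do enter the score, which is why option (d) is wrong as well: a Naive Bayes classifier multiplies the per-word likelihoods by the overall class probability.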
a. Predictors that are highly correlated with the outcome variable and with other predictors.
b. Predictors with no clear correlative relationship with either the outcome variable or with other predictors.
c. Predictors that are highly correlated with the outcome variable, but not with other predictors.
d. Predictors that are negatively correlated with the outcome variable and positively correlated with other predictors.

a. 9.271
b. 3.794
c. -3.631
d. 4.158

a. GOLF and HOTEL.
b. INDIA and JULIET.
c. HOTEL and INDIA.
d. KILO and JULIET.

a. 0.398
b. 0.781
c. 0.602
d. 0.219

a. You can bin those values into different groups, and then treat each group as a factor.
b. You need to rebuild the model, with more emphasis placed on the variables.
c. The best thing to do here is to reframe the data as a table.
d. Nothing can be done here; you need to use a different algorithm.

a. 21
b. 28
c. 42
d. 7

The Music Genome Project is based on what type of model?
a. Association rules.
b. Collaborative-based filtering.
c. Content-based filtering.
d. Exhaustive search.

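The Music Genome Project is a content-based approach: each song is described by a vector of musical attributes, and recommendations come from similarity between song attributes rather than from other users' listening behavior. A minimal sketch, with invented song names and attribute values:

```python
from math import sqrt

# Hypothetical song "genome" vectors: each dimension is a musical
# attribute (e.g. tempo, vocal style), scored 0-1. Values are invented.
songs = {
    "Song A": [0.9, 0.2, 0.7],
    "Song B": [0.8, 0.3, 0.6],
    "Song C": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    # Cosine similarity between two attribute vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

liked = songs["Song A"]
# Recommend the other song whose attributes most resemble the liked one.
best = max((s for s in songs if s != "Song A"),
           key=lambda s: cosine(liked, songs[s]))
print(best)  # Song B: its attribute profile is closest to Song A's
```

A collaborative filter, by contrast, would ignore the attribute vectors entirely and recommend songs liked by users with similar listening histories.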
a. Only one pair (if the data has not been normalized first).
b. 16
c. 3
d. 4

a. 1.492
b. -0.492
c. 0.509
d. -1.492

a. 1.492
b. -0.492
c. 0.509
d. -1.492

a. Tree-based models can handle missing data well (i.e. without degrading the results of the model).
b. Tree-based models are especially good at identifying the relationships among predictors.
c. Tree-based models can handle the presence of outliers well (i.e. without degrading the results of the model).
d. In order to work well as classifiers, tree-based models require large training data sets.

a. Stepwise regression.
b. Forward selection.
c. Ordinary least squares.
d. Backward elimination.

TRUE or FALSE: A Hamming distance can be negative.
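The answer is FALSE: the Hamming distance counts the positions at which two equal-length strings differ, and a count cannot be negative. A minimal sketch:

```python
def hamming(a, b):
    # Count of positions where the two strings differ. A count is
    # always >= 0, so the Hamming distance can never be negative.
    assert len(a) == len(b), "Hamming distance requires equal lengths"
    return sum(c1 != c2 for c1, c2 in zip(a, b))

print(hamming("10110", "10011"))  # 2 -- differs at positions 2 and 4
```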