Naive Bayes
Pros: works with a small amount of data; handles multiple classes.
Cons: sensitive to how the input data is prepared.
Works with: nominal values.
Question:
Select the correct statement that applies to Bayes' rule:
1. Bayesian probability and Bayes' rule give us a way to estimate unknown probabilities from known values.
2. You can reduce the need for a lot of data by assuming conditional independence among the features in your data.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Only 1 and 2
5. All of 1, 2, and 3 are correct
Using probabilities can sometimes be more effective than using hard rules for classification. Bayesian probability and Bayes' rule give us a way to estimate unknown probabilities from known values. You can reduce the need for a lot of data by assuming conditional independence among the features in your data. The assumption we make is that the probability of one word doesn't depend on any other words in the document. We know this assumption is a little simplistic; that is why the method is known as naive Bayes. Despite its incorrect assumptions, naive Bayes is effective at classification.

Bayes' theorem finds the actual probability of an event from the results of your tests. For example, you can:
- Correct for measurement errors. If you know the real probabilities and the chance of a false positive and false negative, you can correct for measurement errors.
- Relate the actual probability to the measured test probability. Bayes' theorem lets you relate Pr(A|X), the chance that event A happened given the indicator X, to Pr(X|A), the chance the indicator X happened given that event A occurred. Given mammogram test results and known error rates, you can predict the actual chance of having cancer.
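The mammogram example above can be made concrete with a short calculation. This is an illustrative sketch: the prevalence, sensitivity, and false-positive rate below are made-up numbers, not real clinical figures.

```python
# Bayes' theorem: Pr(A|X) = Pr(X|A) * Pr(A) / Pr(X)
# Hypothetical rates (for illustration only, not clinical data):
p_cancer = 0.01              # prior probability of cancer, Pr(A)
p_pos_given_cancer = 0.80    # test sensitivity, Pr(X|A)
p_pos_given_healthy = 0.096  # false-positive rate, Pr(X|not A)

# Total probability of a positive test, Pr(X), by the law of total probability
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Posterior: actual chance of cancer given a positive test, Pr(A|X)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"Pr(cancer | positive) = {p_cancer_given_pos:.3f}")  # about 0.078
```

Even with a fairly accurate test, the posterior probability stays low because the disease is rare, which is exactly the kind of correction for measured test probabilities the explanation describes.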
Explanation: One approach to the design of recommender systems that has seen wide use is collaborative filtering. Collaborative filtering methods are based on collecting and analyzing a large amount of information on users' behaviors, activities, or preferences, and predicting what users will like based on their similarity to other users. A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content, and it is therefore capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used to measure user similarity or item similarity in recommender systems; for example, the k-nearest neighbor (k-NN) approach and the Pearson correlation coefficient.
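The Pearson-correlation user similarity mentioned in the explanation can be sketched in a few lines. This is a minimal illustration; the user names and movie ratings below are invented for the example.

```python
import math

def pearson_similarity(ratings_a, ratings_b):
    """Pearson correlation between two users over items both have rated."""
    common = sorted(set(ratings_a) & set(ratings_b))
    if len(common) < 2:
        return 0.0  # not enough overlap to compute a correlation
    a = [ratings_a[item] for item in common]
    b = [ratings_b[item] for item in common]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    if var_a == 0 or var_b == 0:
        return 0.0  # a constant rater has undefined correlation
    return cov / math.sqrt(var_a * var_b)

# Hypothetical ratings (1-5 stars) for two users:
alice = {"movie1": 5, "movie2": 3, "movie3": 4}
bob = {"movie1": 4, "movie2": 2, "movie3": 5}
print(pearson_similarity(alice, bob))  # about 0.65
```

Scores near +1 mean the two users rate items similarly (good neighbors for k-NN recommendation), near 0 means no linear relationship, and near -1 means opposite tastes.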