Understand Naive Bayes Algorithm | NB Classifier

Photo by Google DeepMind on Unsplash

This year, my resolution is to go back to the basics of data science. I work with data every day, but it's easy to forget how some of the core algorithms function when you're completing repetitive tasks. I'm aiming to do a deep dive into a data science algorithm each week here on Towards Data Science. This week, I'm going to cover Naive Bayes.

Just to get this out of the way, you can learn how to pronounce Naive Bayes here.

Now that we know how to say it, let's look at what it means.

This probabilistic classifier is based on Bayes' theorem, which can be summarized as follows:

The conditional probability of event A, given that event B has already occurred, is the probability of B given A multiplied by the probability of A, divided by the probability of B.

P(A|B) = P(B|A)P(A) / P(B)
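To see the formula in action, here is a minimal Python sketch. The function name and the probability values are illustrative assumptions, not part of the original example.

```python
def bayes_theorem(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Return P(A|B) computed as P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative numbers only (assumed for this sketch):
# P(B|A) = 0.8, P(A) = 0.1, P(B) = 0.25
print(bayes_theorem(p_b_given_a=0.8, p_a=0.1, p_b=0.25))  # 0.32
```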

A common misconception is that Bayes' theorem and conditional probability are synonymous.

However, there is a distinction: Bayes' theorem uses the definition of conditional probability to find what is known as the reverse probability, or the inverse probability.

Said another way, the conditional probability is the probability of A given B. Bayes' theorem takes that and finds the probability of B given A.

A notable feature of the Naive Bayes algorithm is its use of sequential events. Put simply, the initial probability is adjusted as additional information is acquired. We call these the prior (or marginal) probability and the posterior probability. The main takeaway is that knowing the outcome of another condition changes the initial probability.

A good example of this is medical testing. For example, if a patient is dealing with gastrointestinal issues, the doctor might suspect Inflammatory Bowel Disease (IBD). The initial probability of having this condition is about 1.3%.
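To make the update concrete, here is a rough sketch of how the posterior would be computed. The 1.3% prior comes from the figure above, but the test's sensitivity and false-positive rate are assumed values chosen purely for illustration, not real clinical figures.

```python
# Prior probability of IBD (from the text): about 1.3%
p_ibd = 0.013

# Assumed test characteristics -- illustrative only, not clinical data:
sensitivity = 0.90            # P(positive test | IBD)
false_positive_rate = 0.10    # P(positive test | no IBD)

# Overall probability of a positive test (law of total probability)
p_positive = sensitivity * p_ibd + false_positive_rate * (1 - p_ibd)

# Posterior probability via Bayes' theorem: P(IBD | positive test)
p_ibd_given_positive = sensitivity * p_ibd / p_positive
print(f"P(IBD | positive test) ≈ {p_ibd_given_positive:.1%}")  # roughly 10.6%
```

Even with these made-up test numbers, the prior/posterior distinction is visible: a positive result moves the probability from 1.3% to roughly 10%, not all the way to 90%.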

