How Data Mining Works: A Guide | Tableau

Data mining is the process of understanding data by cleaning raw data, finding patterns, creating models, and testing those models. It draws on statistics, machine learning, and database systems. Because data mining often spans multiple data projects, it's easy to confuse it with analytics, data governance, and other data processes. This guide will define data mining, share its benefits and challenges, and review how data mining works.

Data mining has a long history; it emerged alongside computing between the 1960s and the 1980s. Historically, data mining was an intensive manual coding process, and it still requires coding ability and knowledgeable specialists to clean, process, and interpret results today. Data specialists need statistical knowledge and some programming-language knowledge to apply data mining techniques accurately; for instance, companies have used R to answer their data questions. However, some of the manual work can now be automated with repeatable flows, machine learning (ML), and artificial intelligence (AI) systems.

As discussed, data mining may be confused with other data projects. The data mining process includes projects such as data cleaning and exploratory analysis, but it is not just those practices. Data mining specialists clean and prepare the data, create models, test those models against hypotheses, and publish those models for analytics or business intelligence projects. In other words, analytics and data cleaning are parts of data mining, but they are only parts of the whole.

Data mining is most effective when deployed strategically to serve a business goal, answer business or research questions, or contribute to solving a problem. Data mining helps make accurate predictions, recognize patterns and outliers, and often informs forecasting. Further, data mining helps organizations identify gaps and errors in processes, like bottlenecks in supply chains or improper data entry.

The first step in data mining is almost always data collection. Today's organizations can collect records, logs, website visitor data, application data, sales data, and more every day. Collecting and mapping data is a good first step in understanding what can be done with, and asked of, the data in question. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is an excellent guideline for starting the data mining process. This standard was created decades ago and is still a popular paradigm for organizations that are just starting out.
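As a minimal illustration of this collection-and-mapping step, the sketch below joins two hypothetical CSV exports on a shared customer_id key. The file names and columns are invented for the example, and pandas is an assumption, not something this guide prescribes:

```python
import pandas as pd

# Hypothetical exports; file names and columns are invented for the example.
sales = pd.read_csv("sales.csv")        # e.g., order_id, customer_id, amount
visits = pd.read_csv("web_visits.csv")  # e.g., customer_id, page, timestamp

# Map the sources together on their shared key, then inspect what columns
# (and therefore what questions) the combined data can actually support.
combined = pd.merge(sales, visits, on="customer_id", how="left")
print(combined.columns.tolist())
combined.to_csv("combined.csv", index=False)  # saved for the later phases
```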

CRISP-DM comprises a six-phase workflow. It was designed to be flexible; data teams are allowed, and encouraged, to move back to a previous stage when needed. The model also leaves room for software platforms that help perform or augment some of these tasks.

Comprehensive data mining projects start with business understanding: identifying the project's objectives and scope. Business stakeholders ask a question or state a problem that data mining can answer or solve.

Once the business problem is understood, the next phase is data understanding: collecting the data relevant to the question and getting a feel for the data set. This data often comes from multiple sources, including structured and unstructured data. This stage may include some exploratory analysis to uncover preliminary patterns. By the end of this phase, the data mining team has selected the subset of data for analysis and modeling.
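A quick exploratory pass over the hypothetical combined export from the earlier sketch might look like the following; the specific checks shown are illustrative defaults, not a prescribed workflow:

```python
import pandas as pd

df = pd.read_csv("combined.csv")  # the combined export from the sketch above

print(df.head())                          # preview the raw records
print(df.isna().mean().sort_values())     # share of missing values per column
print(df.describe())                      # distributions of numeric fields
print(df.select_dtypes("number").corr())  # preliminary pattern: correlations
```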

The data preparation phase begins the more intensive work: assembling the final data set, which includes all the relevant data needed to answer the business question. Stakeholders identify the dimensions and variables to explore, and the team prepares the final data set for model creation.
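Continuing the illustration, preparing the final data set might look like this sketch. The column names (age, education_level, occupation_level) are assumptions chosen to match the classification example later in this guide:

```python
import pandas as pd

df = pd.read_csv("combined.csv")

features = ["age", "education_level"]  # dimensions chosen with stakeholders
target = "occupation_level"            # the variable the model should predict

# Keep only the relevant columns, drop incomplete rows, and encode the
# categorical field numerically so modeling tools can consume it.
final = df[features + [target]].dropna()
final = pd.get_dummies(final, columns=["education_level"])
final.to_csv("model_ready.csv", index=False)
```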

In the modeling phase, you'll select the appropriate modeling techniques for the given data. These techniques can include clustering, predictive models, classification, estimation, or a combination. Front Health, for example, used statistical modeling and predictive analytics to decide whether to expand healthcare programs to other populations. You may have to return to the data preparation phase if the technique you select requires other variables or differently prepared sources.
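As one hedged example of this phase, the sketch below fits a single candidate model on the prepared table. scikit-learn and the random-forest choice are assumptions made for illustration, not the techniques the guide or Front Health prescribes:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

final = pd.read_csv("model_ready.csv")
X = final.drop(columns="occupation_level")
y = final["occupation_level"]

# Hold out a test split now so the evaluation phase has unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
```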

After creating the models, evaluation begins: test them and measure how well they answer the question identified in the first phase. The model may surface facets the original question did not account for, and you may need to revise the model or the question itself. This phase is a checkpoint to review the progress so far and confirm the project is on track to meet the business goals; if it's not, you may need to move back to previous steps before the project is ready for the deployment phase.
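Continuing the sketch, measuring the model's success might look like the following; the metrics shown are common defaults rather than the only valid choices:

```python
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import cross_val_score

# Score the held-out split from the modeling sketch; a poor result here
# is the signal to loop back to data preparation or modeling.
predictions = model.predict(X_test)
print(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
print(cross_val_score(model, X, y, cv=5).mean())  # a more robust estimate
```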

Finally, once the model is accurate and reliable, it is time to deploy it in the real world. Deployment can take place within the organization, be shared with customers, or be used to generate a report that proves the model's reliability to stakeholders. The work doesn't end when the last line of code is complete; deployment requires careful thought, a rollout plan, and a way to make sure the right people are appropriately informed. The data mining team is responsible for the audience's understanding of the project.
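One simple deployment path, continuing the sketch above, is to persist the trained model so another application or reporting job can reload it and score new records. joblib and the file names here are assumptions for illustration:

```python
import joblib
import pandas as pd

joblib.dump(model, "occupation_model.joblib")  # persist the trained model

# Later, in the consuming application or reporting job:
loaded = joblib.load("occupation_model.joblib")
# Hypothetical input file; it must contain the same feature columns
# the model was trained on.
new_records = pd.read_csv("new_applicants.csv")
new_records["predicted_level"] = loaded.predict(new_records)
```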

Data mining includes multiple techniques for answering a business question or helping solve a problem. This section introduces two of those techniques and is not meant to be comprehensive.

The most common technique is classification. To use it, identify a target variable and divide that variable into categories at an appropriate level of detail. For example, an occupation-level variable might be split into entry-level, associate, and senior. Using other fields such as age and education level, you can train a model to predict which occupation level a person is most likely to hold. If you add an entry for a recent 22-year-old graduate, the model could automatically classify that person as entry-level. Insurance and financial institutions such as PEMCO Insurance have used classification to train their algorithms to flag fraud and monitor claims.
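Here is a toy version of that occupation-level example. The training data is invented and far too small for real use, but it shows the mechanics of training a classifier and classifying a new record:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Invented training data: purely illustrative, not a real data set.
train = pd.DataFrame({
    "age":       [22, 23, 26, 30, 34, 41, 48, 55],
    "education": [16, 16, 18, 16, 18, 18, 20, 18],  # years of schooling
    "level":     ["entry-level", "entry-level", "associate", "associate",
                  "senior", "senior", "senior", "senior"],
})

clf = DecisionTreeClassifier(random_state=0)
clf.fit(train[["age", "education"]], train["level"])

# A recent 22-year-old graduate with 16 years of schooling:
new_person = pd.DataFrame({"age": [22], "education": [16]})
print(clf.predict(new_person))  # expected: ['entry-level'] on this toy data
```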

Clustering is another common technique: grouping records, observations, or cases by similarity. Unlike classification, there is no target variable; clustering simply separates the data set into subgroups, such as grouping user records by geographic area or age group. Clustering is typically preparation for further analysis: the subgroups become inputs for a different technique.
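A minimal clustering sketch, with invented user records, might look like this; the cluster labels then become an input column for a downstream technique:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Invented user records; in practice these come from the prepared data set.
users = pd.DataFrame({
    "age":             [19, 22, 24, 35, 38, 41, 58, 61, 64],
    "visits_per_week": [9,  7,  8,  3,  4,  2,  1,  2,  1],
})

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(users)
users["subgroup"] = kmeans.labels_  # the cluster label becomes a new input
print(users.groupby("subgroup").mean())  # profile each subgroup
```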
