A Tools-Based Approach
To
Teaching Data Mining Methods
Musa Jafar
mjafar@mail.wtamu.edu
CIS Department, West Texas A&M University
Canyon, TX 79018
Russell Anderson
russellkanderson@gmail.com
Abstract
In this paper, we describe how we used Microsoft Excel's data mining add-ins and cloud computing components to teach our senior data mining class. The tools are part of the larger set we used within SQL Server Business Intelligence Development Studio. We demonstrate how these tools support a course in data mining methods focused on elementary data analysis, data mining algorithms, and the use of those algorithms to analyze data in support of decision-making and business intelligence. The tools allow faculty to concentrate on the analytical aspects of the algorithms, on data mining analysis, and on practical hands-on homework assignments and projects. They allow students to gain both a conceptual understanding of data mining and hands-on experience analyzing data with data mining tools for decision support, without writing large amounts of code to implement the algorithms. We also argue that without such tools it would be impossible for faculty to provide comprehensive coverage of the topic in a first course in data mining methods. The availability of such tools transforms the role of the student from a programmer of data mining algorithms into a business intelligence analyst who understands the algorithms and uses tools that implement them to analyze data for the purpose of decision support.
Keywords: Data mining, Decision Support, Business Intelligence, Excel Data mining Add-ins, Cloud Computing.
INTRODUCTION
Computer Science and Information Systems programs have been aggressively introducing data mining methods courses into their curricula (Lenox & Cuff, 2002; Goharian, Grossman, & Raju, 2004; Saquer, 2007). Jafar, Anderson, and Abdullat (2008) outlined course content for data mining that is consistent with Lenox and Cuff (2002). Computer Science programs have focused on the "deep understanding of data mining, instead of simply using tools" (Goharian, Grossman, & Raju, 2004; Musicant, 2006; Rahal, 2008). They emphasize the algorithmic aspects of data mining and the efficient implementation of the algorithms, and they require advanced programming and data structures knowledge as prerequisites (Musicant, 2006; Rahal, 2008). For a Bachelor of Business Administration (BBA) in Information Systems, however, we see data analysis and the business intelligence aspects of data mining as the focus. Students learn the theoretical concepts and use data sets and tools that implement data mining algorithms to analyze data. We require a first programming course, a database management course, and a statistical data analysis course as prerequisites. A deep understanding of the algorithms, their implementation, and the efficiency of that implementation is more appropriate for a computer science program. For BBA students, a data-centric, algorithm-understanding, process-automation approach to data mining similar to that of Campos, Stengard, and Milenova (2005) is more appropriate.
For the tools, we chose Microsoft Excel with its data mining add-ins as the front end and cloud computing and SQL Server 2008 as the back end. Microsoft Excel is pervasive: it is available on almost every college desktop, and with its data presentation capabilities, charting, functions, and macro support, it is a natural front-end environment for data analysis. It provided us with a self-contained computing environment with back-end support (add-ins, cloud computing, and SQL Server connectivity). Others (Tang, 2008) have chosen XMLMiner with spreadsheet support for pre- and post-data-mining analysis. Oracle and IBM provide similar capabilities. However, Microsoft Excel with its data mining add-ins is more pervasive on academic and individual desktops (no additional installation is required).
In the early 2000s, data mining tools and technologies were hard to learn, acquire, and teach. In the past three years, however, these technologies have become available to universities at minimal cost through a combination of commercial and open-source academic initiatives; Jafar, Anderson, and Abdullat (2008) outline the various commercial initiatives and their coverage. In summary, data mining theory has been streamlined, its computing technology has matured, and the tools are available free or at minimal cost to academic programs. Information Systems programs can approach a data mining course the way they approach a database design course. In a database design course, we teach the theory of database design, which may include relational algebra and relational calculus, and includes entity-relationship modeling, normalization, transaction management, the relational model, and indexing. We use tools such as Visio Enterprise, IBM Rational, ERwin, or MySQL Workbench for modeling, and a relational engine such as Oracle, SQL Server, IBM DB2, or MySQL for the hands-on component.
BACKGROUND
Data mining for the purpose of decision support is not the process of defining, designing, and developing efficient algorithms and their implementations. It is the process of (1) consolidating large data sets into a minable data set, (2) using the minable data set to train model-building algorithms that generate analysis and prediction mining models, (3) validating the capabilities of the mining models, and then (4) using the mining models for the purpose of decision support. In summary, data mining is the process of discovering useful and previously unknown information and relationships in large data sets (Campos, Stengard, & Milenova, 2005; Tan, Steinbach, & Kumar, 2006).
Usually a data set is divided into a training data set and a testing (holdout) data set. The training data set is used to build the mining structure and its associated mining models. The testing data set is used to test the accuracy of the mining models. If a model is valid and its accuracy is acceptable, it is then used for prediction. Figure 1 of the appendix is a visualization of the data mining process. It is worth mentioning that in the past our students had to write their own macros to randomly split the data into training and testing sets. With the Excel data mining add-ins, however, the wizards let students perform this split automatically through configuration. That allows both faculty and students to focus on the data analysis task instead of writing random-number-generator macros to perform the splitting (it saved us a homework assignment).
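The random split the wizards perform can be sketched in a few lines of Python. The 70/30 ratio, the fixed seed, and the stand-in records below are illustrative choices, not the add-ins' actual implementation.

```python
import random

def split_dataset(records, train_fraction=0.7, seed=42):
    """Randomly partition records into a training set and a testing (holdout) set."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(records)    # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 150 stand-in records, mirroring the size of the Iris data set
data = list(range(150))
train, test = split_dataset(data)
print(len(train), len(test))  # 105 45
```

Every record lands in exactly one of the two sets, which is the property the holdout method depends on.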
In this paper we take a hands-on approach to data mining, with examples. Each section is composed of a theory component followed by a practice component. For the practice components, we use the Iris data set, the Mushrooms data set, and the Bikebuyers data set. The Iris and Mushrooms data sets are public domain data sets available from the UCI repository (University of California Irvine, 2009); we use these two for elementary data analysis, classification, and clustering analysis. The Bikebuyers data set is available from Microsoft Corporation in support of its Business Intelligence tools; we use it for market basket analysis (association analysis). The Iris data set attributes are quantitative; the Mushrooms data set attributes are qualitative.
The Iris data set is composed of 150 records of:
Iris(sepal-length, sepal-width, petal-length, petal-width, iris-type)
for a total of four attributes per record. The length and width attributes are in centimeters. The classification (iris-type) is Setosa, Versicolor, or Virginica.
The Mushrooms data set is composed of 8,124 records of:
Mushroom(capShape, capSurface,…, odor, ringType, habitat, gillSize, ….., classification)
for a total of 21 attributes per record. All attributes are qualitative, and a mushroom is classified as either Poisonous or Edible.
The Bikebuyers data set is composed of 121,300 records of:
BikeBuyer(SalesOrder, Quantity, ProductName, Model, Subcategory, Category).
For the rest of this paper we cover the basic topics of a standard data mining course: elementary data analysis and outlier detection, association rules (market basket analysis), classification algorithms, and cluster analysis. The topics, subtopics, and terminology used here can be found in standard data mining textbooks; the books of Han and Kamber (2006) and Tan, Steinbach, and Kumar (2006) are standard for a data mining methods course. In the next section we explore elementary data analysis, followed by association analysis, then classification, and then cluster analysis. The last section presents a summary and conclusions. All figures and tables are in the appendix.
ELEMENTARY DATA ANALYSIS
The Theory
The data mining process has been defined as an Extract, Transform and Load (ETL) process. Elementary data analysis is a step that precedes ETL and is the first basic step in data mining. It allows data miners to understand the intricacies of the data set. The data miner needs domain knowledge of the data set, knowledge of the characteristics of each attribute, and, where possible, of the dependencies between attributes. Data miners need to be able to perform elementary data analysis on the data set under consideration. They should be able to:
Classify the data type of each attribute (quantitative, qualitative, continuous, discrete, or binary) and its scale of measure (nominal, ordinal, interval or ratio).
Produce summary statistics for each quantitative attribute (mean, median, mode, min, max, quartiles).
Visualize the data through histograms, scatter plots, quartile plots and box plots.
Produce hierarchical data analyses through pivot tables and pivot charts.
Produce and analyze the various correlation matrices and key influencers of the attributes.
Finally, in preparation for data mining, the data may need to be relabeled, grouped or normalized.
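The summary statistics in the list above are easy to reproduce outside Excel as well. The sketch below uses Python's standard statistics module; the sepal-length values are hypothetical stand-ins, not actual Iris measurements.

```python
import statistics

def summarize(values):
    """Summary statistics of the kind the Excel analysis tools report."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    return {"mean": statistics.mean(values),
            "median": statistics.median(values),
            "min": min(values), "max": max(values),
            "q1": q1, "q3": q3}

# Hypothetical sepal-length values in centimeters
sepal_length = [5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0]
stats = summarize(sepal_length)
print(stats["median"])  # 4.95
```

A worksheet column pasted into a list is all the input this needs, which is why spreadsheet add-ins can produce Figure 2-style output through a wizard.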
The Practice
The hands-on practice of elementary data analysis is performed in Excel. Excel is a natural fit for this task: with its charting, sorting, table, and pivot table capabilities, most elementary data analysis tasks can be performed easily from within Excel (Tang, 2008). Excel and its tools are heavily used in analyzing data for the purpose of decision support. The data analysis add-in tools allow students to generate descriptive statistics, correlation matrices, histograms, percentiles, and scatter plots. Using wizards, a student can produce summary statistics similar to those in Figure 2, Figure 3, and Figure 4 of the appendix within minutes.
The table, pivot table and charting tools allow students to perform various hierarchical analyses and relabeling of the data. Figure 3 is a sample of pivot tables and pivot table charts that can be produced easily from within Excel. From the pivot charts accompanying the pivot tables, it is easy to see that all petal lengths and sepal lengths of the Setosa(s) are small; the Virginica(s) dominate the high end of the petal length and sepal lengths. Filters can also be added on top of the row and column contents to produce hierarchical representation of the data for the purpose of elementary data analysis.
Using the data mining add-ins, we can analyze the overall key influencers of the Iris classification, with the relative impact of each attribute value set (Figure 4), and perform pair-wise comparisons between the different classifications. The Key Influencers tool automatically breaks the range of a continuous attribute into intervals while determining the key influencers of the iris type. Based on an analytical model, the algorithm determined that a petal width < 0.4125 strongly favors the Setosa classification, a petal width in the range [0.4125, 1.33] strongly favors Versicolor, and a petal length >= 5.48 strongly favors Virginica. A student can use this visual analysis and presentation of the key influencers to build expert-system rules for a classification decision support system. The tool also allows students to perform pair-wise discrimination among the key influencers of the different classifications; the length of the bar charts to the right indicates the relative importance of each attribute range.
The data exploration tools allow users to interactively produce histograms and configure bucket counts. The data clean-up tools allow users to interactively produce line charts and specify ranges for outliers of numeric data. The data-sampling tools allow users to interactively divide the dataset into multiple random samples. The re-labeling tool allows users to interactively re-label data into ranges such as low, medium and high.
ASSOCIATION RULES (MARKET BASKET ANALYSIS)
Market basket analysis allows a retailer to understand the purchasing behavior of customers and predict products that customers may purchase together. It allows retailers to bundle products, offer promotions on products or suggest products that have not yet been added to the basket. Market basket analysis can also be used to analyze browsing behavior of students inside a course management system by modeling each visit as a market basket and the click stream of a student as a set of items inside a basket.
The theory
Through book chapters, lecture notes, and lectures, students learn the theoretical foundations and concepts of association analysis, including conditional probability and Bayesian statistics. They learn item sets, item-set support, frequent item sets, and closed frequent item sets; association rules and how to calculate rule support, confidence, strength, and importance; and correlation analysis and the lift of association rules. They also learn the a priori and general algorithms for generating frequent item sets from a market basket set and for generating association rules from frequent item sets. Faculty may design exam questions and problem-solving homework assignments to emphasize these concepts. For example, given a small market basket set and a set of thresholds, students should be able to manually apply association analysis algorithms (a priori and confidence-based pruning) to generate the pruned item sets, detect closed item sets, generate rules, and calculate the support, confidence, and importance of each rule, as shown in the activity diagram in Figure 5. The two textbooks we have used for the past three years (Han & Kamber, 2006; Tan, Steinbach, & Kumar, 2006) tend to write algorithms in complex English-like structures with heavy mathematical notation. It helps when faculty visualize an algorithm by flowcharting it as an activity diagram and then use an example to demonstrate the algorithm in action. Figure 5 is an activity diagram of the a priori algorithm for discovering frequent item sets (of size two or more), and Figure 6 is an example run of the algorithm on a set of purchases. In Figure 6 we start with six transactions and use the a priori algorithm to generate all frequent item sets with a minimum support threshold of 2, stopping when no item set meeting the minimum support can be generated.
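The generate-and-prune loop of Figure 5 can be sketched in Python as follows. The six-transaction basket set is illustrative (not the data of Figure 6), and the candidate-generation step omits the subset-pruning refinement of the full a priori algorithm.

```python
from itertools import chain

def apriori_frequent_itemsets(transactions, min_support=2):
    """Level-wise search: (k+1)-candidates are built only from frequent k-itemsets."""
    def frequent(candidates):
        return {c for c in candidates
                if sum(1 for t in transactions if c <= t) >= min_support}

    found = []
    current = frequent({frozenset([i]) for i in chain.from_iterable(transactions)})
    while current:
        found.extend(current)
        size = len(next(iter(current))) + 1
        # join step: union pairs of frequent itemsets into candidates one size larger
        current = frequent({a | b for a in current for b in current if len(a | b) == size})
    return found

baskets = [{"milk", "bread"}, {"milk", "eggs"}, {"bread", "eggs"},
           {"milk", "bread", "eggs"}, {"bread"}, {"milk", "bread"}]
itemsets = apriori_frequent_itemsets(baskets, min_support=2)
print(len(itemsets))  # 6: three frequent single items and three frequent pairs
```

On this basket set the triple {milk, bread, eggs} occurs only once, so the loop terminates after the pairs, exactly the stopping condition described above.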
The Practice
The Bikebuyers data set has 31,450 sales orders with 121,300 recorded items covering 266 different products that span 35 unique categories and 107 different models. Each record describes one item of a sales order (product name, quantity, model name, subcategory name, and category name). Figure 7 is a sample of the data set. According to this sample, sales order 43659 contains 12 different products: one Mountain-100 Black, 42; three Mountain-100 Black, 44; …; and four Sport-100 Helmet, Blue.
After learning the theory, students use this large data set to perform market basket analysis, focusing their time and effort on the analysis task and its decision support aspects. Students use wizards to build an association analysis mining structure based on the model names of the items sold. With the Excel data mining add-ins, and using wizards to configure the parameters, thresholds, and probabilities for item sets and association rules, students can run multiple association analysis scenarios and analyze the resulting item sets and rules. Figure 8 is the output of the market basket analysis. It is composed of three tabs: (1) the association rules that were predicted, (2) the frequent item sets that were computed (Figure 11), and (3) the dependency network between the items. The figure shows the output after executing the calibrated association analysis algorithm. The algorithm concluded that customers who bought an All-Purpose Bike Stand and an HL Road Tire also bought a Road Tire Tube with a probability of 0.941 and an importance of 1.080; that is, the association rule is:
All-Purpose Bike Stand & HL Road Tire --> Road Tire Tube (0.941, 1.08). For a rule A --> B, the importance is measured as log10(P(B|A) / P(B|~A)).
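Support, confidence, and the importance measure just defined can be computed directly from a basket set. The sketch below assumes base-10 logarithms for the importance score and uses a small hypothetical basket set; it does not guard against rules whose antecedent never or always occurs.

```python
import math

def rule_metrics(transactions, antecedent, consequent):
    """Return (support, confidence, importance) for the rule antecedent -> consequent."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    ab = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    b_not_a = sum(1 for t in transactions if not antecedent <= t and consequent <= t)
    confidence = ab / a                                        # P(B | A)
    importance = math.log10(confidence / (b_not_a / (n - a)))  # log P(B|A)/P(B|~A)
    return ab / n, confidence, importance

baskets = [{"stand", "tire", "tube"}, {"stand", "tire", "tube"},
           {"stand", "tire", "tube"}, {"stand", "tire"},
           {"tire", "tube"}, {"tube"}, {"tire"}, {"pump"}]
support, confidence, importance = rule_metrics(baskets, {"stand", "tire"}, {"tube"})
print(round(confidence, 3), round(importance, 3))  # 0.75 0.176
```

A positive importance, as here, means the antecedent raises the probability of the consequent relative to baskets without the antecedent.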
Figure 9 is an Excel export of the association rules of Figure 8 for further analysis. The user interface allows students to select a rule and drill through to the record cases associated with it; Figure 10 is a drill-through of the top rule of Figure 8. Figure 11 is an Excel export of the item-sets tab of Figure 8; note that the bar charts from the data mining add-ins are exported as conditional-formatting data bars in Excel. Students can also explore the generated item sets and the strength of the dependencies between items, and they store their results in worksheets for further data analysis.
CLASSIFICATION ANALYSIS AND PREDICTION
This is the most elaborate part of a course in data mining. Generally speaking, “classification is the task of assigning objects to one of several predefined categories”. Formally, “classification is the task of learning a target function that maps each attribute set X to one of the predefined class labels Y” (Tan, Steinbach, & Kumar, 2006). In an introductory course in data mining, students usually learn decision trees, Naïve Bayes, Neural Networks and Logistic regressions models.
The classification process is a four-step process: (1) select the classification algorithm and specify its parameters; (2) feed a training data set to the algorithm to learn a classification model; (3) feed a test data set to the learned model to measure its accuracy; (4) use the learned model to predict previously unknown classes. In most cases, model fitting is an iterative process: algorithm parameters are calibrated and fine-tuned until a satisfactory mining model is found. Using confusion matrices and lift charts, students compare the performance of the various mining algorithms and models to select an appropriate algorithm and associated model.
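Step (3), measuring accuracy on the test set, comes down to tallying a confusion matrix. A minimal sketch, with hypothetical holdout-set results for a mushroom classifier:

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """counts[(a, p)] = number of test cases of actual class a predicted as p."""
    counts = Counter(zip(actual, predicted))
    return {(a, p): counts[(a, p)] for a in labels for p in labels}

actual    = ["edible", "edible", "poisonous", "poisonous", "edible", "poisonous"]
predicted = ["edible", "poisonous", "poisonous", "poisonous", "edible", "edible"]
cm = confusion_matrix(actual, predicted, ["edible", "poisonous"])
accuracy = sum(cm[(c, c)] for c in ["edible", "poisonous"]) / len(actual)
print(cm[("edible", "edible")], round(accuracy, 3))  # 2 0.667
```

The diagonal cells hold the correct predictions; running the same tally for each of the four models gives the numbers the comparison wizards chart.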
The Theory
Through book chapters, lecture notes and lectures, students learn the theoretical foundation and concepts of classification analysis. Usually, classification analysis is divided into four areas:
Decision tree algorithms, where students learn information gain concepts, best-attribute selection for a tree split, entropy, the Gini index, and classification error measures.
Naïve Bayes algorithms where students learn conditional, prior and posterior probability, independence and correlation between attributes.
Neural network algorithms, where students learn the "simple" concepts of back propagation, nodes, and layers (the details of how neural networks work and the theory behind them are beyond the scope of such a course).
Logistic regression, where students learn the difference between standard linear regression and logistic regression: the classification is qualitative (usually a binomial outcome), and the algorithm predicts the probability of one of the class values rather than a value on a numerical continuum.
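The node-impurity measures in the first item above can be stated in a few lines. The sketch below computes entropy and the Gini index from the class counts at a node.

```python
import math

def entropy(class_counts):
    """Entropy of a node: -sum of p_i * log2(p_i) over the class proportions."""
    total = sum(class_counts)
    return -sum((c / total) * math.log2(c / total) for c in class_counts if c)

def gini(class_counts):
    """Gini index of a node: 1 - sum of p_i squared."""
    total = sum(class_counts)
    return 1 - sum((c / total) ** 2 for c in class_counts)

# A 50/50 node is maximally impure; a pure node scores 0 on both measures
print(entropy([5, 5]), gini([5, 5]))  # 1.0 0.5
print(entropy([10, 0]) == 0, gini([10, 0]) == 0)
```

A split's information gain is the parent's entropy minus the weighted entropy of its children, which is the quantity students compute by hand in the homework assignments.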
The Practice
For the practice, we use the Mushrooms data set, a public domain data set from the UCI repository that is used to predict whether a mushroom is poisonous or edible. Figure 12 is a partial sample of the data set.
First, students perform elementary data analysis on the data set to (1) determine the characteristics of the attributes and their data ranges and (2) produce elementary classifications, histograms, and groupings using pivot tables and pivot charts. Figure 13 is a pivot chart detailing the distribution of mushroom classifications broken down by attribute. With pivot tables and charts, students can build numerous hierarchical histograms to understand the characteristics of the data.
For the purpose of classification analysis, we build a mining structure and four mining models (a decision tree model, a naïve Bayes model, a neural network model, and a logistic regression model). We then compare the performance of these models using lift charts and a classification matrix.
The Mining Structure: Students use the wizards of the data mining tools to create and configure a mining structure. This involves the inclusion and exclusion of attributes and the configuration of the characteristics of each attribute (key, data type, and content type), as well as the percentage split of the data into training and testing sets.
Associated Decision Tree Mining Model: Students configure the parameters (support, information-gain scoring method, the type of tree split, etc.), and a decision tree mining model is generated with drill-through, legends, and display capabilities for each node. Figure 14 is an example of a decision tree; students can drill through each node to the underlying records that support it. For example, the model classified the deep bottom branch of the tree as follows:
If odor = 'none' &
   sporePrintColor = 'white' &
   ringNumber not = 2 &
   stalkSurfaceBelowRing not = 'scaly'
Then Prob(mushroom is edible) = 85.6% &
     Prob(mushroom is poisonous) = 14.4%
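A branch extracted this way translates directly into code. The sketch below is a literal transcription of the rule above; the attribute names and probabilities come from the figure, and any record the branch does not cover returns None.

```python
def branch_probabilities(mushroom):
    """Apply the quoted decision-tree branch; return (P(edible), P(poisonous))
    if the branch covers the record, None otherwise."""
    if (mushroom["odor"] == "none"
            and mushroom["sporePrintColor"] == "white"
            and mushroom["ringNumber"] != 2
            and mushroom["stalkSurfaceBelowRing"] != "scaly"):
        return (0.856, 0.144)
    return None

case = {"odor": "none", "sporePrintColor": "white",
        "ringNumber": 1, "stalkSurfaceBelowRing": "smooth"}
print(branch_probabilities(case))  # (0.856, 0.144)
```

This is the sense in which a learned tree doubles as a set of expert-system rules for a decision support system.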
Associated Naïve Bayes, Logistic Regression and Neural Network Models: Similarly, students configure parameters and use wizards to build naïve Bayes, logistic regression, and neural network classification models. The models display discrimination tables that show each attribute value, the classification it favors, and a bar chart as a measure of support. Figure 15 is the naïve Bayes output for the same mining structure as the decision tree; it displays the attributes, their values, and the level of contribution of each value to the favored classification. Similarly, Figure 16 is the logistic regression model output and Figure 17 is the neural network model output.
Associated Model Validation: In this section, we demonstrate the simplicity of validating the four classification models. Students use wizards to build the accuracy charts (Figure 18) for each model. The straight line from the origin (0, 0) to (100%, 100%) is the random predictor. The broken line from (0, 0) to (49%, 100%) to (100%, 100%) is the ideal model predictor, which correctly predicts every classification; the ideal line implies that 49% of the mushrooms in the testing data set are poisonous. The other curves are the decision tree (close to the ideal line), the neural network, the logistic regression, and the naïve Bayes predictors.
Analyzing the chart, the decision tree outperforms the rest of the models, while the naïve Bayes model performs worse than the other three. Students then use their analytical skills to compare the models relative to the ideal and random models.
CLUSTER ANALYSIS AND CATEGORY DETECTION
The theory
Finally, students learn how to perform cluster analysis of data, a form of unsupervised classification. Through chapters, lectures, class presentations, and "paper and pencil" homework assignments, students learn the concepts of distance and weighted distance between objects, and of similarity and weighted similarity measures between data types (nominal, ordinal, interval, and ratio) and data entities. Measures such as the Lk norms (k = 1, 2, and ∞), the simple matching coefficient, cosine similarity, the Jaccard coefficient, and correlation are covered. Students learn center-based clustering algorithms such as K-means and bisecting K-means, as well as density-based clustering algorithms such as DBSCAN.
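Three of the similarity measures named above can be sketched as follows for binary attribute vectors (the cosine sketch also works for general numeric vectors); the example vectors are illustrative.

```python
import math

def smc(a, b):
    """Simple matching coefficient: fraction of positions where the vectors agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def jaccard(a, b):
    """Jaccard coefficient: 1-1 matches over positions that are not both 0."""
    m11 = sum(x == 1 and y == 1 for x, y in zip(a, b))
    non00 = sum(not (x == 0 and y == 0) for x, y in zip(a, b))
    return m11 / non00

def cosine(a, b):
    """Cosine similarity: dot product over the product of the vector lengths."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

u, v = (1, 0, 1, 1), (1, 1, 0, 1)
print(smc(u, v), jaccard(u, v), round(cosine(u, v), 4))  # 0.5 0.5 0.6667
```

For these vectors SMC and Jaccard happen to coincide; they diverge on sparse data, where ignoring 0-0 matches is precisely the point of the Jaccard coefficient.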
The Practice
For the practice we use the Iris and Mushrooms data sets. The Iris data set provides all-numeric, ratio-scale measures; the Mushrooms data set provides all-qualitative, categorical measures. Using wizards, students configure the attributes of interest, the maximum number of clusters, the split method, the clustering algorithm to use, the minimum cluster size, etc. Figure 19 is the output of a clustering run; the characteristics of each category are displayed.
Students also learn how to perform classification through hierarchical clustering of data. For example, from Figure 20 students can see that categories two and three cluster well around the Setosa and Versicolor classifications, while category one contains a mix of Versicolor (14 records) and Virginica (50 records). Students then learn to filter out category one and cluster its records again to extract clearer separation criteria between the clusters. Since clustering is an unsupervised learning process, the detected categories carry no class labels; because we know the Iris data set has three distinct classes, we use pivot tables to demonstrate the accuracy of the algorithm. Category one can then be clustered again to refine it.
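The center-based clustering underlying these runs can be illustrated with a plain K-means sketch (assign each point to its nearest center, then recompute the centers as cluster means); the two-blob data and the fixed iteration count are illustrative simplifications.

```python
import math
import random

def kmeans(points, k, iterations=20, seed=1):
    """Plain K-means on tuples of coordinates."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assignment step
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):      # update step
            if cl:
                centers[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

On this toy data the algorithm recovers the two obvious blobs; the iterative re-clustering of a mixed category described above is the same procedure applied to a filtered subset of the records.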
Similarly, we performed (auto-detect) hierarchical clustering analysis on the Mushrooms data set. Nine categories (clusters) were detected. Mapping the clusters against the poisonous and edible classifications, categories 1, 2, 3, and 5 produce a perfect fit. Figure 21 shows the classification matrix of the first cluster-analysis iteration.
The accompanying histogram shows the characteristics of each cluster; the longer the bar, the stronger the influence of the corresponding attribute value. Keep in mind that categories 1 and 3 produce poisonous classifications and categories 2 and 5 produce edible classifications. The records of these categories (1, 2, 3, and 5) are filtered out and another clustering iteration is performed; two iterations later a perfect match is produced. In this way, students learn to perform hierarchical, iterative clustering of the data.
SUMMARY AND CONCLUSION
Data mining and data analysis for the purpose of decision support is a fast-growing area of computing. In the early 2000s, a data mining methods course was taught as a pure research topic in computer science. With the maturity of the discipline, the convergence of algorithms, and the availability of computing platforms, however, students can now learn data mining methods as a problem-solving discipline that strengthens their analytical skills. The theory has matured, standard textbooks have been published, and the accompanying technologies implement the same basic algorithms. Given the academic initiatives of companies like Oracle, IBM, and Microsoft, Information Systems programs are capable of providing a computing platform in support of data mining methods courses. We strongly recommend extending the Information Systems curriculum to include a data mining track of up to three courses.
Walstrom, Schambach, and Crampton (2008) provide an in-depth survey of 300 students enrolled in an introductory business course on their reasons for not choosing Information Systems as an area of specialization. We see a track in data mining methods as potentially enhancing the career opportunities of Information Systems students. It is a sustainable growth area that is natural to a BBA in Information Systems program. BBA in Information Systems students should be able to represent, consolidate, and analyze data using data mining tools to provide organizations with business intelligence for the purpose of decision support.
Finally, what we presented is not a course about Excel add-ins. It is as much a course about Excel as a database course is about Oracle, SQL Server, DB2, or MySQL, or a business statistics course is about SAS, SPSS, or R. Were it not for the underlying technologies we used, it would have been impossible to cover this material in a one-semester course and provide students with the much-needed hands-on experience in data mining. It is the intention of neither this paper nor the course to teach hands-on Excel; we teach the theory of data mining and the underlying algorithms.
REFERENCES
Campos, M. M., Stengard, P. J., & Milenova, B. L. (2005). Data-Centric Automated Data Mining. International Conference on Machine Learning and Application. IEEE Computer Society.
Goharian, N., Grossman, D., & Raju, N. (2004). Extending the Undergraduate Computer Science Curriculum to Include Data Mining. International Conference on Information Technology: Coding and Computing, 2.
Han, J., & Kamber, M. (2006). Data Mining Concepts and Techniques. Elsevier Inc.
Jafar, M. J., Anderson, R. R., & Abdullat, A. A. (2008). Data Mining Methods Course for Computer Information Systems Students. Information Systems Education Journal , 6(48).
Jafar, M. J., Anderson, R. R., & Abdullat, A. A. (2008). Software Academic Initiatives: A Framework for supporting a Contemporary Information Systems Academic Curriculum. Information Systems Education Journal , 6(55).
Lenox, T. L., & Cuff, C. (2002). Data Mining Methods Course for Computer Information Systems Students. Information Systems Education Conference.
Letsche. (2007). Service Learning Outcomes in an Undergraduate Data Mining Course. Midwest Instruction and Computing Symposium.
Musicant, D. R. (2006). A data mining course for computer science: primary sources and implementations. 37th SIGCSE Technical Symposium on Computer Science Education. ACM.
Rahal, I. (2008). Undergraduate research experiences in data mining. 39th SIGCSE technical symposium on Computer science. ACM.
Saquer, J. (2007). A data mining course for computer science and non-computer science students. Journal of Computing Sciences in Colleges, 22(4).
Tan, Steinbach, & Kumar. (2006). Introduction to Data Mining. Pearson Education Inc.
Tang, H. (2008). A Simple Approach of Data Mining in Excel. 4th International Conference on Wireless Communication, Networking and Mobile Computing, (pp. 1-4). Dalian.
University of California Irvine. (2009). UCI Machine Learning Repository. Retrieved from http://archive.ics.uci.edu/ml/.
Walstrom, K. A., Schambach, T. P., & Crampton, W. J. (2008). Why Are Students Not Majoring in Information Systems? Journal of Information Systems Education, 19(1), 43-52.
APPENDIX
Figure 1 High level flow of data mining activities
Figure 2 Descriptive Statistics and Correlation Matrix Results
Figure 3 A Screenshot of Pivot Tables and Charts Analysis
Figure 4 Key Influencers Analysis Results
Figure 5 Activity diagram of the Apriori Item set generation Algorithm
Figure 6 Apriori Algorithm implementation example
Figure 7 A Sample of the Data Set
Figure 8 A Screenshot of the Analysis
Figure 9 An Excel Export of the Association Rules
Figure 10 A drill through the top association rule showing 36 out of 100 cases
Figure 11 A Sample generated List of Item Sets with their support and size
Figure 12 Sample records of the Mushroom data set
Figure 13 Sample Attribute Profiles of the Data Set
Figure 14 A Decision Tree Classification of the Mushroom Data Set
Figure 15 Naïve Bayes: Attribute Discrimination of Class
Figure 16 Logistic Regression: Attribute Discrimination of class
Figure 17 Neural Network: Attribute discrimination of Class
Figure 18 Accuracy Chart of the 4 Classification Models for the Poisonous class
Figure 19 Category Characteristics of Iris Data
Figure 20 Accuracy Clustering Matrix
Figure 21 Mapping detected categories to classifications
Figure 22 Characteristics of the 3 clusters that produced perfect match