Abstract
This paper provides a comprehensive performance analysis of parametric and non-parametric machine learning classifiers, including a deep feed-forward multi-layer perceptron (MLP) network, on two variants of the improved Concept Vector Space (iCVS) model. In the first variant, a weighting scheme enhanced with the notion of concept importance is used to assess the weights of ontology concepts. Concept importance reflects how important a concept is within an ontology; it is computed automatically by converting the ontology into a graph and then applying one of the Markov-based algorithms. In the second variant of iCVS, the concepts provided by the ontology and their semantically related terms are used to construct concept vectors that represent a document in a semantic vector space. We conducted various experiments using a variety of machine learning classifiers on three different models of document representation. The first model is a baseline concept vector space (CVS) model that relies on an exact/partial match technique to represent a document in a vector space. The second and third models are iCVS models that employ, respectively, an enhanced concept weighting scheme for assessing concept weights (variant 1) and the acquisition of terms semantically related to ontology concepts for semantic document representation (variant 2). Additionally, seven different classifiers are compared across all three models using precision, recall, and F1 score. Results for multiple configurations of the deep learning architecture, obtained by varying the number of hidden layers and the number of nodes per layer, are compared to those obtained with conventional classifiers. The results show that classification performance depends strongly on the choice of classifier, and that Random Forest, Gradient Boosting, and the Multilayer Perceptron are among the classifiers that performed well across all three models.
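To make the concept-importance idea concrete, the following is a minimal sketch, not the authors' exact implementation: it assumes the ontology is converted into a directed graph of concept relations and uses PageRank as the Markov-based algorithm (the concept names, edges, and the enhanced_weight helper below are hypothetical illustrations).

import networkx as nx

# Hypothetical fragment of an ontology's concept-relation structure.
ontology_edges = [
    ("vehicle", "car"),
    ("vehicle", "truck"),
    ("car", "electric_car"),
    ("car", "sports_car"),
    ("truck", "pickup_truck"),
]

graph = nx.DiGraph(ontology_edges)

# Concept importance as the stationary distribution of a random walk
# over the ontology graph (PageRank).
importance = nx.pagerank(graph, alpha=0.85)

def enhanced_weight(concept, base_weight):
    """Illustrative enhanced weighting: scale a concept's base weight
    (e.g., a frequency-based weight) by its graph-derived importance."""
    return base_weight * importance.get(concept, 0.0)

for concept, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{concept}: importance={score:.3f}")

In this sketch, a concept that sits at a well-connected position in the ontology graph receives a higher importance score and therefore a larger weight in the concept vector; the actual weighting scheme used in iCVS is described in the body of the paper.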
Introduction
The global Internet population reached 3.8 billion in 2017, up from 3.4 billion the year before, which corresponds to 47% of the world's population [1]. According to IBM [2], the amount of data produced in 2013 was 2.5 quintillion bytes, when there were only around 2.7 billion Internet users. This number is expected to grow in the coming years, which means that the amount of data produced will be tremendous: by 2020, it is estimated that around 1.7 MB of data will be created every second for every person on earth. The penetration of the Internet of Things (IoT) and smart gadgets into households, and the huge amount of data produced every minute as a result, have created a need for better organization and structuring of the data, which according to [3] is mostly unstructured. Despite the computational resources available nowadays, organizing and structuring such a tremendous amount of data is not a trivial task, and without it, finding and extracting useful information from massive Internet resources is a challenge [4]. Nearly 3.87 million Google searches are conducted every minute of the day [1], and finding relevant information for every query from a plethora of resources is a challenging task. For text-based documents, ontologies can play a vital role in this regard [5].