One key limiting factor for automated text classification is memory consumption. As you accumulate more news articles, bills, and legal opinions, the term-document matrices used to represent the data grow quickly. RTextTools provides two algorithms, support vector machines and maximum entropy, that can handle large datasets with very little memory. Luckily, these two algorithms also tend to be the most accurate. However, some applications require an ensemble of more than two algorithms to score topic codes accurately.
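As a minimal sketch of a low-memory workflow, you can restrict training to just these two algorithms. The data frame, file name, and column names below are hypothetical; the RTextTools functions themselves are standard.

```r
library(RTextTools)

# Hypothetical dataset with a 'text' column and a 'topic' label column.
data <- read.csv("articles.csv", stringsAsFactors = FALSE)

doc_matrix <- create_matrix(data$text, language = "english")

container <- create_container(doc_matrix, data$topic,
                              trainSize = 1:800, testSize = 801:1000,
                              virgin = FALSE)

# Train and classify with only the two memory-efficient algorithms.
models  <- train_models(container, algorithms = c("SVM", "MAXENT"))
results <- classify_models(container, models)

analytics <- create_analytics(container, results)
summary(analytics)
```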
First, you can try reducing the number of terms in your matrix. The create_matrix() function provides many features that can help remove noise from your dataset. Beyond the defaults (removing stopwords, removing punctuation, converting words to lowercase, and stripping whitespace), there are some other helpful tools: you can set a minimum word length (e.g. minWordLength=5), select the N most frequent terms from each document (e.g. selectFreqTerms=25), set a minimum word frequency per document (e.g. minDocFreq=3), and remove the sparsest terms using a sparsity threshold (e.g. removeSparseTerms=0.9998).
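A sketch of what that call might look like, combining the options above; the input vector is hypothetical, and exact parameter availability can vary between RTextTools versions.

```r
library(RTextTools)

texts <- data$text   # hypothetical character vector of documents

doc_matrix <- create_matrix(texts,
                            language          = "english",
                            minWordLength     = 5,       # drop very short terms
                            selectFreqTerms   = 25,      # keep the 25 most frequent terms per document
                            minDocFreq        = 3,       # require a term to appear at least 3 times in a document
                            removeSparseTerms = 0.9998)  # drop the sparsest terms

dim(doc_matrix)  # check how much the matrix shrank
```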
These options can help you reduce the size of your document matrix, but they can also remove information that may be valuable to the learning algorithms. If you simply need more resources to run a huge dataset, and none of the above helps, you should look into setting up an Amazon EC2 instance with RStudio installed. We plan to create a simple way of doing this in the near future, but you'll have to brave the stormy waters for now. Be warned, this option is for experienced users only!