When we first wrote RTextTools, we opted to use RWeka for its boosting and bagging algorithms, for lack of a better alternative. We've since discovered that this leads to all sorts of ugly rJava installation issues across platforms and prevents our users from getting started quickly. Recently, we stumbled upon two excellent non-Java alternatives: LogitBoost in the caTools package, and the bagging implementation in the ipred package.
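For illustration, here is a rough sketch of how those two alternatives are called, shown on R's built-in iris dataset rather than on text data; this is not RTextTools code, just a look at the two APIs.

```r
# Sketch only: caTools and ipred on the built-in iris dataset.
library(caTools)  # provides LogitBoost()
library(ipred)    # provides bagging()

set.seed(42)
train_idx <- sample(nrow(iris), 100)
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

# caTools: LogitBoost takes a feature matrix and a label vector
lb_model <- LogitBoost(as.matrix(train[, 1:4]), train$Species, nIter = 20)
lb_pred  <- predict(lb_model, as.matrix(test[, 1:4]))  # predicted classes

# ipred: bagging uses the standard formula interface
bag_model <- bagging(Species ~ ., data = train, nbagg = 25)
bag_pred  <- predict(bag_model, newdata = test)
```

Neither call touches rJava, which is exactly why they make attractive replacements.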
Consequently, we've eliminated RWeka from the list of dependencies and significantly streamlined the installation process. Windows users now only need to deal with the Rtools installation; developer Loren Collingwood left a helpful comment regarding Rtools in a previous post.
After several weeks of trying to find the source of a bug in the maximum entropy library when compiling on Windows, Dirk Eddelbuettel pointed me in the right direction to resolve the issue. Although it required a rewrite of the library using the new Rcpp API, maximum entropy now installs on Windows machines with Rtools installed.
This is significant because RTextTools now has two low-memory algorithms (support vector machines and maximum entropy) working cross-platform, which significantly raises accuracy for Windows users and simplifies the installation process.

In preparation for The 4th Annual Conference of the Comparative Policy Agendas Project in Catania, Sicily, our development team has been busy drafting the documentation for RTextTools. In addition to standard documentation of functions, we want to provide quick-start guides, sample datasets, example scripts, and Amazon EC2 instructions to make it as easy as possible for researchers to get up and running.
A big part of developing the documentation is understanding which methods work best for which datasets. We welcome any feedback on what worked for you and what dataset you were working with; you can find our contact information on the About the Project page.

One key limiting factor for automated text classification is memory consumption. As you accumulate more news articles, bills, and legal opinions, the term-document matrices used to represent the data grow quickly. RTextTools provides two algorithms, support vector machines and maximum entropy, that can handle large datasets with very little memory. Luckily, these two algorithms also tend to be the most accurate. However, some applications require an ensemble of more than two algorithms to get an accurate scoring of topic codes.
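If the low-memory pair is enough for your application, a minimal sketch of that setup might look like the following; the data frame `docs` (with `text` and `topic` columns) and the train/test split sizes are hypothetical placeholders, not part of the package.

```r
# Minimal low-memory ensemble sketch; `docs` is a hypothetical data frame
# with a `text` column and a `topic` label column.
library(RTextTools)

doc_matrix <- create_matrix(docs$text, language = "english",
                            removeStopwords = TRUE, stemWords = TRUE)

container <- create_container(doc_matrix, docs$topic,
                              trainSize = 1:800, testSize = 801:1000,
                              virgin = FALSE)

# Train only the two low-memory algorithms, then score the test set
models    <- train_models(container, algorithms = c("SVM", "MAXENT"))
results   <- classify_models(container, models)
analytics <- create_analytics(container, results)
summary(analytics)
```

If you need a larger ensemble than this and memory becomes a constraint, there are a few options.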
First, you can try reducing the number of terms in your matrix. The create_matrix() function provides many features that can help remove noise from your dataset. Beyond the defaults (removing stopwords, removing punctuation, converting words to lowercase, and stripping whitespace), there are several other helpful tools: you can set a minimum word length (e.g. minWordLength=5), select the N most frequent terms from each document (e.g. selectFreqTerms=25), set a minimum word frequency per document (e.g. minDocFreq=3), and remove terms with a sparse factor of less than N (e.g. removeSparseTerms=0.9998). These options can reduce the size of your document matrix, but they can also remove information that may be valuable to the learning algorithms.
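As an illustrative call combining the options above (the `docs` data frame is a hypothetical placeholder, and the values are the examples from the paragraph above, not recommendations):

```r
# Sketch: pruning a term-document matrix with create_matrix().
library(RTextTools)

doc_matrix <- create_matrix(docs$text,             # `docs` is hypothetical
                            language = "english",
                            removeStopwords = TRUE,
                            removePunctuation = TRUE,
                            toLower = TRUE,
                            stripWhitespace = TRUE,
                            minWordLength = 5,       # drop very short terms
                            selectFreqTerms = 25,    # top terms per document
                            minDocFreq = 3,          # per-document frequency floor
                            removeSparseTerms = 0.9998)
```

Tune these values against a validation set; aggressive pruning trades memory for accuracy.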
If you just need the resources to run a huge dataset and nothing above helps, you should look into setting up an Amazon EC2 instance with RStudio installed. We plan to create a simple way of doing this in the near future, but for now you'll have to brave the stormy waters yourself. Be warned, this option is for experienced users only!

Right now our development team is busy preparing a conference release of RTextTools for The 4th Annual Conference of the Comparative Policy Agendas Project at the University of Catania in Sicily. One of the key issues we've encountered thus far is memory consumption with very large datasets. In the past week we've pushed out a slew of updates that allow the support vector machine and maximum entropy algorithms to run with low memory requirements, even on very large datasets. Unfortunately, not all of the algorithms used in RTextTools support these changes, which leaves us with a two-algorithm ensemble for low-memory classification. However, SVM and maxent tend to be the most accurate algorithms in our tests, so a large ensemble isn't necessary to achieve high consensus accuracy.

The development team has spent the past six months creating the best possible experience for RTextTools users. A few months into development, we heard about a new IDE called RStudio, which has one of the cleanest interfaces to R we've seen. It integrates many R tools (graphing, file management, workspace management, a tabbed source editor, and more) into a single, customizable interface.
Most of the development for RTextTools happens right in RStudio, as does much of the user testing. We've found that it not only runs more smoothly, but also lets us easily import and view the datasets we're working with. And the best part? It's free, open-source, and cross-platform.

We think it's important to let researchers see what's running under the hood of RTextTools and make modifications to suit their needs. Today we pushed out the first iteration of the source code to our Google Code repository. Feel free to browse the code, check out a copy, and make modifications. We also welcome any additions or bug fixes you may have for us!