Now, we can test our language ID classifier on the data we downloaded from Twitter. This recipe will show you how to run the classifier on the .csv file and will set the stage for the evaluation step in the next recipe.
How to do it...
Applying a classifier to the .csv file is straightforward! Just perform the following steps:
This will use the default CSV file, data/disney.csv, from the distribution, run over each line of the CSV file, and apply the language ID classifier from models/3LangId.LMClassifier to each row's text:
InputText: When all else fails #Disney
Best Classified Language: english
InputText: ES INSUPERABLE DISNEY !! QUIERO VOLVER:(
Best Classified Language: spanish
You can also specify the input .csv file as the first command-line argument and the classifier as the second.
How it works...
We will deserialize a classifier from the externalized model that was described in the previous recipes. Then, we will iterate through each line of the .csv file and call the classify method of the classifier. The code in main() is:
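The main() listing is not reproduced in this excerpt. The following self-contained sketch shows its structure as described above: read the rows, pull out the text column, classify it, and print the result in the format shown earlier. The LanguageId interface and its toy all-caps rule are stand-ins for the LMClassifier that the real code deserializes (LingPipe code would call classifier.classify(text).bestCategory()); the TEXT_OFFSET value and the inlined rows are assumptions made so the sketch runs without the data file:

```java
import java.util.Arrays;
import java.util.List;

public class RunClassifierSketch {
    static final int TEXT_OFFSET = 3; // column holding the tweet text (assumed)

    // Stand-in for the deserialized LMClassifier; the real recipe calls
    // classify() on a LingPipe classifier read from the model file.
    interface LanguageId {
        String bestCategory(String text);
    }

    public static void main(String[] args) {
        // In the recipe, rows come from Util.readCsvRemoveHeader(csvFile);
        // two rows are inlined here so the sketch runs on its own.
        List<String[]> rows = Arrays.asList(
            new String[] {"", "", "", "When all else fails #Disney"},
            new String[] {"", "", "", "ES INSUPERABLE DISNEY !! QUIERO VOLVER:("});

        // Toy rule: two or more consecutive capitals -> spanish; NOT the real model.
        LanguageId classifier = text ->
            text.matches(".*[A-Z]{2,}.*") ? "spanish" : "english";

        for (String[] row : rows) {
            String text = row[TEXT_OFFSET];
            System.out.println("InputText: " + text);
            System.out.println("Best Classified Language: "
                + classifier.bestCategory(text));
        }
    }
}
```

The loop body is the part that carries over directly: whatever classifier you deserialize, each row's TEXT_OFFSET column is passed to it and the best category is printed.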
The preceding code builds on the previous recipes with nothing particularly new. Util.readCsvRemoveHeader, shown as follows, reads the .csv file from disk, skips the first (header) line, and returns only the rows that have a non-null, non-empty string at the TEXT_OFFSET position:
// Uses opencsv's CSVReader and LingPipe's Strings.UTF8 ("UTF-8") constant.
public static List<String[]> readCsvRemoveHeader(File file) throws IOException {
  FileInputStream fileIn = new FileInputStream(file);
  InputStreamReader inputStreamReader = new InputStreamReader(fileIn, Strings.UTF8);
  CSVReader csvReader = new CSVReader(inputStreamReader);
  csvReader.readNext(); // skip the header row
  List<String[]> rows = new ArrayList<String[]>();
  String[] row;
  while ((row = csvReader.readNext()) != null) {
    // drop rows with no text to classify
    if (row[TEXT_OFFSET] == null || row[TEXT_OFFSET].equals("")) {
      continue;
    }
    rows.add(row);
  }
  csvReader.close();
  return rows;
}
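If you want to see the skip-header-and-filter behavior without the opencsv dependency, the following standard-library stand-in does the same job on a temporary file. It splits on bare commas, so unlike CSVReader it does not handle quoted fields; the file contents and the TEXT_OFFSET value of 1 are made up for the demo:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CsvNoHeaderDemo {
    static final int TEXT_OFFSET = 1; // column with the text, for this demo

    // Simplified analogue of Util.readCsvRemoveHeader: skip the header,
    // keep only rows with a non-empty TEXT_OFFSET column.
    static List<String[]> readCsvRemoveHeader(File file) throws IOException {
        List<String[]> rows = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream(file),
                                      StandardCharsets.UTF_8))) {
            reader.readLine(); // skip the header row
            String line;
            while ((line = reader.readLine()) != null) {
                String[] row = line.split(",", -1); // naive: no quoted fields
                if (row.length <= TEXT_OFFSET || row[TEXT_OFFSET].isEmpty()) {
                    continue;
                }
                rows.add(row);
            }
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("disney", ".csv");
        Files.write(tmp, Arrays.asList(
            "id,text",
            "1,When all else fails #Disney",
            "2,",                       // empty text column, should be dropped
            "3,QUIERO VOLVER"));
        List<String[]> rows = readCsvRemoveHeader(tmp.toFile());
        System.out.println(rows.size());              // prints 2
        System.out.println(rows.get(0)[TEXT_OFFSET]); // prints the first tweet
        Files.delete(tmp);
    }
}
```

The header row and the row with an empty text column are both discarded, which is exactly what keeps blank tweets out of the classifier in the recipe.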