NAME
    KinoSearch::Analysis::Tokenizer - customizable tokenizing
SYNOPSIS
        my $whitespace_tokenizer = KinoSearch::Analysis::Tokenizer->new(
            token_re => qr/\S+/,
        );

        # or...
        my $word_char_tokenizer = KinoSearch::Analysis::Tokenizer->new(
            token_re => qr/\w+/,
        );

        # or...
        my $apostrophising_tokenizer = KinoSearch::Analysis::Tokenizer->new;

        # then... once you have a tokenizer, put it into a PolyAnalyzer
        my $polyanalyzer = KinoSearch::Analysis::PolyAnalyzer->new(
            analyzers => [ $lc_normalizer, $word_char_tokenizer, $stemmer ],
        );
DESCRIPTION
    Generically, "tokenizing" is a process of breaking up a string into an
    array of "tokens".
        # before:
        my $string = "three blind mice";

        # after:
        @tokens = qw( three blind mice );
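    In plain Perl, the same effect can be sketched with a global regex
    match (a minimal illustration only, not how the Tokenizer is
    implemented internally):

        my $string = "three blind mice";
        my @tokens = ( $string =~ /\S+/g );    # qw( three blind mice )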
    KinoSearch::Analysis::Tokenizer decides where it should break up the
    text based on the value of token_re.
        # before:
        my $string = "Eats, Shoots and Leaves.";

        # tokenized by $whitespace_tokenizer
        @tokens = qw( Eats, Shoots and Leaves. );

        # tokenized by $word_char_tokenizer
        @tokens = qw( Eats Shoots and Leaves );
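    The difference between the two is easy to verify with a bare regex
    match (an illustrative sketch; the Tokenizer itself does more work
    under the hood than this):

        my $string      = "Eats, Shoots and Leaves.";
        my @ws_tokens   = ( $string =~ /\S+/g );   # 'Eats,' ... 'Leaves.'
        my @word_tokens = ( $string =~ /\w+/g );   # 'Eats'  ... 'Leaves'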
# match "O'Henry" as well as "Henry" and "it's" as well as "it" my $token_re = qr/ \b # start with a word boundary \w+ # Match word chars. (?: # Group, but don't capture... '\w+ # ... an apostrophe plus word chars. )? # Matching the apostrophe group is optional. \b # end with a word boundary /xsm; my $tokenizer = KinoSearch::Analysis::Tokenizer->new( token_re => $token_re, # default: what you see above );
    Constructor. Takes one hash-style parameter, token_re, which must be a
    pre-compiled regular expression (a qr// object) matching one token.
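    To see what the default pattern does, you can exercise it outside of
    KinoSearch entirely (a quick sketch for illustration; $token_re here
    is the variable from the example above):

        my $string = "It's a story by O'Henry";
        my @tokens = ( $string =~ /$token_re/g );
        # @tokens: ( "It's", "a", "story", "by", "O'Henry" )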
COPYRIGHT
    Copyright 2005-2006 Marvin Humphrey

LICENSE, DISCLAIMER, BUGS, etc.
    See KinoSearch version 0.15.