Hi,
In our case, some of the search fields contain characters like "-" and "/". (For example, there is a field named 'model' which contains values like 'DS/340'.)
We have a text box where the user can type in search words that are matched against multiple fields, one of which is the 'model' field. We were planning to use StandardAnalyzer for this, but since it splits words at punctuation characters, we will have a problem. Is there any simple way to configure the delimiters used for tokenizing, or will we need to write a new tokenizer? The StandardTokenizer implementation seems quite complex, hence the concern.
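To make the question concrete, below is a rough sketch of the kind of tokenizer we imagine having to write if there is no configuration option. This assumes the older CharTokenizer/Analyzer API where isTokenChar(char) and tokenStream(String, Reader) are the overridable methods; the class names are just placeholders, not anything that exists in Lucene.

    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.CharTokenizer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.TokenStream;

    // Hypothetical tokenizer: treats letters, digits, '-' and '/' as token
    // characters, so a value like 'DS/340' stays one token instead of being
    // split at the '/'.
    class ModelTokenizer extends CharTokenizer {
        public ModelTokenizer(Reader input) {
            super(input);
        }

        protected boolean isTokenChar(char c) {
            return Character.isLetterOrDigit(c) || c == '-' || c == '/';
        }
    }

    // Hypothetical analyzer wrapping the tokenizer above.
    public class ModelAnalyzer extends Analyzer {
        public TokenStream tokenStream(String fieldName, Reader reader) {
            return new LowerCaseFilter(new ModelTokenizer(reader));
        }
    }

The LowerCaseFilter is only there so matching stays case-insensitive, as it is with StandardAnalyzer. If something simpler than this would do the job, we would prefer that.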
Thanks,
Seema