regexp-stream-tokenizer v0.2.2
This is a simple regular-expression-based tokenizer for streams.

IMPORTANT: If you return null from your function, the stream will end there.

IMPORTANT: Only object-mode streams are supported.
```javascript
var tokenizer = require("regexp-stream-tokenizer");

var words = tokenizer(/\w+/g);

// Sink receives tokens: 'The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog'
words.write('The quick brown fox jumps over the lazy dog');
words.pipe(sink);
```
```javascript
// Separators are excluded by default, but can be included
var wordsAndSeparators = tokenizer({ separator: true }, /\w+/g);

// Sink receives tokens: 'The', ' ', 'quick', ' ', 'brown', ' ', 'fox', ' ', 'jumps', ' ', 'over', ...
wordsAndSeparators.write('The quick brown fox jumps over the lazy dog');
wordsAndSeparators.pipe(sink);
```
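To make the token/separator distinction concrete, the same split can be reproduced with plain `String.prototype.matchAll`: tokens are the regexp matches, and separators are the unmatched spans between them. This is a sketch of the semantics only, not the library's implementation; `tokenize` is a hypothetical helper.

```javascript
// Sketch of the token/separator semantics (not the library's internals):
// with a global regexp, matches become tokens and the text between
// matches becomes separators.
function tokenize(input, regexp, includeSeparators) {
  const out = [];
  let last = 0;
  for (const m of input.matchAll(regexp)) {
    // Unmatched span before this match is a separator.
    if (includeSeparators && m.index > last) out.push(input.slice(last, m.index));
    out.push(m[0]); // the token itself
    last = m.index + m[0].length;
  }
  // Trailing separator after the final match, if any.
  if (includeSeparators && last < input.length) out.push(input.slice(last));
  return out;
}

tokenize('The quick brown fox', /\w+/g, false);
// → ['The', 'quick', 'brown', 'fox']
tokenize('The quick brown fox', /\w+/g, true);
// → ['The', ' ', 'quick', ' ', 'brown', ' ', 'fox']
```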
API
require("regexp-stream-tokenizer")([options,] regexp)

Create a stream.Transform instance with objectMode: true that will tokenize the input stream using the regexp.
var Tx = require("regexp-stream-tokenizer").ctor([options,] regexp)

Create a reusable stream.Transform TYPE that can be called via new Tx or Tx() to create an instance.
Arguments
options
- excludeZBS (boolean): defaults to true.
- token (boolean|string|function): defaults to true.
- separator (boolean|string|function): defaults to false.
- leaveBehind (string|Array): optionally provides pseudo-lookbehind support.
- all other through2 options.
regexp
- (RegExp): The regular expression with which the stream will be tokenized.