@yozora/tokenizer-paragraph v2.3.12

@yozora/tokenizer-paragraph produces Paragraph type nodes. See the documentation for details.
Install
npm
npm install --save @yozora/tokenizer-paragraph
yarn
yarn add @yozora/tokenizer-paragraph
Usage
@yozora/tokenizer-paragraph has been integrated into @yozora/parser /
@yozora/parser-gfm-ex / @yozora/parser-gfm, so you can use YozoraParser / GfmExParser /
GfmParser directly.
Basic Usage
@yozora/tokenizer-paragraph cannot be used alone; it must be registered in a YastParser as a plugin before it can be used.
import { DefaultParser } from '@yozora/core-parser'
import ParagraphTokenizer from '@yozora/tokenizer-paragraph'
import TextTokenizer from '@yozora/tokenizer-text'

const parser = new DefaultParser()
  .useFallbackTokenizer(new ParagraphTokenizer())
  .useFallbackTokenizer(new TextTokenizer())
// parse source markdown content
parser.parse(`
aaa
bbb
`)

Use within @yozora/parser
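To illustrate the fallback behaviour a paragraph tokenizer provides, here is a simplified, self-contained sketch (not the actual yozora implementation; `groupParagraphs` and `ParagraphNode` are illustrative names): consecutive non-blank lines are grouped into paragraph nodes.

```typescript
// Illustrative node shape; the real yozora Paragraph node carries
// child phrasing content rather than a raw string value.
interface ParagraphNode {
  type: 'paragraph'
  value: string
}

// Group consecutive non-blank lines into paragraphs; a blank line
// terminates the current paragraph.
function groupParagraphs(source: string): ParagraphNode[] {
  const paragraphs: ParagraphNode[] = []
  let buffer: string[] = []
  for (const line of source.split('\n')) {
    if (line.trim() === '') {
      if (buffer.length > 0) {
        paragraphs.push({ type: 'paragraph', value: buffer.join('\n') })
        buffer = []
      }
    } else {
      buffer.push(line.trim())
    }
  }
  if (buffer.length > 0) {
    paragraphs.push({ type: 'paragraph', value: buffer.join('\n') })
  }
  return paragraphs
}
```

With this sketch, the `aaa` / `bbb` input above yields a single paragraph containing both lines, since no blank line separates them.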
import YozoraParser from '@yozora/parser'
const parser = new YozoraParser()
// parse source markdown content
parser.parse(`
aaa
bbb
`)

Use within @yozora/parser-gfm
import GfmParser from '@yozora/parser-gfm'
const parser = new GfmParser()
// parse source markdown content
parser.parse(`
aaa
bbb
`)

Use within @yozora/parser-gfm-ex
import GfmExParser from '@yozora/parser-gfm-ex'
const parser = new GfmExParser()
// parse source markdown content
parser.parse(`
aaa
bbb
`)

Options
| Name | Type | Required | Default |
|---|---|---|---|
| name | string | false | "@yozora/tokenizer-paragraph" |
| priority | number | false | TokenizerPriority.FALLBACK |
name: The unique name of the tokenizer. It is bound to the tokens the tokenizer produces, and is used to determine which tokenizer should handle a token in each life-cycle stage of the matching / parsing phase.

priority: The priority of the tokenizer, which determines the processing order: tokenizers with higher priority execute first. In addition, in the match-block stage, a high-priority tokenizer can interrupt the matching process of a low-priority tokenizer.
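As a rough illustration of the priority rules above (a hypothetical sketch; `TokenizerLike`, `orderByPriority`, and `canInterrupt` are illustrative names, not part of the yozora API):

```typescript
// Illustrative shape: only the fields relevant to ordering.
interface TokenizerLike {
  name: string
  priority: number
}

// Higher-priority tokenizers execute first, so sort descending by priority.
function orderByPriority(tokenizers: TokenizerLike[]): TokenizerLike[] {
  return [...tokenizers].sort((a, b) => b.priority - a.priority)
}

// In the match-block stage, a tokenizer may interrupt another's matching
// process only when it has strictly higher priority.
function canInterrupt(interrupter: TokenizerLike, current: TokenizerLike): boolean {
  return interrupter.priority > current.priority
}
```

Since the paragraph tokenizer uses TokenizerPriority.FALLBACK (the lowest priority), it runs last and any container- or leaf-block tokenizer can interrupt it.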