
javadoc-tokenizer

Tokenizes source-code documentation. This package was created to document non-component functions in Storybook: Storybook generates component documentation by adding a __docgenInfo property to components, and this package can add a __docgenInfo property to most plain functions.

Installation

npm install javadoc-tokenizer

Demo

Download the repository, then run npm install and npm start.

Goals

  1. Provide a portable way to extract doc blocks into JSON objects; other documentation tools such as esdoc2 do not expose their tokenizers for separate use.
  2. Support a wide range of common tags, including @property.
  3. Normalize type names, e.g. bool => Boolean.
  4. Extract default values specified in brackets.
  5. Generate simple signature strings such as bytesToText(bytes, precision = 'auto') ⇒ {String}.
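Goals 3 and 4 can be illustrated with a standalone sketch (this is not the package's actual implementation; the alias table and regex below are simplified assumptions):

```javascript
// Sketch of type normalization (goal 3) and bracketed-default
// extraction (goal 4); not the package's actual code.
const TYPE_ALIASES = {
	bool: 'Boolean',
	boolean: 'Boolean',
	int: 'Number',
	integer: 'Number',
	string: 'String',
	object: 'Object',
};

function normalizeType(type) {
	return TYPE_ALIASES[type.toLowerCase()] || type;
}

// Parse a param name like "[isActive=true]" into name + default;
// a name without brackets is treated as a required argument.
function parseBracketDefault(name) {
	const match = /^\[([^=\]]+)=([^\]]+)\]$/.exec(name);
	if (!match) {
		return { name, default: undefined, required: true };
	}
	return { name: match[1], default: match[2], required: false };
}

console.log(normalizeType('bool')); // Boolean
console.log(parseBracketDefault('[isActive=true]'));
```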

Recognized tags

| Tag | Output | Description |
| --- | --- | --- |
| @name | String | The name of the function (if not present, the name will be inferred) |
| @description | String | The description of the function (if not present, the top text will be used) |
| @param | Object[] | type, description, default value and properties of an argument |
| @property | Object[] | type, description and default value of an argument's property |
| @throws | Object[] | type, description and properties of an Error that may be thrown |
| @examples | Object[] | type, text, description and language of a code example |
| @access | String | May be public, private, protected or a custom value |
| @api | String | Alias for @access |
| @public | String | Same as @access public |
| @private | String | Same as @access private |
| @protected | String | Same as @access protected |
| @chainable | Boolean | True if the function is chainable |
| @deprecated | Boolean | True if the function is deprecated |
| @version | String | Function version |
| @since | String | Version of the library when the function was added |
| @todo | String[] | A list of TODOs |
| @see | String[] | A list of text/links for more information |
| @returns | Object | type, description and properties of the return value |
| @ignore | Boolean | True if the function should be omitted from displayed documentation |

Note: Other tags will be put into a customTags array.
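As an illustration, here is a hypothetical doc block combining @param with @property and an unrecognized custom tag (the function name and @customNote tag are invented for this example and are not from the package's documentation):

```javascript
/**
 * Format a user's display name
 * @param {Object} options  Formatting options
 * @property {String} options.first  First name
 * @property {String} options.last  Last name
 * @customNote Unrecognized tags like this one end up in customTags
 * @returns {String}
 */
function formatName(options) {
	return options.first + ' ' + options.last;
}
```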

Usage

Tokenize source code

const fs = require('fs');
const { extract } = require('javadoc-tokenizer');

// path to the file whose doc blocks you want to extract
const path = 'src/bytesToText.js';
const src = fs.readFileSync(path, 'utf8');
const functionDocs = extract(src);
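extract returns an array of plain objects (the full shape is shown under Example Input and Output below). A minimal sketch of filtering that array for display, using hard-coded sample entries so it runs without the package (the _internalHelper entry is hypothetical):

```javascript
// Sample entries shaped like the documented output of extract().
const functionDocs = [
	{
		name: 'bytesToText',
		access: 'public',
		ignore: false,
		signature: "bytesToText(bytes, precision = 'auto') ⇒ {String}",
	},
	{
		name: '_internalHelper', // hypothetical private function
		access: 'private',
		ignore: true,
		signature: '_internalHelper()',
	},
];

// Keep only public, non-ignored functions for display
const visible = functionDocs.filter(
	(doc) => doc.access === 'public' && !doc.ignore
);
console.log(visible.map((doc) => doc.signature));
```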

Add a __docgenInfo property to all functions possible

const fs = require('fs');
const { getDocgenCode } = require('javadoc-tokenizer');

// path to the file you want to add __docgenInfo to
const path = 'src/bytesToText.js';
let src = fs.readFileSync(path, 'utf8');
const docgenCode = getDocgenCode(src);
if (docgenCode) {
	src += '\n\n' + docgenCode;
}

Example Input and Output

Input:

/**
 * Convert numeric bytes to a rounded number with label
 * @example
 * bytesToText(23 * 1024 + 35); // 23.4 KB
 * @param {Number} bytes  The number of bytes
 * @param {Number|String} precision  The decimal precision or "auto"
 * @returns {String}
 */
export default function bytesToText(bytes, precision = 'auto') {
	// ...
}

Output:

[
	{
		access: 'public',
		canAddDocgen: true,
		chainable: null,
		contextCode:
			"export default function bytesToText(bytes, precision = 'auto')",
		customTags: [],
		deprecated: null,
		description: 'Convert numeric bytes to a rounded number with label',
		examples: [
			{
				description: '',
				language: 'js',
				text: 'bytesToText(23 * 1024 + 35); // 23.4 KB',
				type: 'javadoc',
			},
		],
		ignore: false,
		name: 'bytesToText',
		params: [
			{
				default: undefined,
				description: 'The number of bytes',
				name: 'bytes',
				properties: [],
				required: true,
				type: 'Number',
			},
			{
				default: undefined,
				description: 'The decimal precision or "auto"',
				name: 'precision',
				properties: [],
				required: true,
				type: 'Number|String',
			},
		],
		returns: {
			description: '',
			properties: [],
			type: 'String',
		},
		see: [],
		signature: "bytesToText(bytes, precision = 'auto') ⇒ {String}",
		since: null,
		subtype: null,
		throws: [],
		todos: [],
		type: 'function',
		version: null,
	},
];

Limitations

  1. Only tested on JavaScript
  2. Uses regular expressions instead of true code tokenization
  3. The tokenizer is not aware of context; e.g. it does not know the name of the class a method belongs to
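Limitation 2 can be pictured with a rough sketch of regex-based doc-block extraction (this is not the package's actual pattern, just an illustration of the approach):

```javascript
// Rough illustration of regex-based extraction: match each
// "/** ... */" block plus the single line of code that follows it.
const src = `
/**
 * Adds two numbers
 * @param {Number} a
 * @param {Number} b
 */
function add(a, b) { return a + b; }
`;

const blockPattern = /\/\*\*([\s\S]*?)\*\/\s*\n(.*)/g;
const blocks = [];
let match;
while ((match = blockPattern.exec(src)) !== null) {
	blocks.push({ comment: match[1].trim(), context: match[2].trim() });
}
console.log(blocks[0].context); // "function add(a, b) { return a + b; }"
```

Because only the next line is captured as context, a pattern like this breaks on multi-line signatures and cannot see the enclosing class, which is exactly the kind of limitation listed above.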

Unit Tests and Code Coverage

Powered by jest

npm test
npm run coverage

Contributing

Contributions are welcome. Please open a GitHub issue for bugs or feature requests, and submit a pull request for any fixes or new code you'd like incorporated.

License

Open Source under the ISC License.