@hexatool/fs-find v0.0.0

License: MIT · Repository: github · Last release: 1 year ago

Installation

npm install --save @hexatool/fs-crawl

Using yarn

yarn add @hexatool/fs-crawl

What it does

Crawl your filesystem up or down.
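The behaviour of a down crawl can be sketched with Node's built-in fs module. This is an illustration of the semantics described in this README, not the library's implementation, and crawlDown is a hypothetical name:

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Hypothetical sketch of a "down" crawl: recursively walk a directory tree
// and collect entries. By default only filenames are returned (no base path,
// no directories), matching the option defaults described below.
function crawlDown(root: string): string[] {
  const out: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    if (entry.isDirectory()) {
      out.push(...crawlDown(path.join(root, entry.name)));
    } else {
      out.push(entry.name);
    }
  }
  return out;
}
```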

API

crawl(root: string, options: CrawlerOptions): string[]

  • root
    • Type: string.
    • Optional: false.
    • Description: The folder from which to start crawling.

CrawlerOptions

  • direction
    • Type: string.
    • Optional: false.
    • Allowed values: up or down.
    • Description: The direction to crawl.
  • exclude

    • Type: (dirName: string, dirPath: string) => boolean.
    • Optional: true.
    • Applies an exclusion filter to all directories and only crawls those that do not satisfy the condition. Useful for speeding up crawling if you know you can ignore some directories.

      The function receives two parameters: the first is the name of the directory, and the second is the path to it.

      Currently, you can apply only one exclusion filter per crawler. This might change.

  • excludeFiles

    • Type: boolean.
    • Optional: true.
    • Description: Exclude files from the output.
  • filters

    • Type: (path: string, isDirectory: boolean) => boolean.
    • Optional: true.
    • Description: Applies a filter to all directories and files and only adds those that satisfy the filter.

      Multiple filters are joined using AND.

      The function receives two parameters: the first is the path of the item, and the second is a flag that indicates whether the item is a directory or not.

  • includeBasePath

    • Type: boolean.
    • Optional: true.
    • Description: Use this to add the base path to each output path.

      By default, the crawler does not add the base path to the output. For example, if you crawl node_modules, the output will contain only the filenames.

  • includeDirs

    • Type: boolean.
    • Optional: true.
    • Description: Use this to also add the directories to the output.

    For example, if you are crawling node_modules, by default the output contains only the files, ignoring all directories, including node_modules itself.

  • normalizePath

    • Type: boolean.
    • Optional: true.
    • Description: Normalize the given directory path using path.normalize.
  • resolvePaths
    • Type: boolean.
    • Optional: true.
    • Description: Resolve the given directory path using path.resolve.
  • resolveSymlinks

    • Type: boolean.
    • Optional: true.
    • Description: Use this to resolve and recurse over all symlinks.

      NOTE: This will affect crawling performance so use only if required.

  • suppressErrors

    • Type: boolean.
    • Optional: true.
    • Description: Use this if you want to handle all errors manually.

      By default, the crawler handles and suppresses all errors, including permission errors and non-existent-directory errors.

  • maxDepth
    • Type: number.
    • Optional: true.
    • Default: Infinity.
    • Description: Use this to limit the maximum depth the crawler will crawl to before stopping.
  • relativePaths
    • Type: boolean.
    • Optional: true.
    • Description: Use this to get paths relative to the root directory in the output.
  • stopAt

    • Type: string.
    • Optional: true.
    • Description: Use this to specify the folder at which the crawler should stop crawling when direction is up.

    By default, the crawler stops crawling when it reaches the root of the filesystem.
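A sketch of how the filtering options combine, with types taken from the signatures above. The combination logic is an assumption drawn from the descriptions in this README, and shouldVisit/shouldEmit are hypothetical helpers, not the library's source:

```typescript
// Types mirroring the exclude and filters signatures documented above.
type ExcludeFn = (dirName: string, dirPath: string) => boolean;
type FilterFn = (path: string, isDirectory: boolean) => boolean;

// A directory is crawled only if it does NOT satisfy the exclusion filter.
function shouldVisit(exclude: ExcludeFn | undefined, name: string, dirPath: string): boolean {
  return exclude === undefined || !exclude(name, dirPath);
}

// Multiple filters are joined using AND: an item is added to the output
// only if it satisfies every filter.
function shouldEmit(filters: FilterFn[], itemPath: string, isDir: boolean): boolean {
  return filters.every((f) => f(itemPath, isDir));
}

// Example: skip node_modules while crawling, keep only .ts files.
const skipNodeModules: ExcludeFn = (dirName) => dirName === 'node_modules';
const onlyTsFiles: FilterFn[] = [
  (p) => p.endsWith('.ts'),
  (_p, isDirectory) => !isDirectory,
];
```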

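An up crawl with stopAt can be sketched as follows. crawlUp is a hypothetical helper illustrating the semantics described above (walk towards the filesystem root, stopping at stopAt), not the library's code:

```typescript
import * as path from 'node:path';

// Hypothetical sketch of an "up" crawl: walk from root towards the
// filesystem root, collecting each visited directory, and stop once
// stopAt (or the filesystem root) is reached.
function crawlUp(root: string, stopAt?: string): string[] {
  const out: string[] = [];
  let current = path.resolve(root);
  const stop = stopAt === undefined ? undefined : path.resolve(stopAt);
  for (;;) {
    out.push(current);
    const parent = path.dirname(current);
    if (current === stop || parent === current) break;
    current = parent;
  }
  return out;
}
```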
Hexatool Code Quality Standards

By publishing this package, we commit ourselves to the following code quality standards:

  • Respect Semantic Versioning: No breaking changes in patch or minor versions
  • No surprises in transitive dependencies: Use the bare minimum dependencies needed to meet the purpose
  • One specific purpose to meet without having to carry a bunch of unnecessary other utilities
  • Tests as documentation and usage examples
  • Well documented README showing how to install and use
  • License favoring Open Source and collaboration