# Crawler
**WIP:** This project is just a personal challenge and has no further use. Use it with caution!
## Description
Finds all links on a webpage.
(Pending: collecting data based on a given selector and following links when enabled.)
## Requirements
- Node.js 8 (LTS)
## Usage

Install this package globally:

```sh
npm i -g @kevroadrunner/crawler
```

Then run this basic command in your terminal:

```sh
crawl https://www.npmjs.com
```
For more information, see the help output of the binary.
## Help

```sh
crawl --help
```

```
crawl <url>

Set the url to be crawled

Options:
  --version     Show version number                          [boolean]
  --filter, -f  Set the filter for links      [string] [default: ".*"]
  --depth, -d   Set the depth to follow links     [number] [default: 0]
  --help        Show help                                    [boolean]
```
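For example, to keep only package links and follow them one level deep (a hedged example: it assumes `--filter` takes a pattern matched against discovered link URLs, as the `.*` default suggests, and that both options can be combined):

```sh
crawl https://www.npmjs.com --filter "/package/" --depth 1
```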
## API

Use this package in your application:

```js
const { crawl } = require('@kevroadrunner/crawler');

(async () => {
  // the second argument appears to act as a link filter (cf. the --filter CLI option)
  const links = await crawl('https://www.npmjs.com', '/package/');
  console.log(links);
})();
```
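If you want the CLI's depth behaviour from the API, a small helper can be layered on top of `crawl`. This is only a sketch under the assumption that `crawl(url, filter)` resolves to an array of absolute URL strings; `crawlDeep` is a hypothetical name, not part of the package.

```js
const { crawl } = require('@kevroadrunner/crawler');

// Hypothetical helper mirroring the CLI's --depth option.
// Assumes crawl(url, filter) resolves to an array of absolute URL strings.
async function crawlDeep(url, filter, depth = 0, seen = new Set()) {
  if (seen.has(url)) return []; // avoid re-crawling the same page
  seen.add(url);

  const links = await crawl(url, filter);
  if (depth <= 0) return links; // depth 0: only the start page, like the CLI default

  // follow each discovered link one level deeper
  const nested = await Promise.all(
    links.map((link) => crawlDeep(link, filter, depth - 1, seen))
  );
  return links.concat(...nested);
}

(async () => {
  const links = await crawlDeep('https://www.npmjs.com', '/package/', 1);
  console.log(links);
})();
```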