
Urls Crawler

About this Package

Provide a fully qualified URL and the crawler will fetch all URLs that belong to that domain.

It returns the active and dead URLs in an object.

It also saves the output to a file named urls.json.
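The exact contents of urls.json are not documented; based on the active/dead fields used in the usage example below, a plausible sketch of the saved file (the URLs here are only illustrative):

{
  "active": ["https://www.example.com/", "https://www.example.com/about"],
  "dead": ["https://www.example.com/old-page"]
}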

Install:

npm install urls-crawler

Fetch URLs

const Urls = require('urls-crawler').default

// Start crawling from the root of the domain
let urls = new Urls("https://www.example.com/")

urls.getAllUrls()
  .then(allUrls => {
    // getAllUrls resolves with an object holding two arrays of URLs
    let activeUrls = allUrls.active
    let deadUrls = allUrls.dead
    console.log("Active urls: ", activeUrls)
    console.log("Dead urls: ", deadUrls)
  })
  .catch(err => console.log(err))

Fetch URLs of a blog

let urls = new Urls("https://www.example.com/blog/")

You can specify a regex as a second parameter to match specific URL paths. For example:

let urls = new Urls("https://www.example.com/", "/blog")

It will fetch all URLs that have /blog in their path.
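Putting the two features together, a minimal sketch that crawls only blog URLs and prints a summary; the domain and the /blog pattern are illustrative:

const Urls = require('urls-crawler').default

// Restrict the crawl to URLs whose path matches /blog
let blogUrls = new Urls("https://www.example.com/", "/blog")

blogUrls.getAllUrls()
  .then(allUrls => {
    console.log(`Found ${allUrls.active.length} active blog urls`)
    console.log(`Found ${allUrls.dead.length} dead blog urls`)
  })
  .catch(err => console.error(err))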
