sauron-crawler v2.0.0

Basic page crawler written in Node.js

The crawler is designed to work as a testing tool. It can extract all links from a given site along with additional information like the status code, page title etc. The results can then be displayed on the console or dumped to a CSV or JSON file.

Example use cases

  • check a site for broken (404) links (see the post-processing sketch after this list)
  • extract image URLs from a site
  • crawl auction/e-commerce pages to calculate average prices
  • extract all phone numbers from a site
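
For the broken-link use case, the JSON output can be post-processed with a few lines of Node.js. The output schema is not documented in this README, so the field names below ("url", "statusCode") are assumptions for illustration only:

// list-404s.js - hedged sketch: assumes the JSON output is an array of
// objects with "url" and "statusCode" fields (adjust to the real schema).
const fs = require('fs');

const results = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));
const broken = results.filter((entry) => entry.statusCode === 404);

broken.forEach((entry) => console.log(`${entry.statusCode} ${entry.url}`));
console.log(`${broken.length} broken link(s) found`);

Run it against a generated output file, e.g. node list-404s.js <path to the JSON output file>.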

Requirements

Starting from version 1.3.0, Node version >= 15 may be required. Starting from version 2.0.0, Node version >= 18.16.0 is required.
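
When the crawler is installed as a project dependency, the Node requirement can also be recorded with the standard engines field in package.json (a minimal sketch; the crawler itself may not enforce it):

{
    "engines": {
        "node": ">=18.16.0"
    }
}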

Git version usage - up to version 2.0.0

  • crawl based on a config file
node .\sauron.js .\sample.config.json 
  • same as above, but start with a list of URLs
node .\sauron.js .\configs\sample.config.json .\configs\list.input.json
  • run the demo
npm run serve:demo

NPM version usage - from version 2.0.0

  • create a folder in the project root directory to store all related files, for example crawler

  • create a sauron.settings.js file in the project root directory

module.exports = {
    comments: {
        customDirectory: 'Directory to store custom parsing functions',
        outputDirectory: 'Directory to store output files',
        saveDirectory: 'Directory to store save files',
    },
    customDirectory: './crawler/custom/',
    outputDirectory: './crawler/output/',
    saveDirectory: './crawler/save/',
};
  • all "custom" (custom.customFile) js files must be placed in the customDirectory specified above. In the config file provide a path relative to that folder.

  • crawl based on a config file

npx sauron .\configs\sample.config.json 
  • same as above, but start with a list of URLs (an example of the list file follows below)
npx sauron .\configs\sample.config.json .\configs\list.input.json
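
The format of the URL list file is not shown in this README; the sketch below assumes list.input.json is a plain JSON array of start URLs (the real structure may differ):

[
    "http://example.com/",
    "http://example.com/contact",
    "https://test.example.com/products"
]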

Config file example

{
    "id":             "projectId",
    "startURL":       "http://example.com",
    "output":         "csv",
    "storeDefaultData": true,
    "custom": {
        "useCustom": true,
        "customFile": "custom.blank"
    },
    "allowedDomains": [
        "example.com",
        "test.example.com"
    ],
    "allowedProtocols": [
        "http:",
        "https:"
    ],
    "dedupeProtocol": true,
    "allowLinksFrom": {
        "pattern":        "^.*",
        "pathnameAllow": [],
        "pathnameDeny":  []
    },
    "crawlLinks": {
        "pattern":       "^.*",
        "pathnameAllow": [],
        "pathnameDeny":  []
    },
    "saveCrawlData": {
        "pattern":       "^.*",
        "pathnameAllow": [],
        "pathnameDeny":  []
    },
    "httpAuth": {
        "enable": false,
        "user":   "login",
        "pass":   "pass"
    },
    "cookieURL": null,
    "cookies": [],
    "requireValidSSLCert": false,
    "storeDefaultData": true,
    "saveStatusEach": 1000,
    "verbose": false,
    "requestCount": 4,
    "maxPages": -1,
    "stripGET": false,
    "timeout":  5000
}
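
To crawl pages behind a cookie-based session, cookieURL and cookies can be filled in. The entries follow tough-cookie's JSON representation; the field names below come from tough-cookie, while the values are purely illustrative:

{
    "cookieURL": "http://example.com",
    "cookies": [
        {
            "key": "session_id",
            "value": "abc123",
            "domain": "example.com",
            "path": "/"
        }
    ]
}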

Config file docs

Option              | Value   | Description
id                  | string  | Crawl id - used in output file names etc.
startURL            | string  | URL to start the crawl from
output              | string  | Crawl output method. Allowed values: console, csv, json, blank
storeDefaultData    | boolean | Store default output data (links, status codes etc.) - can be disabled when output is set to blank
custom              | object  | Custom parsing actions settings
allowedDomains      | array   | Only domains from this array will be crawled. An empty array disables this check.
allowedProtocols    | array   | Page protocols to crawl. Allowed values: http, https. An empty array disables this check.
dedupeProtocol      | boolean | De-duplicate links based on protocol
allowLinksFrom      | object  | Only links found on URLs that match the given requirements will be crawled
crawlLinks          | object  | Only links that match the given requirements will be crawled. An example pattern excluding the "/files/" path and PDF files is shown after this table.
saveCrawlData       | object  | Only links that match the given requirements will be saved to the output
httpAuth            | object  | Settings for HTTP basic authentication
cookieURL           | string  | URL the cookies are set for
cookies             | array   | Each cookie is a JSON entry; docs: https://www.npmjs.com/package/tough-cookie
requireValidSSLCert | boolean | Check whether SSL certificates are valid
saveStatusEach      | number  | Save status every N crawled pages so an aborted crawl can be continued later
verbose             | boolean | Print more output to the console
requestCount        | number  | Number of requests run in one batch
maxPages            | number  | Maximum number of pages to crawl; set to -1 for no limit
stripGET            | boolean | Strip GET parameters from links
timeout             | number  | Single request timeout in ms
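
As referenced in the table, the crawlLinks pattern that excludes the "/files/" path and PDF files looks like this once placed in the JSON config (only the pattern differs from the defaults shown earlier):

"crawlLinks": {
    "pattern":       "^(.(?!.*\\/files\\/|.*\\.pdf$))*",
    "pathnameAllow": [],
    "pathnameDeny":  []
}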

Changelog

  • v2.0.0
    • moving to npm package usage
  • v1.4.6
    • add jest test
    • minor fixes
  • v1.4.5
    • save custom.data with "saveStatusEach" - when a custom action has a "data" property (an array or an object), it is now stored in the save file every N crawled URLs
    • tidy sample config item positions and descriptions
  • v1.4.0
    • save each output to a separate directory under ./output, named after the crawl start time
    • when stripGET is enabled, pathnameAllow/Deny is evaluated against the full URL including GET parameters
  • v1.3.0
  • v1.2.0
    • code cleanup
    • added settings.json for application settings
    • add option to disable storing default data like links, statusCodes etc.
    • add option to save progress after every N pages crawled - this is then picked up automatically on next crawl from the same config file (by config.id)
    • create output and save folders on app start - remove the check on each save
    • added option for verbose output to console
  • v1.1.0
    • fix pathnameAllow/Deny check
    • fix pattern check
    • fix typos in markup
    • add colors to console dump
    • add log for urls not crawled due to configuration
    • add crawl progress in percent

Donate

If you find this piece of code to be useful, please consider a donation :)

paypal