sitemapper
Parser for XML Sitemaps to be used with Robots.txt and web crawlers
Scrape data from any webpage.
URL scraper that takes text input, finds the links/URLs, scrapes them using cheerio, and returns an object with the original text, the parsed text (using npm-text-parser), and an array of objects, each containing a scraped webpage's information.
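The first step such a scraper performs, pulling URLs out of free text, can be sketched as below. This is a minimal illustration, not the package's actual code; the regex is a simplification, since real URL parsing has many more edge cases.

```javascript
// Hypothetical sketch: extract URLs from free text before handing them to a scraper.
function extractUrls(text) {
  const urlPattern = /https?:\/\/[^\s<>"')]+/g;
  return text.match(urlPattern) || [];
}

const sample = 'See https://example.com and http://example.org/page for details.';
console.log(extractUrls(sample));
// → [ 'https://example.com', 'http://example.org/page' ]
```

Each extracted URL would then be fetched and parsed (e.g. with cheerio) to build the per-page result objects.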
An autonomous webcrawler for indexing robots.txt files.
Webcrawler script to retrieve the daily menu of the Bern University of Applied Sciences cantina in Biel
A simple webcrawler that prints out the URLs of the pages it encounters. Runs in parallel, up to a limit you specify.
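The parallelism cap described above can be sketched as a small worker pool: run at most `limit` asynchronous tasks at once, collecting results in order. The function name `runWithLimit` is illustrative, not the crawler's real API.

```javascript
// Hypothetical sketch of a concurrency cap: at most `limit` tasks in flight.
// `tasks` is an array of zero-argument async functions (e.g. page fetches).
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0; // index of the next task to claim

  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim a task (safe: JS is single-threaded between awaits)
      results[i] = await tasks[i]();
    }
  }

  // Spawn up to `limit` workers and wait for all of them to drain the queue.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

In a crawler, each task would fetch one URL and print or queue the links it finds; the pool keeps the number of open connections bounded.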
Searches for music on YouTube and returns the links.
Crawls the provided website, checking for 200 responses, content load, SSL certificate errors, and more!
This tool allows you to parse, collect, and traverse radiation monitoring data provided by ROSATOM SARMS (Sectoral Automated Radiation Monitoring System).
A sitemapping tool for crawlers
CLI for downloading manga and serving it locally.
A friendly javascript pre-rendering engine - BETA (UNSTABLE)
Simple framework for crawling/scraping web sites. The result is a tree, where each node is a single request.
A function that accepts three arguments, "url", "tag", and "output", and writes the content of the given HTML "tag" from the page at "url" to a file at the "output" path.
MyReadingManga web crawler. Find entries by language, genre, and tag.
Yet to describe
Download README files from GitHub repository links
Web Crawler to create directed graph of links among connected sites. Runs with Node.js and stores data with Redis