isbot
🤖/👨‍🦰 Recognise bots/crawlers/spiders using the user agent string.
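The core idea behind user-agent bot detection can be sketched in a few lines. This is a minimal illustration, not isbot's actual pattern list (which is far larger and maintained against real-world crawlers); the function name and regex here are assumptions for the example.

```javascript
// Minimal sketch of user-agent bot detection. A handful of common
// crawler tokens; real libraries match hundreds of patterns.
const BOT_PATTERN = /bot|crawler|spider|crawling/i;

function looksLikeBot(userAgent) {
  // An empty or missing user agent is treated as suspicious,
  // since many automated tools send none at all.
  if (!userAgent) return true;
  return BOT_PATTERN.test(userAgent);
}

console.log(looksLikeBot("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")); // true
console.log(looksLikeBot("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"));             // false
```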
Parse robot directives within HTML meta and/or HTTP headers.
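Robot directives arrive as a comma-separated value, either in `<meta name="robots" content="...">` or in an `X-Robots-Tag` HTTP header. A hedged sketch of parsing that value follows; the directive names (`noindex`, `nofollow`, `none`) are standard, but the function and its return shape are illustrative, not any particular library's API.

```javascript
// Parse a robots directive value such as "noindex, follow" into flags.
// Unrecognised tokens are ignored; "none" implies noindex + nofollow.
function parseRobotsDirectives(value) {
  const directives = { index: true, follow: true };
  for (const token of value.split(",").map((t) => t.trim().toLowerCase())) {
    if (token === "noindex" || token === "none") directives.index = false;
    if (token === "nofollow" || token === "none") directives.follow = false;
  }
  return directives;
}

console.log(parseRobotsDirectives("noindex, follow")); // { index: false, follow: true }
```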
Parser for XML Sitemaps to be used with Robots.txt and web crawlers
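A sitemap parser ultimately boils down to pulling `<loc>` entries out of the XML. The sketch below does this with a regex purely for illustration; a production parser should use a real XML parser to handle entities, CDATA, and namespaces. The function name is an assumption.

```javascript
// Extract <loc> URLs from a sitemap XML string (illustrative only;
// prefer a proper XML parser in real code).
function extractSitemapUrls(xml) {
  const urls = [];
  const re = /<loc>\s*([^<]+?)\s*<\/loc>/g;
  let match;
  while ((match = re.exec(xml)) !== null) urls.push(match[1]);
  return urls;
}
```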
Crawler made simple
A straightforward sitemap generator written in TypeScript.
It uses the user-agents.org XML file to detect bots.
Open-source crawler framework for Node.js
Lightweight robots.txt parsing component without any external dependencies for Node.js.
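The essence of robots.txt parsing is grouping Disallow rules under matching User-agent lines and checking a path against them by prefix. The sketch below covers only that; real parsers also handle wildcards, `Allow` precedence, and `Crawl-delay`. All names here are assumptions for the example.

```javascript
// Minimal robots.txt check: collect Disallow prefixes from groups
// whose User-agent matches, then test the path against them.
function isPathAllowed(robotsTxt, userAgent, path) {
  let applies = false;
  const disallowed = [];
  for (const rawLine of robotsTxt.split("\n")) {
    const line = rawLine.split("#")[0].trim(); // strip comments
    const [field, ...rest] = line.split(":");
    const value = rest.join(":").trim();
    if (/^user-agent$/i.test(field)) {
      applies = value === "*" ||
        userAgent.toLowerCase().includes(value.toLowerCase());
    } else if (applies && /^disallow$/i.test(field) && value) {
      disallowed.push(value);
    }
  }
  return !disallowed.some((prefix) => path.startsWith(prefix));
}
```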
isbot UMD bundle
A jQuery plugin that hides your email address on your page and prevents crawlers from harvesting it!
A set of shared utilities that can be used by crawlers
Site mapper utilizing Puppeteer to generate an XML sitemap
🤖 Detect bots/crawlers/spiders via the user agent string.
Parser for XML Sitemaps to be used with Robots.txt and web crawlers. (Extended version by mastixmc)
Simple Redis primitives to incr() and top() user agents
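The incr()/top() pattern — count each observed user agent, then list the most frequent — can be sketched in memory, with a `Map` standing in for Redis (which would typically use `INCR` or a sorted set with `ZINCRBY`/`ZREVRANGE`). The class and method names are assumptions for this illustration.

```javascript
// In-memory sketch of the incr()/top() idea; a Map stands in for Redis.
class AgentCounter {
  constructor() {
    this.counts = new Map();
  }
  // Increment the hit count for a user agent string.
  incr(userAgent) {
    this.counts.set(userAgent, (this.counts.get(userAgent) || 0) + 1);
  }
  // Return the n most frequent user agents as [agent, count] pairs.
  top(n) {
    return [...this.counts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, n);
  }
}
```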