ytid v1.3.0

License: Apache-2.0 • Repository: github • Last release: 5 months ago

Create URL friendly short IDs just like YouTube.

Suitable for generating -

  • short IDs for new users.
  • referral code for a user in an affiliate program.
  • file names for user uploaded documents / resources.
  • short URLs (like bitly) for sharing links on social media platforms.
  • URL slug for dynamically generated content like blog posts, articles, or product pages.

Works with ES6 (ECMAScript):

import { ytid } from "ytid";

as well as with CommonJS:

const { ytid } = require("ytid");

Installation

Using npm:

npm i ytid

Using yarn:

yarn add ytid

Using pnpm:

pnpm i ytid

Usage

With ES6 (ECMAScript):

import { ytid } from "ytid";

console.log(ytid()); // gocwRvLhDf8

With CommonJS:

const { ytid } = require("ytid");

console.log(ytid()); // dQw4w9WgXcQ

FAQs

What are the possible characters in the ID?

YouTube uses 0-9, A-Z, a-z, _ and - as possible characters for the IDs. This gives each position in the ID one of these 64 characters. However, as capital I and lowercase l look nearly identical in URLs, ytid excludes them.

Hence, ytid uses 0-9, A-H, J-Z, a-k, m-z, _ and - as possible characters in the ID.
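To illustrate the character set, here is a minimal sketch of generating an 11-character ID from that reduced 62-character alphabet. This is a hypothetical illustration, not ytid's actual implementation (which also runs a profanity check, described below):

```javascript
// Hypothetical sketch: build an 11-character ID from the documented
// alphabet (0-9, A-H, J-Z, a-k, m-z, _ and -), i.e. YouTube's 64
// characters minus capital I and lowercase l.
const ALPHABET =
  "0123456789" +
  "ABCDEFGHJKLMNOPQRSTUVWXYZ" + // no capital I
  "abcdefghijkmnopqrstuvwxyz" + // no lowercase l
  "_-";

function randomId(length = 11) {
  let id = "";
  for (let i = 0; i < length; i++) {
    id += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return id;
}

console.log(randomId()); // a random 11-character ID, e.g. "x3TqJ9_mKp0"
```

For production use, a cryptographically secure source such as Node's `crypto.randomInt` would be preferable to `Math.random`.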

Why should URL IDs be short?

A Backlinko study, based on an analysis of 11.8 million Google search results, found that short URLs tend to rank above long URLs.

And a Brafton study found a correlation between short URLs and more social shares, especially on platforms such as Twitter which have character limits.

These studies highlight the benefits of short URLs over long ones.

What if the ID contains any offensive word or phrase?

All the generated IDs are checked against a dataset of offensive / profane words to ensure they do not contain any inappropriate language.

As a result, ytid doesn't generate IDs like 7-GoToHell3 or bashit9RcYjcM.
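One simple way such a guarantee can be implemented is to regenerate an ID until it contains no blocklisted word or phrase. This is a hypothetical sketch under that assumption; ytid's actual check may differ:

```javascript
// Hypothetical sketch: keep generating IDs until one contains no
// blocklisted word/phrase (case-insensitive substring match).
function safeId(generate, blocklist) {
  let id;
  do {
    id = generate();
  } while (
    blocklist.some((w) => id.toLowerCase().includes(w.toLowerCase()))
  );
  return id;
}
```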

The dataset of offensive / profane words is a combination of various datasets -

These datasets undergo the following preprocessing steps -

  1. Firstly, all the datasets are combined into a single dataset.
  2. Then the duplicate instances are removed.
  3. Then two new datasets are created -
    1. A dataset in which all spaces are replaced with -.
    2. A dataset in which all spaces are replaced with _.
  4. These two datasets are then combined to form a new dataset. This ensures that the dataset contains phrases with spaces in the form of hyphen-separated words as well as underscore-separated words.
  5. Then, duplicate values are removed from this new dataset.
  6. Finally, only the instances that match the regex pattern ^[A-Za-z0-9_-]{0,11}$ are kept, while the rest are removed. This keeps the number of instances to a minimum by removing unnecessary words or phrases.
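The steps above can be sketched as a small function. This is a hypothetical illustration of the pipeline applied to in-memory arrays; the real source datasets and the Colab notebook are not reproduced here:

```javascript
// Hypothetical sketch of the dataset preprocessing steps:
// combine, deduplicate, derive hyphen/underscore variants, then
// keep only entries that could appear inside an 11-character ID.
function preprocess(datasets) {
  // Steps 1-2: combine all datasets and drop duplicates.
  const combined = [...new Set(datasets.flat())];

  // Steps 3-5: replace spaces with "-" and "_", combine the two
  // variants, and drop duplicates again.
  const variants = new Set([
    ...combined.map((w) => w.replace(/ /g, "-")),
    ...combined.map((w) => w.replace(/ /g, "_")),
  ]);

  // Step 6: keep only instances matching ^[A-Za-z0-9_-]{0,11}$.
  const pattern = /^[A-Za-z0-9_-]{0,11}$/;
  return [...variants].filter((w) => pattern.test(w));
}

console.log(preprocess([["go to hell", "bad word"], ["bad word"]]));
// → ["go-to-hell", "bad-word", "go_to_hell", "bad_word"]
```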

Preprocessing yields a dataset of 3656 instances, which helps ensure the generated IDs are safe to use in URLs and to share on social media platforms.

The preprocessing was done in a Colab Jupyter notebook.

Future release(s) will expand the dataset to include words / phrases from other languages that use the English alphabet.

License

Apache-2.0