@kavach/logger v1.0.0-next.25 • Published 1 month ago
License: MIT • Repository: github

@kavach/logger

A minimalist, extensible JSON logger. It offloads the actual writing of log entries to pluggable writers.

The logger adds the following attributes to the logged data:

  • logged_at: timestamp of the log entry as an ISO 8601 string
  • running_on: where the logger was invoked (server or browser)
  • level: logging level (error|warn|info|debug|trace)
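
To make the merge concrete, here is a minimal sketch of how such an entry could be assembled. This is illustrative only, not the package's actual implementation; the `entry` helper and `RUNNING_ON` constant are assumptions.

```javascript
// Illustrative sketch only; @kavach/logger's internals may differ.
// Detect the runtime: no global `window` implies a server environment.
const RUNNING_ON = typeof window === 'undefined' ? 'server' : 'browser'

// Build a log entry by merging the base attributes with the input.
function entry(level, input) {
  // Plain strings are wrapped into a { message } object first.
  const data = typeof input === 'string' ? { message: input } : input
  return {
    level,
    running_on: RUNNING_ON,
    logged_at: new Date().toISOString(),
    ...data
  }
}
```

With this sketch, `entry('info', 'foo')` yields an object with exactly the four attributes shown in the examples below.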

Logged Data

The logged data is expected to be an object. Its attributes are merged with the base attributes above before the result is sent to the writer.

Below are some examples, assuming the logger was called on the server at 08:00 UTC on 2022-11-05. The logger performs no validation on the data structure it receives; it is up to the developer to ensure that the LogWriter can handle the data appropriately.

String message input

logger.info('foo')

This will be sent to the writer as

{
  "level": "info",
  "running_on": "server",
  "logged_at": "2022-11-05T08:00:00.000Z",
  "message": "foo"
}

Object as input

logger.info({ message: 'foo' })

This will be sent to the writer as

{
  "level": "info",
  "running_on": "server",
  "logged_at": "2022-11-05T08:00:00.000Z",
  "message": "foo"
}

Object as input (including nested detail)

logger.info({ message: 'foo', data: { path: 'bar' } })

This will be sent to the writer as

{
  "level": "info",
  "running_on": "server",
  "logged_at": "2022-11-05T08:00:00.000Z",
  "message": "foo",
  "data": { "path": "bar" }
}
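
Since every entry reaches the writer as a plain object, a minimal console writer could look like the sketch below. The `write` method name is an assumption here, not the package's documented writer interface.

```javascript
// Hypothetical writer sketch; the real writer contract may differ.
const consoleWriter = {
  write(entry) {
    // Serialize the merged entry as a single JSON line.
    const line = JSON.stringify(entry)
    console.log(line)
    return line
  }
}

consoleWriter.write({
  level: 'info',
  running_on: 'server',
  logged_at: '2022-11-05T08:00:00.000Z',
  message: 'foo'
})
```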

Table structure

Depending on how you intend to send data to the logger, create the appropriate data store.

For example, with the "supabase" writer you would need a table having the structure below:

create table if not exists public.logs(
  id                       uuid default uuid_generate_v4()
, level                    varchar
, running_on               varchar
, logged_at                timestamp with time zone
, message                  varchar
, data                     jsonb
, written_at               timestamp with time zone default now()
);

If you expect your data to contain additional attributes at the same level as message, include matching columns to capture those attributes as well.
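
For reference, the nested-detail example above would map onto this table roughly as the following row. This is illustrative SQL only, assuming the writer performs a plain insert; the actual "supabase" writer may batch or transform entries.

```sql
-- Illustrative insert; column list matches the table definition above.
insert into public.logs (level, running_on, logged_at, message, data)
values ('info', 'server', '2022-11-05T08:00:00.000Z', 'foo', '{"path": "bar"}');
```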
