@roserocket/dataloader v0.0.2 • License: ISC

Roserocket Data Loader CLI

Usage

npm install -g @roserocket/dataloader
dataloader list

Prerequisites

  1. An Org App configured within the Rose Rocket UI. The Client ID and Client Secret are required to authenticate dataloader

    Settings > Advanced Settings > Org Apps

  2. User credentials for the target organization. The username and password are required to authenticate dataloader

  3. A recipe loaded to the organization to populate objects

Constraints

  1. Fields that support a list of values must not contain a comma in any single value

    • A single value cannot be of the form Bond, James
  2. Fields that take date values must always be provided in UTC time, following the format YYYY-MM-DDTHH:mm:ssZ

  3. Values that require selecting one or many from a list of options are case-sensitive

  4. During the describe command, the data loader tool will only write user-enterable fields to a template file. Fields of the following types will be excluded:

    • Derived Fields (including FullID fields)
    • System Fields
    • Connection Fields that represent a reverse connection
    • Method Fields
  5. Child data must be loaded before any parent that references it. When loading multiple objects, load leaf nodes first, then any parent nodes. For help determining your load order, use the list --order command
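Constraint 2's date format can be produced and checked with a short Python sketch. This is illustrative only and not part of the tool; dataloader performs its own validation.

```python
from datetime import datetime, timezone

# The format required by the loader: YYYY-MM-DDTHH:mm:ssZ, always in UTC.
DATE_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

def to_loader_date(dt: datetime) -> str:
    """Convert an aware datetime to the UTC string the loader expects."""
    return dt.astimezone(timezone.utc).strftime(DATE_FORMAT)

def is_valid_loader_date(value: str) -> bool:
    """Check that a value matches the expected format exactly."""
    try:
        datetime.strptime(value, DATE_FORMAT)
        return True
    except ValueError:
        return False

print(to_loader_date(datetime(2023, 5, 1, 12, 30, tzinfo=timezone.utc)))
# 2023-05-01T12:30:00Z
```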

Commands

  • list -h

  • describe -h

  • load -h

  • delete -h

Options

Error modes (Load)

  • Lazy (Default) - Load will process records until the first failure, and then quit
  • Eager - Load will process all records while ignoring failures
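The difference between the two modes can be sketched in Python. The process function here is hypothetical, standing in for whatever inserts a single record; this is not the tool's actual implementation.

```python
def run_load(records, process, mode="lazy"):
    """Process records one at a time.

    lazy  - stop at the first failure (the default behavior)
    eager - record failures and keep going
    """
    results = []
    for record in records:
        try:
            results.append(("ok", process(record)))
        except Exception as err:
            results.append(("error", str(err)))
            if mode == "lazy":
                break  # lazy mode quits on the first failure
    return results
```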

Output Files (Describe, Load)

When using describe -o <output file path>, a template file will be generated for the user to populate.

When using load -o <output file path>, the output file will contain a copy of the input file with results appended in an additional column.

  • Successfully inserted rows will have their id appended
  • Rows that failed validation will have a validation message appended
  • Rows that failed to insert due to an API error will have an API error message appended
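A minimal sketch of how such a results column can be appended to a copy of the input file. The column name "result" and the shape of the messages are assumptions for illustration, not the tool's actual output format.

```python
import csv
import io

def append_results(input_csv: str, results: list) -> str:
    """Copy a CSV, appending one result value per data row.

    Each result is an inserted record's id, a validation message,
    or an API error message, matched to rows by position.
    """
    reader = csv.reader(io.StringIO(input_csv))
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    header = next(reader)
    writer.writerow(header + ["result"])  # assumed column name
    for row, result in zip(reader, results):
        writer.writerow(row + [result])
    return out.getvalue()
```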

Run modes (Load, Delete)

  • Regular (Default) - validate and process the data with a single command
  • Dry Run - only validate the data during a command

Verbosity

  • Verbose - Display additional logs
  • Brief (Default) - Standard logging output
  • Quiet - Display minimum log output, hide all except errors and tables

Example: dataloader list -v quiet

How to populate a template file

Running dataloader describe <object> -o <path-to-output-file> will generate a template file to be used to correctly structure data for the load command. The headers in the output file will match the fields present on your object.

CSV Input

Each column header will be printed in table format during the describe command, with an "Expected Format" column outlining the valid data expected for that column. Enter rows of data adhering to the expected format for each column; once validated and loaded, each row will become a record in Rose Rocket.

OData Input

Each column header represents a mapping between your OData field name/path and the Rose Rocket field name. Enter a single row that relates each column header to the value path in the OData source (see example below). The "Expected Format" column output during the describe command still applies to the data present in the OData response, and will be checked during validation.
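The value-path mapping can be sketched in Python, assuming each OData response row is a nested dict. The helper names here are hypothetical, not part of dataloader.

```python
def resolve_path(record: dict, path: str):
    """Follow a dot-notation path (e.g. 'contact_methods.phone') into a
    nested OData record; return None if any segment is missing."""
    value = record
    for segment in path.split("."):
        if not isinstance(value, dict) or segment not in value:
            return None
        value = value[segment]
    return value

def map_record(odata_record: dict, mapping: dict) -> dict:
    """Build a Rose Rocket row from an OData record using a
    {rose_rocket_field: odata_path} mapping."""
    return {field: resolve_path(odata_record, path)
            for field, path in mapping.items()}
```

With the mapping from the OData workflow below (name → customer_name, contact → contact_methods.phone), a record like {"customer_name": "Acme", "contact_methods": {"phone": "555-0100"}} maps to {"name": "Acme", "contact": "555-0100"}.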

Example Workflows

Basic list, describe, populate, load

Find all objects available in your organization, and choose one that you would like to load.

dataloader list

Having chosen the object to load, generate a template file to populate data into.

dataloader describe task -o task.csv

Open the output file and populate rows of data you want to be loaded. Once the file is filled, load it to your organization.

dataloader load task --file task.csv

As the load completes, results will appear in stdout and stderr. After all records are processed they can be viewed within your organization.

Dry run

Validate all records in a file against the object's configuration.

dataloader load task --file task.csv --dry-run

OData

Load records into Rose Rocket from an OData endpoint. A mapping file is required to relate OData fields to the target object's fields; nested objects use dot notation.

dataloader describe customer -o customer.csv

Given the output file, enter the OData field name for each column. Nested fields can be accessed with dot notation.

name,contact
customer_name,contact_methods.phone

Load the data from the OData endpoint, leveraging the mapping file

dataloader load customer --url url://to.odata/query --map customer.csv

Generating Output files

When running dataloader with a large number of records, the stdout and stderr results might be too much to parse. To better understand the result of each record during a load, file outputs can be used.

dataloader load customer --file customer.csv --output load-results.csv

The load-results.csv file will contain an additional column outlining the primary key for inserted records, or a validation message for failed records.

Running locally

Generate an auth token for calling the API using a legacy username and password; the token is stored in the environment variable RR_AUTH_TOKEN.

Optionally pre-set parameters for the authentication script

. ./utils/auth.sh -u <username> -p <password> -i <client_id> -s <client_secret>

Load dependencies

npm install

Link the npm package so it can be run (required only on first run)

npm run first-time-setup

Run the CLI

npm start -- <command> <arguments> <options>

OR

npm run build
dataloader <command> <arguments> <options>