
Punchcard

Punchcard is a TypeScript framework for building cloud applications with the AWS CDK. It unifies infrastructure code with runtime code, meaning you can both declare resources and implement logic within the context of one Node.js application. AWS resources are thought of as generic, type-safe objects: DynamoDB Tables are like a Map<K, V>; SNS Topics, SQS Queues, and Kinesis Streams feel like an Array<T>; and a Lambda Function is akin to a Function<A, B>. Together, they form something like the standard library of a programming language.

Blog Series

If you'd like to learn more about the philosophy behind this project, check out my blog series (WIP) Punchcard: Imagining the future of cloud programming.

Developer Guide

To understand the internals, see the Developer Guide:

  1. Getting Started
  2. Creating Functions
  3. Runtime Dependencies
  4. Shapes: Type-Safe Schemas
  5. Dynamic (and safe) DynamoDB DSL
  6. Stream Processing

Tour

Runtime Code and Dependencies

Creating a Lambda Function is super simple - just create it and implement handle:

new Lambda.Function(stack, 'MyFunction', {
  handle: async (event) => {
    console.log('hello world');
  }
});
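The snippets in this tour omit imports for brevity. Assuming punchcard's top-level exports (the exact names and module paths may vary by version), a setup along these lines is implied:

// Assumed imports for the snippets below; names follow punchcard's
// top-level exports, but exact paths may differ by version.
import { Lambda, SNS, SQS, DynamoDB, Kinesis, Firehose, Glue } from 'punchcard';
import { array, integer, string, struct, timestamp } from 'punchcard';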

To contact other services in your Function, data structures such as SNS Topics, SQS Queues, DynamoDB Tables, etc. are declared as a Dependency.

Declaring a dependency will create the required IAM policies for your Function's IAM Role, add environment variables for details such as the Topic's ARN, and automatically create a client for accessing the Construct. The result is that your handle function is now passed a topic instance which you can interact with:

new Lambda.Function(stack, 'MyFunction', {
  depends: topic,
  handle: async (event, topic) => {
    await topic.publish({
      key: 'some key',
      count: 1,
      timestamp: new Date()
    });
  }
});

Furthermore, the topic client's interface is higher-level than what would normally be expected when using the aws-sdk, and it's also type-safe: the argument to the publish method is not an opaque string or Buffer; it is an object with keys and rich types such as Date. This is because data structures in punchcard, such as Topic, Queue, Stream, etc., are generic with statically declared types (like an Array<T>):

const topic = new SNS.Topic(stack, 'Topic', {
  /**
   * Message is a JSON Object with properties: `key`, `count` and `timestamp`.
   */
  shape: struct({
    key: string(),
    count: integer(),
    timestamp
  })
});

This Topic is now of type:

Topic<{
  key: string;
  count: number;
  timestamp: Date;
}>

Type-Safe DynamoDB Expressions

Punchcard's type-safety becomes even more evident when using DynamoDB. To demonstrate, let's create a DynamoDB Table and use it in a Function:

const table = new DynamoDB.Table(stack, 'my-table', {
  partitionKey: 'id',
  attributes: {
    id: string(),
    count: integer({
      minimum: 0
    })
  },
  billingMode: BillingMode.PAY_PER_REQUEST
});

Now, when getting an item from DynamoDB, there is no need to use AttributeValues such as { S: 'my string' }, like you would when using the low-level aws-sdk. You simply use ordinary JavaScript types:

const item = await table.get({
  id: 'state'
});

The interface is statically typed and derived from the definition of the Table - we specified the partitionKey as the id field, which has type string, so the object passed to the get method must match that shape.
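For example, a mismatched key is caught at compile time rather than at runtime (a sketch; the commented-out call illustrates TypeScript's compile-time error, not a punchcard runtime check):

// `id` was declared as string(), so a mismatched key fails to compile:
const ok = await table.get({ id: 'state' });  // OK
// const bad = await table.get({ id: 1 });    // compile error: number is not assignable to string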

PutItem and UpdateItem have similarly high-level and statically checked interfaces. More interestingly, condition and update expressions are built with helpers derived (again) from the table definition:

// put an item if it doesn't already exist
await table.put({
  item: {
    id: 'state',
    count: 1
  },
  if: item => DynamoDB.attribute_not_exists(item.id)
});

// increment the count property by 1
await table.update({
  key: {
    id: 'state'
  },
  actions: item => [
    item.count.increment(1)
  ]
});

If you specified a sortKey:

const table = new DynamoDB.Table(stack, 'my-table', {
  partitionKey: 'id',
  sortKey: 'count', // specify a sortKey
  // ...
});

Then you can also build type-safe query expressions:

await table.query({
  key: {
    id: 'id',
    count: DynamoDB.greaterThan(1)
  },
});

Stream Processing

Punchcard has the concept of Stream data structures, which should feel similar to in-memory streams, arrays, and lists thanks to their chainable API, including operations such as map, flatMap, filter and collect. Data structures that implement Stream are: SNS.Topic, SQS.Queue, Kinesis.Stream, Firehose.DeliveryStream and Glue.Table.

For example, given an SNS Topic:

const topic = new SNS.Topic(stack, 'Topic', {
  shape: struct({
    key: string(),
    count: integer(),
    timestamp
  })
});

You can attach a new Lambda Function to process each notification:

topic.notifications().forEach(stack, 'ForEachNotification', {
  handle: async (notification) => {
    console.log(`notification delayed by ${new Date().getTime() - notification.timestamp.getTime()}ms`);
  }
});
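Operations also chain. As a sketch (assuming filter accepts a handle like map does, shown further below), you could drop zero-count notifications before processing:

topic.notifications()
  .filter({
    // keep only notifications with a positive count
    // (the handler signature here is an assumption)
    handle: async (notification) => notification.count > 0
  })
  .forEach(stack, 'ForEachPositiveNotification', {
    handle: async (notification) => {
      console.log(`key=${notification.key}, count=${notification.count}`);
    }
  });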

Or, create a new SQS Queue and subscribe notifications to it; messages in the Queue are of the same type as the notifications in the Topic:

const queue = topic.toSQSQueue(stack, 'MyNewQueue');

We can then, perhaps, map over each message in the Queue and collect the results into a new AWS Kinesis Stream:

const stream = queue.messages()
  .map({
    handle: async (message, e) => {
      return {
        ...message,
        tags: ['some', 'tags'],
      };
    }
  })
  .toKinesisStream(stack, 'Stream', {
    // partition values across shards by the 'key' field
    partitionBy: value => value.key,

    // type of the data in the stream
    type: struct({
      key: string(),
      count: integer(),
      tags: array(string()),
      timestamp
    })
  });

With data in a Stream, we might want to write out all records to a new S3 Bucket by attaching a new Firehose DeliveryStream to it:

const s3DeliveryStream = stream.toFirehoseDeliveryStream(stack, 'ToS3');

With data now flowing to S3, let's partition and catalog it in a Glue.Table (backed by a new S3.Bucket) so we can easily query it with AWS Athena, AWS EMR and AWS Glue:

import glue = require('@aws-cdk/aws-glue');

const database = stack.map(stack => new glue.Database(stack, 'Database', {
  databaseName: 'my_database'
}));
s3DeliveryStream.objects().toGlueTable(stack, 'ToGlue', {
  database,
  tableName: 'my_table',
  columns: stream.type.shape,
  partition: {
    // Glue Table partition keys: minutely using the timestamp field
    keys: {
      year: integer(),
      month: integer(),
      day: integer(),
      hour: integer(),
      minute: integer()
    },
    get: record => ({
      // define the mapping of a record to its Glue Table partition keys
      year: record.timestamp.getUTCFullYear(),
      month: record.timestamp.getUTCMonth() + 1, // getUTCMonth() is zero-based (January = 0)
      day: record.timestamp.getUTCDate(),
      hour: record.timestamp.getUTCHours(),
      minute: record.timestamp.getUTCMinutes(),
    })
  }
});
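Under this partition scheme, objects land under Hive-style, minutely S3 prefixes; illustratively:

year=2019/month=08/day=01/hour=12/minute=30/...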

Example Stacks

See the punchcard repository for complete example stacks.

License

This library is licensed under the Apache 2.0 License.
