Announcing TriggerMesh 1.26

Jonathan Michaux

Jul 6, 2023

The latest release of TriggerMesh brings further improvements to authentication for AWS sources and targets, more detailed events from the Google Cloud Storage source, and a handful of other performance and customization improvements.

Heads-up: these updates include some potentially breaking changes for those using IAM role-based authentication with AWS, and for those filtering against the Google Cloud Storage source event type. We apologize for any inconvenience caused by these changes, but we believe they are necessary and that the migration is relatively simple. If you're stuck, please reach out to us on Slack so we can help.

Customize the service account used for AWS IAM role authentication (API change)

When TriggerMesh AWS connectors authenticate using IAM roles, the service account used by the connector is associated with an IAM role and a Trust Relationship. Until now, TriggerMesh generated a new service account for every AWS component created, which meant users had to repeat the process of creating IAM roles and Trust Relationships for each component. This is quite tedious, and users have asked whether a single shared service account could be used for multiple TriggerMesh connectors. We also received requests to be able to customize the name of this service account according to whatever naming conventions your organization uses.

From 1.26 onwards, a new serviceAccount parameter is available in the auth spec of the AWS connectors.

Warning: this required us to evolve the structure of the auth parameters, meaning that a migration is necessary in order to use the new parameters. 

The spec.auth.iamRole parameter has been deprecated in favor of a new spec.auth.iam object with roleArn and serviceAccount attributes. If you install 1.26 and try to create an AWS component with the old version of the spec, it will be rejected; existing running components, however, will continue to work. To update them, you'll need to make a trivial change to move to the new spec: roleArn expects the value that was previously set in the iamRole parameter, and serviceAccount is a new Kubernetes service account name parameter that you can use to create a new service account or re-use an existing one of the same name.

The example below shows the AWS auth spec before 1.26 (AWSSQSSource and the ARN values are used here as illustrative placeholders):

apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: foo
spec:
  arn: arn:aws:sqs:us-east-1:123456789012:my-queue
  auth:
    iamRole: arn:aws:iam::123456789012:role/dev-role

In the above example, based on TriggerMesh 1.25, the controller creates a service account with a preconfigured name for the source.

From 1.26 onwards, the spec has evolved as illustrated in the examples below. The first shows the same spec as above with the updated format, demonstrating how easily you can migrate from one to the other.

apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: foo
spec:
  arn: arn:aws:sqs:us-east-1:123456789012:my-queue
  auth:
    iam:
      roleArn: arn:aws:iam::123456789012:role/dev-role

In the next example, we're using the new serviceAccount parameter. The service account aws-source-sa will be created by the controller. If a service account with this name already exists, the controller checks its labels: if the managed-by label has the value triggermesh-controller, the service account's owners list is updated with the current object; if the managed-by label does not exist or has a different value, reconciliation is skipped. This service account is then assigned to the adapter deployment.

The service account's eks.amazonaws.com/role-arn annotation can only be overwritten by the controller if the service account is managed by the TriggerMesh controller and has a single owner; otherwise the annotation is ignored to avoid reconciliation conflicts.

apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: foo
spec:
  arn: arn:aws:sqs:us-east-1:123456789012:my-queue
  auth:
    iam:
      roleArn: arn:aws:iam::123456789012:role/dev-role
      serviceAccount: aws-source-sa
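
Based on the behavior described above, a service account created and managed by the controller would look roughly like the following sketch (owner references are set by the controller and omitted here):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-source-sa
  labels:
    managed-by: triggermesh-controller
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/dev-role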

Using temporary credentials with AWS IAM Roles Anywhere

AWS IAM Roles Anywhere lets you authenticate to AWS using IAM roles from workloads that are running outside of AWS. It works by generating temporary credentials for the external workloads and digital certificates for the initial handshake.

To use this feature from a cluster with TriggerMesh running outside of AWS, you can use the combination of the accessKeyID, secretAccessKey, and sessionToken authentication parameters on any AWS source or target connector.

This set of temporary credentials can be requested from AWS, for example by using the AWS CLI as follows:

$ aws sts get-session-token
{
    "Credentials": {
        "AccessKeyId": "redacted",
        "SecretAccessKey": "redacted",
        "SessionToken": "redacted",
        "Expiration": "2023-06-07T21:25:54+00:00"
    }
}
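
These values can be stored in a Kubernetes secret, for example with kubectl (assuming the secret name aws used in the spec below; the <...> placeholders stand for the redacted values above):

$ kubectl create secret generic aws \
    --from-literal=AWS_ACCESS_KEY_ID=<AccessKeyId> \
    --from-literal=AWS_SECRET_ACCESS_KEY=<SecretAccessKey> \
    --from-literal=AWS_SESSION_TOKEN=<SessionToken>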

The AccessKeyId, SecretAccessKey, and SessionToken values from the response can be used in the TriggerMesh connector auth spec to gain temporary access to the AWS API, shown here in an example using the AWS SQS target, in which the parameters are stored as Kubernetes secrets:

apiVersion: targets.triggermesh.io/v1alpha1
kind: AWSSQSTarget
metadata:
  name: triggermesh-aws-sqs
spec:
  arn: arn:aws:sqs:us-east-1:123456789012:queue_name
  auth:
    credentials:
      accessKeyID:
        valueFromSecret:
          name: aws
          key: AWS_ACCESS_KEY_ID
      secretAccessKey:
        valueFromSecret:
          name: aws
          key: AWS_SECRET_ACCESS_KEY
      sessionToken:
        valueFromSecret:
          name: aws
          key: AWS_SESSION_TOKEN

Improved Google Cloud Storage events (API change)

The Google Cloud Storage source has been improved in order to produce more detailed event types that indicate exactly what happened to the object in question. Previously, all events produced were of type com.google.cloud.storage.notification. Now, the four produced types are as follows:

  • com.google.cloud.storage.objectfinalize: an object was successfully created
  • com.google.cloud.storage.objectmetadataupdate: an object’s metadata was changed
  • com.google.cloud.storage.objectdelete: an object was permanently deleted
  • com.google.cloud.storage.objectarchive: an object was archived; this type is specific to buckets that have object versioning activated

If you had written filters or any other processing logic against the original com.google.cloud.storage.notification event type, you will need to update that logic to account for the new event types. If you want to continue processing all Google Cloud Storage event types as before, you can use a prefix filter or combine filter expressions on the Trigger, as sketched below.
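
Here is a minimal sketch of a Trigger that matches all four event types with a single prefix filter (the broker and target names are placeholders; double-check the filter dialect against the Trigger documentation for your version):

apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  name: gcs-all-events
spec:
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: demo
  filters:
  - prefix:
      type: com.google.cloud.storage.
  target:
    ref:
      apiVersion: v1
      kind: Service
      name: display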

To learn more about GCS bucket notifications, head to the related section of the Google Cloud documentation.

And more!

Customize the DataDog Target 'site'

DataDog has multiple sites (aka regions) for their managed service, and you can now specify which site you want to use from the TriggerMesh DataDog target with the new optional site parameter. site defaults to datadoghq.com. Visit Getting Started with Datadog Sites for more information. Big thanks to Adrien for this contribution 🤗. 
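
As a sketch, the new parameter could be used as follows (the DatadogTarget kind and apiKey layout follow the existing target spec; datadoghq.eu is just an example site):

apiVersion: targets.triggermesh.io/v1alpha1
kind: DatadogTarget
metadata:
  name: triggermesh-datadog
spec:
  apiKey:
    secretKeyRef:
      name: datadog
      key: apiKey
  metricPrefix: triggermesh
  site: datadoghq.eu  # optional, defaults to datadoghq.com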

Improved performance of the Azure Service Bus source

The TriggerMesh Azure Service Bus source receives events in batches from Azure. In order to improve the rate at which events are then propagated to a sink such as the TriggerMesh broker, we have increased the concurrency with which these batches are forwarded on to their destination. We've also surfaced a new maxConcurrent parameter that lets you tweak this concurrency level; it defaults to 10. A sketch of how this might look is shown below.
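
This sketch assumes maxConcurrent sits at the top level of the source spec; the queue ID, auth, and sink values are illustrative placeholders:

apiVersion: sources.triggermesh.io/v1alpha1
kind: AzureServiceBusQueueSource
metadata:
  name: my-servicebus-source
spec:
  queueID: /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/<ns>/queues/<queue>
  maxConcurrent: 20  # defaults to 10
  auth:
    sasToken:
      connectionString:
        valueFromSecret:
          name: azureservicebus
          key: connectionString
  sink:
    ref:
      apiVersion: eventing.triggermesh.io/v1alpha1
      kind: RedisBroker
      name: demo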

New tmctl options

tmctl now lets you dump your local configuration as pure Kubernetes deployments and services. This means that for simple use cases or testing, you can deploy what you've built locally onto Kubernetes with zero dependencies: you don't need to install Knative or TriggerMesh on the cluster. This isn't recommended for large-scale production deployments. It works like this: tmctl dump -p kubernetes-generic.
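
For example, assuming the dumped manifests are written to stdout and your kubeconfig points at the target cluster, you could pipe them straight into kubectl:

$ tmctl dump -p kubernetes-generic | kubectl apply -f -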

We've also added a way to send batches of events into a TriggerMesh broker thanks to a new parameter for the send-event command: tmctl send-event --file mytestevents.json

Lastly, the tmctl delete command now requires that you specify the type of the object you want to delete. For example, where you could previously write tmctl delete my-http-target, you now write tmctl delete target my-http-target. The goal behind this change is to provide better consistency across all tmctl operations and make the CLI easier to learn and use.

Head over to the tmctl reference documentation for an overview of all the commands available!
