Creating Multiple AWS EventBridge Rules Using CDKTF and a Config File from S3

When working with AWS EventBridge, it’s common to create multiple rules with different schedules, targets, and configurations.

Initially, I was creating a separate CDKTF function or resource block for each EventBridge rule. This worked, but the code quickly became repetitive and hard to maintain.

So I moved to a config-driven approach, where EventBridge rules are created dynamically based on a JSON configuration file stored in S3.

This post explains the idea, the approach, and why it worked well for my use case.

The Problem

As the number of EventBridge rules increased, I faced a few issues:

  • Repeating the same CDKTF code for every rule
  • Small changes required code updates and redeployments
  • Hard to manage different schedules and targets cleanly
  • Reduced code readability as more rules were added

I wanted a way to define EventBridge rules without writing new infrastructure code every time.

The Idea

Instead of defining each EventBridge rule in CDKTF, I decided to:

  • Store EventBridge configurations in a JSON file
  • Upload that file to Amazon S3
  • Download and read the config file during CDKTF deployment
  • Loop through the config and create EventBridge rules dynamically

With this approach:

  • Adding a new EventBridge rule only requires updating the JSON file and re-running the deployment steps
  • No new CDKTF code is needed for each rule
  • Infrastructure logic remains clean and reusable

High-Level Architecture

Flow:

  1. CDKTF deployment starts
  2. The JSON config file is downloaded from S3 (as sketched after this list)
  3. Config file is parsed
  4. For each entry in the config:
    • An EventBridge rule is created
    • Target resources are attached
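
Here is a minimal sketch of the download-and-parse step (step 2), using the AWS SDK for JavaScript v3. The bucket name, key, and the loadEventConfigs helper are hypothetical placeholders; substitute your own values.

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical bucket and key for the config file; replace with your own
const CONFIG_BUCKET = "my-eventbridge-config-bucket";
const CONFIG_KEY = "eventbridge/rules.json";

// Download the JSON config from S3 and parse it into an array of rule definitions
async function loadEventConfigs(): Promise<any[]> {
    const s3 = new S3Client({});
    const response = await s3.send(
        new GetObjectCommand({ Bucket: CONFIG_BUCKET, Key: CONFIG_KEY })
    );
    const body = await response.Body!.transformToString();
    return JSON.parse(body);
}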

Configuration File Structure (S3)

The JSON file stored in S3 contains all EventBridge definitions.

Each entry defines:

  • Rule name
  • Schedule expression
  • Input
  • Optional metadata

[
    {
        "name": "default2",
        "description": "The default configuration set",
        "values": {
            "cron_schedule": "0 0 1 1 ? 2000",
            "cron_enabled": true,
            "input": {
                "job_id": "2"
            }
        }
    },
    {
        "name": "default3",
        "description": "Another configuration set",
        "values": {
            "cron_schedule": "0 0 1 1 ? 2000",
            "cron_enabled": false,
            "input": {
                "job_id": "3"
            }
        }
    }
]

Sample CDKTF code snippet

import { cloudwatchEventRule, cloudwatchEventTarget } from "@cdktf/provider-aws";

// Shape of each entry in the JSON config file
type configJSON = {
    name: string;
    description: string;
    values: {
        cron_schedule: string;
        cron_enabled: boolean;
        input: object;
    };
};

// eventConfigs holds the parsed array from the S3 config file
for (let i = 0; i < eventConfigs.length; i++) {

    const config: configJSON = eventConfigs[i];

    // Skip entries whose cron is disabled in the config
    if (!config.values.cron_enabled) {
        continue;
    }

    // One EventBridge rule per config entry
    const rule = new cloudwatchEventRule.CloudwatchEventRule(
        this,
        `event-rule-${i}`,
        {
            name: config.name,
            description: config.description,
            scheduleExpression: `cron(${config.values.cron_schedule})`,
            isEnabled: true
        }
    );

    // Attach the target and pass the per-rule input payload
    new cloudwatchEventTarget.CloudwatchEventTarget(
        this,
        `event-target-${i}`,
        {
            rule: rule.name,
            arn: "", // Lambda / Step Function / Batch target ARN
            input: JSON.stringify(config.values.input)
        }
    );
}
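
The config has to be loaded before the stack is synthesized, since CDKTF synth itself is synchronous. Here is one way to wire the pieces together, as a sketch: MyEventStack is a hypothetical stack class whose constructor contains the loop above, loadEventConfigs is the helper from the earlier snippet, and the region is an assumed placeholder.

import { App, TerraformStack } from "cdktf";
import { Construct } from "constructs";
import { provider } from "@cdktf/provider-aws";

// Hypothetical stack class; the EventBridge loop from the snippet above
// lives in its constructor, using the eventConfigs passed in here
class MyEventStack extends TerraformStack {
    constructor(scope: Construct, id: string, eventConfigs: configJSON[]) {
        super(scope, id);
        new provider.AwsProvider(this, "aws", { region: "us-east-1" }); // assumed region
        // ... EventBridge rule/target loop goes here ...
    }
}

async function main() {
    // Load the config from S3 before synthesizing the stack
    const eventConfigs = await loadEventConfigs();
    const app = new App();
    new MyEventStack(app, "eventbridge-rules", eventConfigs);
    app.synth();
}

main();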

Benefits of This Approach

This approach gave me several benefits:

  • ✅ Less CDKTF code
  • ✅ Easier to scale (add 10 or 100 rules easily)
  • ✅ Configuration changes without code changes
  • ✅ Better separation of config and infra logic
  • ✅ Cleaner and more readable CDKTF stack

Things to Keep in Mind

While this works well, a few things are important:

  • Validate the config file before using it (see the sketch after this list)
  • Handle missing or invalid fields safely
  • Control access to the S3 config file
  • Keep versioning enabled on the S3 bucket
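
As a minimal example of that validation step, a type guard like the following (a hypothetical helper, not part of the original stack) can filter out malformed entries before the loop runs:

// Hypothetical runtime check for one config entry; returns false for malformed input
function isValidConfig(entry: any): entry is configJSON {
    return (
        typeof entry?.name === "string" &&
        typeof entry?.description === "string" &&
        typeof entry?.values?.cron_schedule === "string" &&
        typeof entry?.values?.cron_enabled === "boolean" &&
        typeof entry?.values?.input === "object"
    );
}

// Usage: keep only valid entries and fail loudly if anything was dropped
const validConfigs = eventConfigs.filter(isValidConfig);
if (validConfigs.length !== eventConfigs.length) {
    throw new Error("Config file contains invalid entries");
}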

Conclusion

Using a config file from S3 to create multiple EventBridge rules dynamically helped me reduce code duplication and made the infrastructure more flexible.

This pattern works well when:

  • You expect frequent changes in scheduling
  • Multiple similar resources need to be managed
  • You want infra to be configuration-driven

This approach can also be extended to other AWS resources like Lambda, SQS, or Step Functions.

Thanks for reading! If you found this helpful, let’s connect on LinkedIn and continue sharing knowledge and experiences: Lokesh Vangari
