The purpose of this Lambda function is to continuously generate lightweight “heartbeat” logs and store them in one or more Amazon S3 buckets.
This ensures that monitoring systems such as ArcSight SmartConnectors always detect recent activity in the bucket, allowing teams to distinguish between:
Connector issues (no data reaching ArcSight), and
Source system inactivity (product temporarily not generating logs).
The Lambda periodically writes a small JSON object to each configured S3 target, simulating a regular log entry. These files are then read and forwarded by existing log collectors.
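The overall behavior can be sketched as follows. This is an illustrative outline only, not the actual source file provided separately; it assumes the TARGETS, PRODUCT_NAME, and STORAGE_CLASS environment variables described later in this guide.

```python
import json
import os
import time

def build_heartbeat(product_name="heartbeat"):
    """Build the small JSON payload written to each target (illustrative)."""
    return {
        "type": "heartbeat",
        "product": product_name,
        "timestamp": int(time.time()),
        "message": "heartbeat: source pipeline alive",
    }

def lambda_handler(event, context):
    """Write one heartbeat object to every configured S3 target."""
    import boto3  # available by default in the Lambda Python runtime

    s3 = boto3.client("s3")
    targets = json.loads(os.environ["TARGETS"])
    product = os.environ.get("PRODUCT_NAME", "heartbeat")
    body = json.dumps(build_heartbeat(product)).encode("utf-8")

    for t in targets:
        # e.g. Avanan-logs/heartbeat/1700000000.json
        key = f"{t.get('prefix', '')}heartbeat/{int(time.time())}.json"
        s3.put_object(
            Bucket=t["bucket"],
            Key=key,
            Body=body,
            ContentType="application/json",
            StorageClass=os.environ.get("STORAGE_CLASS", "STANDARD"),
        )
    return {"ok": True, "targets": len(targets)}
```

The small, timestamped payload keeps storage costs negligible while still giving the connector a fresh object to ingest on every run.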
Components:
AWS Lambda Function: Generates and uploads heartbeat files at regular intervals (for example, every 5 minutes).
Amazon S3 Bucket(s): Destination for the heartbeat log files. Each target consists of a bucket and an optional folder/prefix.
Amazon EventBridge (CloudWatch Events): Triggers the Lambda on a recurring schedule.
IAM Role: Grants the Lambda permission to write to S3 and to publish logs to CloudWatch.
Before starting:
Access to the AWS Management Console with permission to create Lambda functions, IAM roles, and EventBridge rules.
One or more S3 buckets where logs are stored and monitored.
A copy of the Lambda source code file (provided separately).
Open the IAM service in the AWS Console.
Choose Roles → Create role.
Under Trusted entity type, select AWS service, then choose Lambda as the use case.
Click Next.
Attach the following managed policy:
AWSLambdaBasicExecutionRole (provides CloudWatch Logs permissions)
Choose Next again.
On the Add permissions page, select Create inline policy, and define a policy that allows the function to write to your target buckets, for example:
Service: S3
Action: PutObject
Resources: each of your buckets, using the pattern arn:aws:s3:::bucket-name/*
Review and create the role (e.g., name it lambda-heartbeat-writer-role).
If your S3 buckets enforce KMS encryption, also add KMS permissions (Encrypt, GenerateDataKey*) for the relevant key.
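As a sketch, the inline policy might look like the following. The bucket names are taken from the examples in this guide; adjust the resource ARNs to your own buckets, and include the second statement only if your buckets enforce KMS encryption (REGION, ACCOUNT_ID, and KEY_ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::scapitalvc/*",
        "arn:aws:s3:::client2-logs/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:GenerateDataKey*"],
      "Resource": "arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID"
    }
  ]
}
```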
In the AWS Console, go to Lambda → Create function.
Choose Author from scratch.
Function name: s3-heartbeat-writer (or similar).
Runtime: Python 3.11.
Execution role: select Use an existing role and choose the IAM role created earlier.
Click Create function.
Under the Code tab, upload the Python source file provided separately.
Click Deploy once uploaded.
In the Lambda configuration view, open Environment variables → Edit.
Add the following key-value pair:
Key: TARGETS
Value:
[{"bucket":"scapitalvc","prefix":"Avanan-logs/"}]
This format supports multiple targets. To expand in the future, simply append more entries, for example:
[
{"bucket":"scapitalvc","prefix":"Avanan-logs/"},
{"bucket":"client2-logs","prefix":"Office365/"},
{"bucket":"backup-logs","prefix":"Avanan-backup/"}
]
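Before pasting the value, you can sanity-check locally that the JSON parses and each entry has the expected shape (a quick check, not part of the deployment):

```python
import json

# The same value you will paste into the TARGETS environment variable.
targets_value = '[{"bucket":"scapitalvc","prefix":"Avanan-logs/"}]'

targets = json.loads(targets_value)  # raises ValueError on malformed JSON
for t in targets:
    assert "bucket" in t, "each target needs a bucket"
    # prefix is optional; treat a missing prefix as the bucket root
    print(t["bucket"], t.get("prefix", ""))
```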
(Optional) You can also define other environment variables such as:
PRODUCT_NAME – a friendly identifier for your log source.
TTL_SECONDS – how long the object should be considered valid (for internal tracking).
STORAGE_CLASS – e.g., STANDARD or STANDARD_IA.
Click Save after adding the variables.
In the function’s Triggers tab, click Add trigger.
Choose EventBridge (CloudWatch Events).
Create a new rule:
Rule name: heartbeat-every-5m
Schedule expression: cron(0/5 * * * ? *)
(runs every 5 minutes; the rate expression rate(5 minutes) is equivalent)
Click Add.
EventBridge now invokes the Lambda automatically on the chosen interval.
From the Lambda console, click Test and run the function once manually.
Confirm a successful return value (ok: true).
In the S3 bucket, navigate to your configured prefix (e.g., Avanan-logs/heartbeat/...) and verify that a new JSON file was created.
Confirm that your monitoring connector ingests and forwards this heartbeat log as expected.
To avoid long-term storage buildup:
In the S3 bucket → Management → Lifecycle rules, create a new rule.
Filter by prefix Avanan-logs/heartbeat/.
Set the objects to expire after 7–30 days depending on your retention policy.
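If you prefer to script this instead of using the console, the same lifecycle rule can be applied with boto3. This is a sketch: the bucket name and prefix come from the examples above, the 14-day expiration is an arbitrary value within the suggested 7–30 day range, and the AWS call is kept inside a function so the snippet can be inspected without credentials.

```python
# Lifecycle rule expiring heartbeat objects after 14 days (illustrative;
# adjust the prefix and Days to match your retention policy).
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-heartbeats",
            "Status": "Enabled",
            "Filter": {"Prefix": "Avanan-logs/heartbeat/"},
            "Expiration": {"Days": 14},
        }
    ]
}

def apply_lifecycle(bucket="scapitalvc"):
    """Apply the rule to the given bucket (requires AWS credentials)."""
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config
    )
```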
Lambda logs are visible in CloudWatch Logs, under /aws/lambda/<function-name>.
Set up a simple CloudWatch Alarm to notify if the function starts failing or if it stops producing new objects.
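One way to cover both failure modes is an alarm on the function's built-in Errors metric with missing data treated as breaching, so the alarm also fires if the function stops being invoked. The values below are illustrative, and the AWS call is wrapped in a function so the snippet can be inspected without credentials; replace the SNS topic ARN with your own notification target.

```python
# Alarm that fires when the function reports any error in a 5-minute
# period, or stops emitting metrics entirely (illustrative values).
alarm_params = {
    "AlarmName": "s3-heartbeat-writer-errors",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "s3-heartbeat-writer"}],
    "Statistic": "Sum",
    "Period": 300,
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "breaching",  # no invocations at all also alerts
}

def create_alarm(sns_topic_arn):
    """Create the alarm and route notifications to an SNS topic."""
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(AlarmActions=[sns_topic_arn], **alarm_params)
```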
Updating the heartbeat targets requires only editing the TARGETS environment variable; no redeployment is needed.
This setup ensures continuous visibility into your log pipeline’s health:
Lambda function: Periodically writes a small “I’m alive” heartbeat to S3.
S3 bucket(s): Receives heartbeats in the same path as other logs, ensuring the connector always sees recent data.
EventBridge rule: Automates the schedule (e.g., every 5 minutes).
IAM role: Provides secure, least-privilege access for the Lambda.
With this in place, your SOC or monitoring platform can clearly differentiate between connector downtime and normal log silence from the product source, eliminating unnecessary alerts.