Prerequisites
A CloudWatch log group exists from which the data will be exported.
An S3 bucket exists into which the data will be exported.
Use Case 1
The configuration below covers the use case where the CloudWatch log group and the S3 bucket are in the same account:
Configuration
The first step is to create the Lambda function that houses the source code for reading CloudWatch log events and storing them in our S3 bucket.
Search for the Lambda service in your AWS account, navigate to Functions, and select Create function or choose an existing function.
In Basic Information, we need to provide:
Function name
Runtime (Node.js 20.x)
Instruction set Architecture (x86_64 default)
Make sure your newly created execution role (or the existing role) has the following policy:
You can check the policy under Lambda > Configuration > Permissions > click on the Role name.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:GetLogEvents", "logs:DescribeLogStreams", "logs:CreateExportTask", "logs:FilterLogEvents" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::<S3_BUCKET_NAME>", "arn:aws:s3:::<S3_BUCKET_NAME>/*" ] } ] }
Replace `<S3_BUCKET_NAME>` with your actual bucket name.
In the Lambda function, navigate to Configuration > Environment variables and click Edit.
Enter the keys below exactly as shown; they are used in the Lambda function code, and any change to a key name will directly break the function.
| Key | Value |
|---|---|
| LOGGROUP_NAME | `<CloudWatch log group name>` |
| MY_AWS_REGION | `<AWS Region>` |
| PREFIX | `<S3 Prefix>` |
| S3_BUCKET_NAME | `<S3 bucket name>` |
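You do not need to write any function code yourself (the zip below contains it), but as an illustration of how these variables are typically consumed, here is a minimal hypothetical sketch of such a handler using AWS SDK v3. The bundled code may differ; the 2-minute window, the key naming, and the omission of pagination are assumptions made for brevity.

```javascript
// Hypothetical sketch only -- the actual bundled function code may differ.
const {
  CloudWatchLogsClient,
  FilterLogEventsCommand,
} = require("@aws-sdk/client-cloudwatch-logs");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const region = process.env.MY_AWS_REGION;
const logs = new CloudWatchLogsClient({ region });
const s3 = new S3Client({ region });

exports.handler = async () => {
  const now = Date.now();

  // Pull the last 2 minutes of events, matching the 2-minute EventBridge rate.
  // (A production version would also follow nextToken to paginate.)
  const { events = [] } = await logs.send(
    new FilterLogEventsCommand({
      logGroupName: process.env.LOGGROUP_NAME,
      startTime: now - 2 * 60 * 1000,
      endTime: now,
    })
  );
  if (events.length === 0) return;

  // Write the batch to S3 under the configured prefix.
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: `${process.env.PREFIX}/logs-${now}.json`,
      Body: JSON.stringify(events),
    })
  );
};
```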
Download the following zip file. In the Code tab, choose Upload from > .zip file, upload the zip, and Save. Note that after uploading you might not be able to view the source code inline.
When using Node.js 20, the function has some `aws-sdk` dependencies; these are handled in the zip along with the function code.
If you are creating a new function and an EventBridge rule is not already attached as a trigger: click Add trigger and choose EventBridge.
In Rule, select Create new rule
Provide the Rule name
State the Rule description.
The schedule expression acts as a cron-style schedule that automatically triggers the event whenever the expression matches. We will set a 2-minute rate, which invokes the Lambda function every 2 minutes; you can choose whatever interval your organization's policy requires.
Syntax: `rate(value unit)`
Valid values for unit: minute | minutes | hour | hours | day | days
For example, `rate(2 minutes)` gives the 2-minute interval used here.
Make sure your S3 bucket has the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<LAMBDA_EXECUTION_IAM_ROLE_ARN>"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<S3_BUCKET_NAME>",
        "arn:aws:s3:::<S3_BUCKET_NAME>/*"
      ]
    }
  ]
}
```
Replace `<S3_BUCKET_NAME>` and `<LAMBDA_EXECUTION_IAM_ROLE_ARN>` with your actual values.
You can get the `<LAMBDA_EXECUTION_IAM_ROLE_ARN>` from Lambda > Configuration > Permissions > click on the Role name > copy the ARN.
After completing these configurations, data will be exported from CloudWatch to S3.
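If you want to confirm the export is working, a quick verification sketch (again AWS SDK v3; the region, bucket, and prefix placeholders match the environment variables above) could list the most recently written keys:

```javascript
// Quick verification sketch: list objects exported under the prefix.
const { S3Client, ListObjectsV2Command } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "<AWS Region>" });

async function main() {
  const { Contents = [] } = await s3.send(
    new ListObjectsV2Command({
      Bucket: "<S3 bucket name>",
      Prefix: "<S3 Prefix>",
      MaxKeys: 10,
    })
  );
  // Each entry shows when the object was written and under which key.
  for (const obj of Contents) {
    console.log(obj.LastModified, obj.Key);
  }
}

main().catch(console.error);
```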
Use Case 2
The configuration below covers the use case where the destination S3 bucket is in a different account:
Configuration
The first step is to create the Lambda function that houses the source code for reading CloudWatch log events and storing them in our S3 bucket.
Search for the Lambda service in your AWS account, navigate to Functions, and select Create function or choose an existing function.
In Basic Information, we need to provide:
Function name
Runtime (Node.js 20.x)
Instruction set Architecture (x86_64 default)
Set Up the Role in the Target Account
Create a role in the target account that your Lambda function will assume.
Navigate to AWS services in the account that owns the destination S3 bucket.
Go to Services > IAM > Roles.
Click Create role (for future reference in this document, we will call this role CrossAcS3Write).
Select Custom trust policy as the trusted entity.
Provide the below policy under Custom trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<Lambda_Execution_Role_ARN>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
You can find the `<Lambda_Execution_Role_ARN>` in the source account where you created the Lambda: Lambda > Configuration > Permissions > click on the Role name. Copy the role ARN displayed and use it in the trust policy above.
Click Next
In Add permissions, do not add any permissions for now; click Next.
In Role details, provide the name (CrossAcS3Write) and a description, then click Create role. (CrossAcS3Write is just a reference name for this document; you can provide any name.)
Open the newly created role to grant permissions to it.
Go to Permissions > Add permissions and attach the following permissions policy (note that s3:ListBucket applies to the bucket itself, so both the bucket ARN and the object ARN are listed, consistent with the other policies in this document):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<TARGET_BUCKET_NAME>",
        "arn:aws:s3:::<TARGET_BUCKET_NAME>/*"
      ]
    }
  ]
}
```
Replace `<TARGET_BUCKET_NAME>` with the name of the S3 bucket in the target account.
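Optionally, you can sanity-check the trust relationship before deploying anything. The sketch below (AWS SDK v3; the role ARN placeholder is the one from this section) attempts the assume-role call. Note it only succeeds when run with credentials for the Lambda execution role, since that is the only principal the trust policy allows:

```javascript
// Sanity check: can the Lambda execution role assume CrossAcS3Write?
// Run this with credentials for the Lambda execution role itself,
// since that is the only principal the trust policy allows.
const { STSClient, AssumeRoleCommand } = require("@aws-sdk/client-sts");

const sts = new STSClient({ region: "<AWS Region>" });

async function main() {
  const { Credentials } = await sts.send(
    new AssumeRoleCommand({
      RoleArn: "<ROLE_ARN_CrossAcS3Write>",
      RoleSessionName: "cross-account-check",
    })
  );
  // An expiration time confirms temporary credentials were issued.
  console.log("Assumed role; credentials expire at", Credentials.Expiration);
}

main().catch(console.error);
```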
Set up target S3 bucket policy
Navigate to the target S3 bucket
Open the bucket
Navigate to Permissions > Bucket policy
Provide the policy below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<ROLE_ARN_CrossAcS3Write>"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<TARGET_BUCKET_NAME>",
        "arn:aws:s3:::<TARGET_BUCKET_NAME>/*"
      ]
    }
  ]
}
```
`<ROLE_ARN_CrossAcS3Write>` is the ARN of the role we created in the previous steps. You can copy it from Target Account > IAM > Roles > open the newly created CrossAcS3Write role > copy ARN.
Replace `<TARGET_BUCKET_NAME>` with the name of the S3 bucket in the target account.
Set up the Lambda execution role policy
In the source account where the Lambda is created, make sure your newly created execution role (or the existing role) has the following policy:
You can check the policy under Lambda > Configuration > Permissions > click on the Role name.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:GetLogEvents", "logs:DescribeLogStreams", "logs:CreateExportTask", "logs:FilterLogEvents" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::<TARGET_BUCKET_NAME>", "arn:aws:s3:::<TARGET_BUCKET_NAME>/*" ] }, { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "<ROLE_ARN_CrossAcS3Write>" } ] }
`<ROLE_ARN_CrossAcS3Write>` is the role we created in the previous steps in the target account. You can copy the ARN from Target Account > IAM > Roles > open the newly created CrossAcS3Write role > copy ARN.
Replace `<TARGET_BUCKET_NAME>` with the name of the S3 bucket in the target account.
Set up code and configurations in Lambda
In the Lambda function, navigate to Configuration > Environment variables and click Edit.
Enter the keys below exactly as shown; they are used in the Lambda function code, and any change to a key name will directly break the function.
| Key | Value |
|---|---|
| LOGGROUP_NAME | `<CloudWatch log group name>` |
| MY_AWS_REGION | `<AWS Region>` |
| PREFIX | `<Target S3 Bucket Prefix>` |
| S3_BUCKET_NAME | `<Target S3 Bucket Name>` |
| TARGET_ACCOUNT_ROLE_ARN | `<ROLE_ARN_CrossAcS3Write>` |
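As in Use Case 1, the zip below contains the actual function code. Purely as an illustration of how TARGET_ACCOUNT_ROLE_ARN is typically used, a hypothetical sketch of the assume-role flow might look like this (AWS SDK v3; the event-gathering step is elided):

```javascript
// Hypothetical sketch only -- the actual bundled function code may differ.
const { STSClient, AssumeRoleCommand } = require("@aws-sdk/client-sts");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const region = process.env.MY_AWS_REGION;
const sts = new STSClient({ region });

exports.handler = async () => {
  // Assume the CrossAcS3Write role in the target account.
  const { Credentials } = await sts.send(
    new AssumeRoleCommand({
      RoleArn: process.env.TARGET_ACCOUNT_ROLE_ARN,
      RoleSessionName: "cloudwatch-to-s3-export",
    })
  );

  // Build an S3 client from the temporary cross-account credentials.
  const s3 = new S3Client({
    region,
    credentials: {
      accessKeyId: Credentials.AccessKeyId,
      secretAccessKey: Credentials.SecretAccessKey,
      sessionToken: Credentials.SessionToken,
    },
  });

  // In the real function, events would be gathered from CloudWatch
  // (e.g. via FilterLogEvents) as in Use Case 1.
  const events = [];

  // Writes now land in the target account's bucket.
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: `${process.env.PREFIX}/logs-${Date.now()}.json`,
      Body: JSON.stringify(events),
    })
  );
};
```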
Download the following zip file. In the Code tab, choose Upload from > .zip file, upload the zip, and Save. Note that after uploading you might not be able to view the source code inline.
When using Node.js 20, the function has some `aws-sdk` dependencies; these are handled in the zip along with the function code.
If you are creating a new function and an EventBridge rule is not already attached as a trigger: click Add trigger and choose EventBridge.
In Rule, select Create new rule
Provide the rule name
State the Rule description.
The schedule expression acts as a cron-style schedule that automatically triggers the event whenever the expression matches. We will set a 2-minute rate, which invokes the Lambda function every 2 minutes; you can choose whatever interval your organization's policy requires.
Syntax: `rate(value unit)`
Valid values for unit: minute | minutes | hour | hours | day | days
After completing these configurations, data will be exported from CloudWatch to the target S3 bucket in the other account.