...
A user who is a Global Administrator, or an administrator for AWS EC2, Systems Manager, CloudWatch, and IAM.
We support two methods to ingest AWS Security Hub logs into Chronicle. The Lambda+S3+SQS method is preferred; Amazon Data Firehose is the secondary option. Based on our observations, Firehose generally incurs higher costs.
Method 1: Lambda+S3+SQS
To configure AWS Security Hub logging
...
As Adaptive MxDR supports log ingestion through S3 or SQS, logs must be exported from CloudWatch to S3 and an SQS queue attached to the S3 bucket. Follow the page below to automate continuous export of logs from a CloudWatch log group to an S3 bucket using Lambda.
CloudWatch Log Group to S3 Bucket Export Automation using Lambda
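As a rough illustration of the export step, the sketch below builds the parameters a Lambda function could pass to the CloudWatch Logs CreateExportTask API (for example via boto3's `logs` client). The log group, bucket name, and prefix are placeholder values, not part of the guide above.

```python
import time

def build_export_task_params(log_group, bucket, prefix, window_hours=1):
    """Build kwargs for CloudWatch Logs CreateExportTask covering the last window_hours.

    In a real Lambda these would be passed on as:
        boto3.client("logs").create_export_task(**params)
    CreateExportTask expects fromTime/to as milliseconds since the epoch.
    """
    now_ms = int(time.time() * 1000)
    return {
        "taskName": f"export-{log_group.strip('/').replace('/', '-')}-{now_ms}",
        "logGroupName": log_group,
        "fromTime": now_ms - window_hours * 3600 * 1000,
        "to": now_ms,
        "destination": bucket,        # target S3 bucket name
        "destinationPrefix": prefix,  # key prefix inside the bucket
    }

# Placeholder log group and bucket names for illustration only.
params = build_export_task_params("/aws/securityhub/findings", "my-log-bucket", "securityhub")
```

The automation page linked above handles scheduling and continuation; this only shows the shape of a single export request.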
Once logs are available in the S3 bucket, configure SQS as follows:
1. Follow the guide above to export logs from the CloudWatch log group to the S3 bucket.
2. Create an SQS queue and attach it to the S3 bucket. Please refer: Configuring AWS Simple Queue Service (SQS) with S3 Storage
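The attachment in step 2 amounts to an S3 event-notification configuration that sends every ObjectCreated event to the queue. The sketch below builds that configuration in the shape accepted by S3's PutBucketNotificationConfiguration API; the queue ARN is a placeholder, and the full walkthrough is on the page linked above.

```python
import json

# Placeholder ARN -- substitute your own account, region, and queue name.
queue_arn = "arn:aws:sqs:us-east-1:123456789012:chronicle-ingest-queue"

# Send every ObjectCreated event on the bucket to the SQS queue.
notification_config = {
    "QueueConfigurations": [
        {
            "QueueArn": queue_arn,
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

print(json.dumps(notification_config, indent=2))
```

The queue's access policy must also allow S3 to send messages to it; see the linked configuration page for the required policy.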
Method 2: Amazon Data Firehose
Please find more details on Firehose configuration on the page below:
Amazon Data Firehose to Chronicle: C2C-Push
Important Links:
For more details on how to obtain the required credentials for the integration parameters, please refer: Get Credentials for AWS Storage
Please refer to the page below for the IAM user and KMS key policies required for S3, SQS, and KMS: IAM User and KMS Key Policies Required for AWS
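For orientation only, the sketch below shows the general shape of an IAM policy granting read access to the bucket and queue. The authoritative policy requirements are on the "IAM User and KMS Key Policies Required for AWS" page linked above; the ARNs here are placeholders.

```python
import json

# Illustrative policy shape only -- consult the linked page for the exact
# actions and KMS statements your setup requires. ARNs are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read (and, if source deletion is enabled, delete) exported log objects
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket", "s3:DeleteObject"],
            "Resource": [
                "arn:aws:s3:::my-log-bucket",
                "arn:aws:s3:::my-log-bucket/*",
            ],
        },
        {   # Receive and delete S3 event notifications from the queue
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:chronicle-ingest-queue",
        },
    ],
}

print(json.dumps(policy, indent=2))
```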
Below are the URLs that must be allowed for connectivity (please identify the exact URLs from the AWS documentation for your services and regions):
IAM: For any logging source IAM URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/iam-service.html
S3: For S3 or SQS logging source, S3 URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/s3.html
SQS: For SQS logging source, SQS URL should be allowed: https://docs.aws.amazon.com/general/latest/gr/sqs-service.html
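As a sketch of the hostname patterns those AWS endpoint pages describe, the snippet below assembles regional endpoints for one assumed region; verify the exact hostnames for your region and partition against the linked documentation.

```python
# Assumed region for illustration -- substitute your own.
region = "us-east-1"

# Typical public endpoint hostname patterns per the AWS endpoint documentation.
endpoints = {
    "IAM": "iam.amazonaws.com",            # IAM is a global service
    "S3": f"s3.{region}.amazonaws.com",
    "SQS": f"sqs.{region}.amazonaws.com",
}

for service, host in endpoints.items():
    print(f"{service}: https://{host}")
```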
Integration Parameters
S3SQS:
Parameter | Default Value | Description |
---|---|---|
REGION | N/A | Select the region of your S3 bucket. |
QUEUE NAME | N/A | The SQS queue name. |
ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket. |
QUEUE ACCESS KEY ID | N/A | This is the 20 character ID associated with your Amazon IAM account. |
QUEUE SECRET ACCESS KEY | N/A | This is the 40 character access key associated with your Amazon IAM account. |
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are: SOURCE_DELETION_NEVER (never delete files from the source), SOURCE_DELETION_ON_SUCCESS (delete files and empty directories from the source after successful ingestion), SOURCE_DELETION_ON_SUCCESS_FILES_ONLY (delete files from the source after successful ingestion). |
S3 BUCKET ACCESS KEY ID | No | This is the 20 character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
S3 BUCKET SECRET ACCESS KEY | No | This is the 40 character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
The URI type values referenced by the feed are:
FILES: The URI points to a single file which will be ingested with each execution of the feed.
FOLDERS: The URI points to a directory. All files contained within the directory will be ingested with each execution of the feed.
FOLDERS_RECURSIVE: The URI points to a directory. All files and directories contained within the indicated directory will be ingested, including all files and directories within those directories, and so on.
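As a quick sanity check for the parameter formats described above (access key IDs are 20 characters, secret access keys are 40), a small validator can catch transposed values before the feed is saved. The regexes are an assumption based on the typical character sets AWS uses; the values tested are fake examples.

```python
import re

def looks_like_access_key_id(s: str) -> bool:
    # 20 uppercase alphanumeric characters (access key IDs typically begin "AKIA")
    return re.fullmatch(r"[A-Z0-9]{20}", s) is not None

def looks_like_secret_key(s: str) -> bool:
    # 40 characters from the base64-style alphabet AWS uses for secret keys
    return re.fullmatch(r"[A-Za-z0-9/+=]{40}", s) is not None

print(looks_like_access_key_id("AKIA" + "X" * 16))  # a 20-character fake ID
print(looks_like_secret_key("x" * 40))              # a 40-character fake secret
```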
SQS:
Parameter | Default Value | Description |
---|---|---|
REGION | N/A | Select the region of your S3 bucket. |
QUEUE NAME | N/A | The SQS queue name. |
ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket. |
QUEUE ACCESS KEY ID | N/A | This is the 20 character ID associated with your Amazon IAM account. |
QUEUE SECRET ACCESS KEY | N/A | This is the 40 character access key associated with your Amazon IAM account. |
S3 BUCKET ACCESS KEY ID | No | This is the 20 character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
S3 BUCKET SECRET ACCESS KEY | No | This is the 40 character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket. |
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are: SOURCE_DELETION_NEVER (never delete files from the source), SOURCE_DELETION_ON_SUCCESS (delete files and empty directories from the source after successful ingestion), SOURCE_DELETION_ON_SUCCESS_FILES_ONLY (delete files from the source after successful ingestion). |
ASSET NAMESPACE | No | To assign an asset namespace to all events that are ingested from a particular feed, set the … |
Integration via Amazon Data Firehose:
Configure Amazon Data Firehose on the Google Chronicle instance and copy the Endpoint URL and Secret key.
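Once configured, Firehose delivers to an HTTP endpoint such as the Chronicle Endpoint URL via HTTPS POSTs that carry the configured key in the X-Amz-Firehose-Access-Key header. The sketch below shows that request shape; the endpoint URL and secret key are placeholders for the values copied from Chronicle.

```python
# Placeholders for the Endpoint URL and Secret key copied from Chronicle.
endpoint_url = "https://example-chronicle-endpoint.googleapis.com/..."
secret_key = "example-secret-key"

# Headers Firehose sends with each HTTP endpoint delivery request.
headers = {
    "Content-Type": "application/json",
    "X-Amz-Firehose-Access-Key": secret_key,
}

print(endpoint_url)
print(headers["X-Amz-Firehose-Access-Key"])
```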