
Prerequisite: a user with Global Administrator rights, or administrator access to AWS EC2, Systems Manager, CloudWatch, and IAM.

We support two methods to ingest logs into Chronicle from AWS Security Hub. The Lambda+S3+SQS method is preferred, while the Data Firehose method is secondary.

Based on our observations, Firehose generally incurs higher costs.

Method 1: Lambda+S3+SQS


As Adaptive MxDR supports log ingestion through S3 or SQS, logs must be exported from CloudWatch to an S3 bucket, with an SQS queue attached to that bucket. Follow the page below to automate continuous export of logs from a CloudWatch log group to an S3 bucket using Lambda.

CloudWatch Log Group to S3 Bucket Export Automation using Lambda
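The automation linked above can be sketched as a small Lambda function that periodically creates a CloudWatch Logs export task. This is an illustrative sketch, not the linked page's exact code; the environment variable names and the hourly export window are assumptions:

```python
import os
import time


def export_window(now_ms: int, period_ms: int) -> tuple[int, int]:
    """Return the (fromTime, to) millisecond window for the previous full period."""
    end = now_ms - (now_ms % period_ms)  # align to the period boundary
    return end - period_ms, end


def lambda_handler(event, context):
    # boto3 is imported here so the window helper stays usable without AWS deps.
    import boto3

    # All names below are placeholders; supply your own via environment variables.
    log_group = os.environ["LOG_GROUP_NAME"]
    bucket = os.environ["EXPORT_BUCKET"]
    prefix = os.environ.get("EXPORT_PREFIX", "securityhub")

    start, end = export_window(int(time.time() * 1000), 3_600_000)  # hourly window
    logs = boto3.client("logs")
    task = logs.create_export_task(
        taskName=f"export-{end}",
        logGroupName=log_group,
        fromTime=start,
        to=end,
        destination=bucket,
        destinationPrefix=prefix,
    )
    return {"taskId": task["taskId"]}
```

In practice the function would be triggered on a schedule (e.g. an hourly EventBridge rule) matching the export window.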

Once logs are available in the S3 bucket, configure SQS as follows:

  1. Follow the guide above to export logs from the CloudWatch log group to the S3 bucket.

  2. Create an SQS queue and attach it to the S3 bucket. Please refer: Configuring AWS Simple Queue Service (SQS) with S3 Storage
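Attaching SQS to the bucket requires a queue access policy that lets S3 deliver event notifications. A minimal sketch of that policy, assuming illustrative function and argument names:

```python
import json


def s3_to_sqs_policy(queue_arn: str, bucket_name: str, account_id: str) -> str:
    """Build an SQS access policy that lets the S3 bucket deliver event notifications."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowS3Notifications",
                "Effect": "Allow",
                "Principal": {"Service": "s3.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
                "Condition": {
                    # Restrict the sender to this bucket in this account.
                    "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{bucket_name}"},
                    "StringEquals": {"aws:SourceAccount": account_id},
                },
            }
        ],
    }
    return json.dumps(policy)
```

The resulting JSON is set as the queue's Policy attribute (e.g. via `sqs.set_queue_attributes`), after which the bucket's event notification configuration can target the queue.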

Method 2: Amazon Data Firehose

More details on the Firehose configuration are available on the page below:

Amazon Data Firehose to Chronicle: C2C-Push

Important Links:

Integration Parameters

S3:

Property | Default Value | Description

REGION | N/A | Select the region of your S3 bucket.

S3 URI | N/A | The S3 URI to ingest. This is a combination of the S3 bucket name and prefix, e.g. s3://<S3 bucket name>/<prefix>.

URI IS A | N/A | The type of file indicated by the URI. Valid values are:

  • FILES: The URI points to a single file, which will be ingested with each execution of the feed.

  • FOLDERS: The URI points to a directory. All files contained within the directory will be ingested with each execution of the feed.

  • FOLDERS_RECURSIVE: The URI points to a directory. All files and directories contained within the indicated directory will be ingested, including all files and directories within those directories, and so on.

SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.
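The S3 parameters above can be sanity-checked before creating the feed. A hypothetical helper (the function name and parameter keys are illustrative, mirroring the table):

```python
VALID_URI_TYPES = {"FILES", "FOLDERS", "FOLDERS_RECURSIVE"}
VALID_DELETION_OPTIONS = {
    "SOURCE_DELETION_NEVER",
    "SOURCE_DELETION_ON_SUCCESS",
    "SOURCE_DELETION_ON_SUCCESS_FILES_ONLY",
}


def validate_s3_feed(params: dict) -> list[str]:
    """Return a list of problems with the S3 feed parameters (empty list = OK)."""
    errors = []
    if not params.get("S3 URI", "").lower().startswith("s3://"):
        errors.append("S3 URI must start with s3://")
    if params.get("URI IS A") not in VALID_URI_TYPES:
        errors.append("URI IS A must be FILES, FOLDERS or FOLDERS_RECURSIVE")
    if params.get("SOURCE DELETION OPTION") not in VALID_DELETION_OPTIONS:
        errors.append("invalid SOURCE DELETION OPTION")
    return errors
```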

SQS:

Property | Default Value | Description

REGION | N/A | Select the region of your S3 bucket.

QUEUE NAME | N/A | The SQS queue name.

ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID | N/A | The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY | N/A | The 40-character access key associated with your Amazon IAM account.

SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

S3 BUCKET ACCESS KEY ID | No | The 20-character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.

S3 BUCKET SECRET ACCESS KEY | No | The 40-character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.

ASSET NAMESPACE | No | To assign an asset namespace to all events that are ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.
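The credential lengths in the table above can be checked locally before the feed is saved; a small illustrative helper (the function name is an assumption):

```python
def check_sqs_credentials(access_key_id: str, secret_access_key: str) -> list[str]:
    """Sanity-check the IAM credential lengths described in the SQS table."""
    errors = []
    if len(access_key_id) != 20:
        errors.append("QUEUE ACCESS KEY ID must be 20 characters")
    if len(secret_access_key) != 40:
        errors.append("QUEUE SECRET ACCESS KEY must be 40 characters")
    return errors
```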

Integration via Amazon Data Firehose:

Configure an Amazon Data Firehose feed on your Google Chronicle instance and copy the Endpoint URL and Secret key.
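Those two values plug into the delivery stream's HTTP endpoint destination. A hedged sketch of the configuration dict that would be passed as `HttpEndpointDestinationConfiguration` to boto3's Firehose `create_delivery_stream`; the function name, role ARN, and backup bucket ARN are placeholders:

```python
def chronicle_http_destination(
    endpoint_url: str, secret_key: str, backup_bucket_arn: str, role_arn: str
) -> dict:
    """Build an HttpEndpointDestinationConfiguration for a Chronicle-bound stream.

    The endpoint URL and secret key come from the Chronicle feed you created;
    the ARNs here are placeholders for your own backup bucket and IAM role.
    """
    return {
        "EndpointConfiguration": {
            "Url": endpoint_url,
            "AccessKey": secret_key,  # the secret key copied from Chronicle
            "Name": "chronicle",
        },
        "RequestConfiguration": {"ContentEncoding": "GZIP"},
        "S3BackupMode": "FailedDataOnly",  # keep undeliverable records in S3
        "S3Configuration": {
            "RoleARN": role_arn,
            "BucketARN": backup_bucket_arn,
        },
    }
```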