
...

We support two methods to ingest logs into Chronicle from AWS Security Hub. The Lambda+S3+SQS method is preferred, while the Data Firehose method is secondary.

Based on our observations, Firehose generally incurs higher costs.

Device Configuration

Method 1: Lambda+S3+SQS

Follow the steps below to forward Security Hub findings to a CloudWatch log group:

...

Once logs are available in the S3 bucket, if you wish to configure SQS, follow these steps:

  1. Follow the guide provided above to export logs from the CloudWatch log group to the S3 bucket.

  2. Create an SQS queue and attach it to the S3 bucket. Please refer: Configuring AWS Simple Queue Service (SQS) with S3 Storage
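Wiring the S3 bucket into SQS (step 2) comes down to two JSON documents: a queue access policy that lets S3 publish messages, and a bucket notification configuration that fires on new objects. A minimal sketch that builds both locally (the bucket name, queue ARN, and account ID are illustrative placeholders, not values from this integration):

```python
import json

def build_s3_to_sqs_config(bucket_name, queue_arn, account_id):
    """Build the queue access policy and bucket notification
    configuration needed to deliver S3 object-created events to SQS.
    All identifiers passed in are placeholders."""
    queue_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {
                # Restrict the sender to this specific bucket and account.
                "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{bucket_name}"},
                "StringEquals": {"aws:SourceAccount": account_id},
            },
        }],
    }
    notification_config = {
        "QueueConfigurations": [{
            "QueueArn": queue_arn,
            # Fire a message whenever a new object lands in the bucket.
            "Events": ["s3:ObjectCreated:*"],
        }]
    }
    return json.dumps(queue_policy), notification_config

policy_json, notif = build_s3_to_sqs_config(
    "example-sechub-logs",
    "arn:aws:sqs:us-east-1:111122223333:example-queue",
    "111122223333",
)
```

The policy goes on the SQS queue and the notification configuration on the bucket (via the console or the corresponding AWS APIs).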

...

Method 2: Amazon Data Firehose

See the page below for more details on Firehose configuration:

Amazon Data Firehose to Chronicle: C2C-Push

Important Links:

Integration parameters

S3SQS:

REGION (default: N/A; required: Yes)
  Select the region of your S3 bucket.

S3 URI (default: N/A)
  The S3 URI to ingest. This is a combination of the S3 bucket name and prefix, e.g. s3://<S3 bucket name>/<prefix>.

URI IS A (default: N/A)
  The type of file indicated by the URI. Valid values are:
  • FILES: The URI points to a single file, which will be ingested with each execution of the feed.
  • FOLDERS: The URI points to a directory. All files contained within the directory will be ingested with each execution of the feed.
  • FOLDERS_RECURSIVE: The URI points to a directory. All files and directories contained within it will be ingested, including all files and directories within those directories, and so on.

QUEUE NAME (required: Yes)
  The SQS queue name.

ACCOUNT NUMBER (required: Yes)
  The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID (required: Yes)
  The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY (required: Yes)
  The 40-character access key associated with your Amazon IAM account.

SOURCE DELETION OPTION (default: N/A; required: Yes)
  Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:
  • SOURCE_DELETION_NEVER: Never delete files from the source.
  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.
  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

S3 BUCKET ACCESS KEY ID (default: N/A; required: No)
  The 20-character ID associated with your Amazon IAM account.

SECRET ACCESS KEY (default: N/A)
  The 40-character access key associated with your Amazon IAM account.
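The S3 URI parameter above is the bucket name and prefix joined into one string. A minimal sketch of splitting such a URI back into its two parts (the bucket and prefix values are illustrative):

```python
def split_s3_uri(uri):
    """Split an S3 URI of the form s3://<bucket>/<prefix> into its
    bucket name and prefix, as expected by the S3 URI feed parameter."""
    scheme, _, rest = uri.partition("://")
    if scheme.lower() != "s3":
        raise ValueError(f"not an S3 URI: {uri!r}")
    # Everything up to the first "/" is the bucket; the rest is the prefix.
    bucket, _, prefix = rest.partition("/")
    return bucket, prefix

bucket, prefix = split_s3_uri("s3://example-sechub-logs/securityhub/")
# bucket -> "example-sechub-logs", prefix -> "securityhub/"
```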

SQS:

REGION (default: N/A)
  Select the region of your S3 bucket.

QUEUE NAME (default: N/A)
  The SQS queue name.

ACCOUNT NUMBER (default: N/A)
  The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID (default: N/A)
  The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY (default: N/A)
  Only specify if using a different access key for the S3 bucket.

S3 BUCKET SECRET ACCESS KEY (required: No)
  The 40-character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.

SOURCE DELETION OPTION (default: N/A)
  Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:
  • SOURCE_DELETION_NEVER: Never delete files from the source.
  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.
  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

ASSET NAMESPACE (required: No)
  To assign an asset namespace to all events that are ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.

Integration via Amazon Data Firehose:

Configure Amazon Data Firehose on the Google Chronicle instance and copy the Endpoint URL and Secret key.
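With this method, Firehose pushes batches to the copied Endpoint URL using Amazon's HTTP endpoint delivery format, presenting the Secret key in the X-Amz-Firehose-Access-Key header. A sketch that constructs such a delivery request locally without sending it, to illustrate what the endpoint receives (the key and records are placeholders, and the body shape follows Firehose's documented HTTP endpoint format):

```python
import base64
import json
import time
import uuid

def build_firehose_delivery(records, access_key):
    """Construct the headers and JSON body of a Firehose HTTP-endpoint
    delivery request: records are base64-encoded, and the access key
    (the Secret key copied from Chronicle) travels in a header.
    All values passed in here are placeholders."""
    body = {
        "requestId": str(uuid.uuid4()),
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "records": [
            {"data": base64.b64encode(json.dumps(r).encode()).decode()}
            for r in records
        ],
    }
    headers = {
        "Content-Type": "application/json",
        # Secret key copied from the Chronicle Firehose configuration.
        "X-Amz-Firehose-Access-Key": access_key,
    }
    return headers, json.dumps(body)

headers, payload = build_firehose_delivery(
    [{"finding": "example Security Hub finding"}], "example-secret-key")
```

In practice Firehose builds and sends this request itself; the only values you supply are the Endpoint URL and Secret key in the Firehose destination settings.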