AWS Linux

About the Device

Log files are the records that Linux stores for administrators to keep track of and monitor important events about the server, kernel, services, and applications running on it. This document covers the Linux log sources that server administrators should monitor and how to collect them.

What are Linux log files?

Log files are a set of records that Linux maintains for the administrators to keep track of important events. They contain messages about the server, including the kernel, services and applications running on it.

Linux provides a centralized repository of log files that can be located under the  /var/log directory.

The log files generated in a Linux environment can typically be classified into four different categories:

  •     Application Logs

  •     Event Logs

  •     Service Logs

  •     System Logs

Why monitor Linux log files?

Log management is an integral part of any server administrator’s responsibility.

By monitoring Linux log files, you can gain detailed insight into server performance, security, error messages, and underlying issues. If you want to take a proactive rather than a reactive approach to server management, regular log file analysis is essential.

Device Information

Entity | Particulars
Vendor Name | Amazon Web Services
Product Name | Linux
Type of Device | Cloud

Collection Method

Log Type | Ingestion Label | Preferred Logging Protocol - Format | Log Collection Method
Linux Auditing System (AuditD) | AUDITD | Syslog KV/Unstructured | C2C
Unix system | NIX_SYSTEM | Syslog Unstructured | C2C

Device Configuration

Pre-requisites:

An AWS account that you can sign in to.

A user who is an administrator for AWS EC2, Systems Manager, CloudWatch, and IAM.

We support two methods to ingest logs from AWS into Chronicle. The Lambda+S3+SQS method is preferred; the Data Firehose method is secondary.

Based on our observations, Firehose generally incurs higher costs.

Method 1: Lambda+S3+SQS

To configure Linux Auditing System (AuditD) service logging in Linux OS

The steps below were validated on the following Linux distributions: Ubuntu 22.04.4 LTS, RHEL 9.3, Debian 12.5, and SUSE Linux Enterprise 15.5.

  1. Log in to the Linux CLI with root or equivalent privileges.

  2. Deploy and enable the audit daemon and the audit dispatching framework by running the following command.

If you have already deployed the daemon and framework, you can skip this step.

For Debian and Ubuntu OS:

sudo apt-get install auditd audispd-plugins

For RedHat OS:

sudo yum install audit audispd-plugins

For SUSE OS:

sudo zypper install audit audit-audispd-plugins

Enable auditd service

sudo systemctl enable auditd.service
  3. To enable logging of all commands, including those of users and root, add the following lines to /etc/audit/rules.d/audit.rules:

-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
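As a convenience, the rule installation and verification can be scripted. This is a sketch assuming auditd and its tooling (augenrules, auditctl) are already installed:

```shell
# Append the execve audit rules to the rules.d fragment (idempotence not handled)
sudo tee -a /etc/audit/rules.d/audit.rules > /dev/null <<'EOF'
-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
EOF

# Regenerate /etc/audit/audit.rules from the rules.d fragments and load them
sudo augenrules --load

# Verify the execve rules are now active
sudo auditctl -l | grep execve
```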
  4. Verify that the parameters in the /etc/audit/plugins.d/syslog.conf file match the expected values (in particular, active = yes so that audit events are forwarded to syslog):
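The reference values are not reproduced in this document. For orientation, a typical audisp syslog plugin configuration looks like the following; the essential setting is active = yes, and the remaining values are distribution defaults that should be confirmed against your own file:

```
active = yes
direction = out
path = builtin_syslog
type = builtin
args = LOG_INFO
format = string
```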

  5. In the case of SUSE, enable the rsyslog traditional file format, which produces timestamps in the expected format:

    1. Open /etc/rsyslog.conf.

    2. Ensure the following line is present and uncommented: $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
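For reference, the relevant line in /etc/rsyslog.conf should read as follows once active:

```
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
```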

  6. Restart the auditd and rsyslog services:

sudo systemctl restart auditd rsyslog

On RHEL-based systems, auditd may refuse a systemctl restart; use sudo service auditd restart instead.

 

To configure the unified CloudWatch agent to fetch logs from the EC2 Linux instances

  1. Set up IAM instance profile roles for Systems Manager:

    1. Create an IAM Role.

    2. Attach the required policies (typically CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore) and save the Role.

    3. Attach the Role created to the Linux EC2 server machine(s).

  2. To install the CloudWatch agent on the Linux server:

    1. Navigate to AWS Systems Manager > Run Command

    2. Select AWS-ConfigureAWSPackage

    3. Add AmazonCloudWatchAgent as Name in Command Parameters.

    4. Select target Linux machine(s) where you want to install AWS CloudWatch Agent.

    5. Scroll down and click Run.

    6. Once the command executes, select the instance ID and view the output. There should not be any errors in the output.

  3. Create the configuration file and save it to the AWS Systems Manager Parameter Store.

    1. Log in to the Instance which has been selected as the target for CloudWatchAgent installation in the previous step.

    2. Start terminal and launch /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

    3. Provide a response to every query the config wizard asks.

Select "YES" for CollectD only if CollectD is installed; otherwise the agent will fail to start with the generated configuration file.

    4. In the case of Linux, Adaptive MxDR supports the /var/log/secure and /var/log/messages files, so provide the log file path twice (once for each file) and provide an appropriate log group name for each.

    5. In the case of SUSE, Adaptive MxDR supports /var/log/messages only.
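For reference, a wizard run with the answers above yields an agent configuration similar to the sketch below; the log group names are illustrative placeholders. The wizard writes the file to /opt/aws/amazon-cloudwatch-agent/bin/config.json and can store it in the Parameter Store:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "linux-var-log-messages",
            "log_stream_name": "{instance_id}"
          },
          {
            "file_path": "/var/log/secure",
            "log_group_name": "linux-var-log-secure",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```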

  4. Once done, check the Systems Manager Parameter Store for the newly created configuration file containing the parameters provided as input.

    1. Navigate to AWS Systems Manager > Parameter Store

    2. Copy the newly created Parameter Store name.

  5. Start the agent while pointing to the configuration file using AWS Systems Manager.

    1. To start the newly configured CloudWatch agents on the instances, use Run Command.

    2. Navigate to AWS Systems Manager > Run Command

    3. Navigate to next page and search for AmazonCloudWatch-ManageAgent using the search box.

    4. Specify the name of the newly created Parameter Store configuration file.

    5. Select the target (Linux machines) where you want to run the command.

    6. Retain default values for the other parameters; for example, the timeout is kept at 600 seconds.

    7. Retain default values for Rate Control.

    8. Currently, we refrain from sending any events to the S3 bucket; leave it unchecked.

    9. SNS notifications are presently disabled; once enabled, they can be configured with the details of the SNS topic. Leave it unchecked.

    10. The AWS command line interface command remains unchanged; use the default value.

    11. Once you have finished configuring all the settings above, scroll down and click Run. This prompts Systems Manager to start the CloudWatch agent on the instance, initiating the log flow.
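The same Run Command invocation can be issued from the AWS CLI. This is a sketch; the instance ID and the Parameter Store name (AmazonCloudWatch-linux) are placeholder values, while the document and parameter names are those of AmazonCloudWatch-ManageAgent:

```shell
# Start the CloudWatch agent, pulling its configuration from the
# SSM Parameter Store entry created by the config wizard.
aws ssm send-command \
  --document-name "AmazonCloudWatch-ManageAgent" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters '{
    "action": ["configure"],
    "mode": ["ec2"],
    "optionalConfigurationSource": ["ssm"],
    "optionalConfigurationLocation": ["AmazonCloudWatch-linux"],
    "optionalRestart": ["yes"]
  }'
```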

  6. Navigate to CloudWatch > Logs to see the logs being collected. The new Log Groups are created automatically once the agents begin shipping logs from the instances.

  7. Please refer to the following page to check the required IAM user policies:

  8. Below are the URL details that need to be allowed for connectivity (identify the exact URLs for your services and regions in the AWS documentation):

    IAM: For any logging source, the IAM URL should be allowed:
    https://docs.aws.amazon.com/general/latest/gr/iam-service.html

    CloudWatch: For a CloudWatch logging source, the CloudWatch URL should be allowed:
    https://docs.aws.amazon.com/general/latest/gr/cwl_region.html

Note: If the CyberHub is installed on AWS, select "Application running on an AWS compute service" while creating the access key.

Once logs are available in the S3 bucket, if you wish to configure SQS, follow the steps below:

  1. Follow the guide provided above to export logs from the CloudWatch log group to the S3 bucket.

  2. Create an SQS queue and attach it to the S3 bucket. Please refer:
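A minimal CLI sketch of this wiring, with placeholder names (the bucket, queue, account number, and region are assumptions; the queue's access policy must also grant s3.amazonaws.com permission to send messages):

```shell
# Create the SQS queue that Chronicle will poll
aws sqs create-queue --queue-name chronicle-linux-logs --region us-east-1

# Notify the queue whenever a new log object lands in the bucket
aws s3api put-bucket-notification-configuration \
  --bucket my-linux-log-bucket \
  --notification-configuration '{
    "QueueConfigurations": [{
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:chronicle-linux-logs",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```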

Method 2: Amazon Data Firehose

Please find more details on Firehose Configuration on the below page:

Important Links:

  • For more details on how to get required credentials for integration parameters please refer:

  • Please refer below page to check required IAM user and KMS Key policies for S3, SQS and KMS:

  • Below are the URL details that need to be allowed for connectivity (identify the exact URLs for your services and regions in the AWS documentation):

    • IAM: For any logging source, the IAM URL should be allowed:

    • S3: For an S3 or SQS logging source, the S3 URL should be allowed:

    • SQS: For an SQS logging source, the SQS URL should be allowed:

Integration Parameters

SQS:

Parameter | Default Value | Description
REGION | N/A | Select the region of your S3 bucket.
QUEUE NAME | N/A | The SQS queue name.
ACCOUNT NUMBER | N/A | The account number for the SQS queue and S3 bucket.
QUEUE ACCESS KEY ID | N/A | The 20-character ID associated with your Amazon IAM account.
QUEUE SECRET ACCESS KEY | N/A | The 40-character access key associated with your Amazon IAM account.
SOURCE DELETION OPTION | N/A | Whether to delete source files after they have been transferred to Chronicle (this reduces storage costs). Valid values: SOURCE_DELETION_NEVER (never delete files from the source); SOURCE_DELETION_ON_SUCCESS (delete files and empty directories from the source after successful ingestion); SOURCE_DELETION_ON_SUCCESS_FILES_ONLY (delete files from the source after successful ingestion).
S3 BUCKET ACCESS KEY ID | No | The 20-character ID associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.
S3 BUCKET SECRET ACCESS KEY | No | The 40-character access key associated with your Amazon IAM account. Only specify if using a different access key for the S3 bucket.
ASSET NAMESPACE | No | To assign an asset namespace to all events that are ingested from a particular feed, set the "namespace" field within details. The namespace field is a string.

Integration via Amazon Data Firehose:

Configure Amazon Data Firehose on the Google Chronicle instance and copy the Endpoint URL and Secret key.

About Accenture:
Accenture is a leading global professional services company that helps the world’s leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services—creating tangible value at speed and scale. We are a talent and innovation led company with 738,000 people serving clients in more than 120 countries. Technology is at the core of change today, and we are one of the world’s leaders in helping drive that change, with strong ecosystem relationships. We combine our strength in technology with unmatched industry experience, functional expertise and global delivery capability. We are uniquely able to deliver tangible outcomes because of our broad range of services, solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Accenture Song. These capabilities, together with our culture of shared success and commitment to creating 360° value, enable us to help our clients succeed and build trusted, lasting relationships. We measure our success by the 360° value we create for our clients, each other, our shareholders, partners and communities. Visit us at www.accenture.com.

About Accenture Security
Accenture Security is a leading provider of end-to-end cybersecurity services, including advanced cyber defense, applied cybersecurity solutions and managed security operations. We bring security innovation, coupled with global scale and a worldwide delivery capability through our network of Advanced Technology and Intelligent Operations centers. Helped by our team of highly skilled professionals, we enable clients to innovate safely, build cyber resilience and grow with confidence. Follow us @AccentureSecure on Twitter or visit us at www.accenture.com/security.

Legal notice: Accenture, the Accenture logo, and other trademarks, service marks, and designs are registered or unregistered trademarks of Accenture and its subsidiaries in the United States and in foreign countries. All trademarks are properties of their respective owners. This document is intended for general informational purposes only and does not take into account the reader’s specific circumstances, and may not reflect the most current developments. Accenture disclaims, to the fullest extent permitted by applicable law, any and all liability for the accuracy and completeness of the information in this presentation and for any acts or omissions made based on such information. Accenture does not provide legal, regulatory, audit, or tax advice. Readers are responsible for obtaining such advice from their own legal counsel or other licensed professionals.