
About the Device

This page covers the top Linux log files that server administrators should monitor and how to configure their collection.

What are Linux log files?

Log files are a set of records that Linux maintains for administrators to keep track of important events. They contain messages about the server, including the kernel, services, and applications running on it.

Linux provides a centralized repository of log files, located under the /var/log directory.
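For example, listing this directory shows the variety of logs a typical system keeps (exact file names such as syslog, auth.log, secure, and messages vary by distribution):

# List the centralized log directory; file names vary by distribution
ls -l /var/log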

The log files generated in a Linux environment can typically be classified into four different categories:

  •     Application Logs

  •     Event Logs

  •     Service Logs

  •     System Logs

Why monitor Linux log files?

Log management is an integral part of any server administrator’s responsibility.

By monitoring Linux log files, you can gain detailed insight into server performance, security, error messages, and underlying issues. If you want to take a proactive rather than a reactive approach to server management, regular log file analysis is essential.

Device Information

Entity | Particulars
Vendor Name | Amazon Web Services
Product Name | Linux
Type of Device | Cloud

Collection Method

Log Type | Ingestion Label | Preferred Logging Protocol - Format | Log Collection Method
Linux Auditing System (AuditD) | AUDITD | Syslog KV/Unstructured | C2C
Unix system | NIX_SYSTEM | Syslog Unstructured | C2C

Device Configuration

Prerequisites:

An AWS account that you can sign in to.

A user who is a Global Administrator or an administrator for AWS EC2, Systems Manager, CloudWatch, and IAM.

To configure Linux Auditing System (AuditD) service logging on Linux OS

The steps below have been validated on the following Linux distributions: Ubuntu 22.04.4 LTS, RHEL 9.3, Debian 12.5, and SUSE Linux Enterprise 15.5.

  1. Log in to the Linux CLI with root or equivalent privileges.

  2. Deploy and enable the audit daemon and the audit dispatching framework by running the following commands.

If you have already deployed the daemon and framework, you can skip this step.

For Debian and Ubuntu:
sudo apt-get install auditd audispd-plugins

For Red Hat:
sudo yum install audit audispd-plugins

For SUSE:
sudo zypper install audit audit-audispd-plugins

Enable the auditd service:

sudo systemctl enable auditd.service
  3. To enable logging of all commands, including those run by users and root, add the following lines to /etc/audit/rules.d/audit.rules:

-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
  4. Verify that the parameters in the /etc/audit/plugins.d/syslog.conf file match the following values:

active = yes
direction = out
path = /sbin/audisp-syslog
type = always
args = LOG_LOCAL6
format = string
  5. For SUSE, enable the rsyslog traditional file format, which produces timestamps in the expected format:

    1. Open /etc/rsyslog.conf.

    2. Uncomment the following line (remove the leading #) so that rsyslog uses the traditional timestamp format: $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
image-20240612-164056.png
  6. Restart the auditd and rsyslog services. (A quick end-to-end check follows the commands below.)

systemctl restart auditd.service
systemctl restart rsyslog.service
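
To confirm the pipeline end to end, the checks below can help. Note that the syslog destination differs by distribution (/var/log/syslog on Debian and Ubuntu, /var/log/messages on RHEL and SUSE), and local rsyslog rules may route LOG_LOCAL6 elsewhere:

# Verify the execve rules are loaded and auditd is enabled
sudo auditctl -l
sudo auditctl -s
# Run any command, then confirm EXECVE records reach syslog
# (use /var/log/syslog on Debian/Ubuntu)
grep 'type=EXECVE' /var/log/messages | tail -n 5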

To configure the unified CloudWatch agent to fetch logs from EC2 Linux instances

  1. Set up an IAM instance profile role for Systems Manager. (A CLI alternative is sketched after this step.)

    1. Create an IAM role.

      image-20240606-114736.png
    2. Attach all the policies displayed in the image below and save the role.

      image-20240606-114804.png

    3. Attach the created role to the Linux EC2 server machine(s).

image-20240613-113926.png
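
If you prefer the AWS CLI, here is a sketch of the same role setup. The role name CloudWatchAgentServerRole and the instance ID are placeholders, and CloudWatchAgentServerPolicy plus AmazonSSMManagedInstanceCore are the managed policies commonly required for this setup; match them against the policies shown in the screenshot above:

# Create a role EC2 can assume (trust policy inline for brevity)
aws iam create-role --role-name CloudWatchAgentServerRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# Attach the managed policies the agent and Systems Manager need
aws iam attach-role-policy --role-name CloudWatchAgentServerRole \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
aws iam attach-role-policy --role-name CloudWatchAgentServerRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name CloudWatchAgentServerRole
aws iam add-role-to-instance-profile --instance-profile-name CloudWatchAgentServerRole \
  --role-name CloudWatchAgentServerRole
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=CloudWatchAgentServerRole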
  2. To install the CloudWatch agent on the Linux server:

    1. Navigate to AWS Systems Manager > Run Command.

    2. Select AWS-ConfigureAWSPackage.

      image-20240606-115040.png
    3. Enter AmazonCloudWatchAgent as the Name in Command Parameters.

      image-20240606-115113.png

    4. Select the target Linux machine(s) where you want to install the AWS CloudWatch agent.

      image-20240613-084720.png

    5. Scroll down and click Run.

      image-20240613-085050.png

    6. Once the command executes, select the instance ID and view the output. The output should not contain any errors. (An equivalent CLI invocation is sketched after this step.)

image-20240613-085257.png
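
For reference, the same installation can be triggered from the AWS CLI; this is a sketch with a hypothetical instance ID:

aws ssm send-command \
  --document-name "AWS-ConfigureAWSPackage" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters 'action=Install,name=AmazonCloudWatchAgent'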
  3. Create the configuration file and save it to the AWS Systems Manager Parameter Store.

    1. Log in to the instance selected as the target for the CloudWatch agent installation in the previous step.

    2. Start a terminal and launch /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

    3. Provide a response to every query the configuration wizard asks, as shown in the screenshots below.

Select "Yes" for CollectD only if it is installed; otherwise the agent will fail to load the configuration file.

image-20240606-115751 (1).png

    4. For Linux, Adaptive MxDR supports the /var/log/secure and /var/log/messages files, so provide the log file path twice along with an appropriate log group name for each. (A sketch of the resulting configuration follows this step.)

    5. For SUSE, Adaptive MxDR supports /var/log/messages only.

image-20240606-115925.png
image-20240606-115949.png
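
For reference, below is a sketch of the kind of logs configuration the wizard produces, written to the agent's default config path; the log group names are illustrative placeholders, and on SUSE only the /var/log/messages entry applies. Run the wizard rather than hand-writing this file:

# Sketch only: the wizard normally generates this file and can also
# push it to the Parameter Store. Log group names are placeholders.
cat <<'EOF' | sudo tee /opt/aws/amazon-cloudwatch-agent/bin/config.json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "linux-var-log-messages",
            "log_stream_name": "{instance_id}"
          },
          {
            "file_path": "/var/log/secure",
            "log_group_name": "linux-var-log-secure",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF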
  4. Once done, check the Systems Manager Parameter Store for the newly created configuration file containing the parameters provided as input.

    1. Navigate to AWS Systems Manager > Parameter Store.

    2. Copy the name of the newly created parameter. (A CLI check is sketched after this step.)

image-20240613-085751.png
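
You can also confirm the stored configuration from the CLI; AmazonCloudWatch-linux is the name the wizard typically suggests, so substitute the name you copied:

aws ssm get-parameter --name "AmazonCloudWatch-linux" \
  --query 'Parameter.Value' --output text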
  5. Start the agent, pointing it to the configuration file, using AWS Systems Manager.

    1. To start the newly configured CloudWatch agents on the instances, use Run Command.

    2. Navigate to AWS Systems Manager > Run Command.

    3. On the next page, search for AmazonCloudWatch-ManageAgent using the search box.

image-20240606-120119.png

    4. Specify the name of the newly created Parameter Store configuration file.

image-20240606-120144.png

    5. Select the target Linux machine(s) where you want to run the command. Scroll down and click Run.

image-20240613-090214.png

    6. Retain the default values for the other parameters; for example, the timeout is kept at 600 seconds.

image-20240606-120252.png

    7. Retain the default values for Rate Control.

image-20240606-120322.png

    8. Currently, no events are sent to the S3 bucket; leave it unchecked.

image-20240606-120353.png

    9. SNS notifications are presently disabled; once enabled, they can be configured with the SNS details. Leave it unchecked.

image-20240606-120429.png

    10. Leave the AWS command line interface command unchanged; use the default value.

image-20240613-091318.png

    11. Once you have finished configuring all the settings above, click Run. Systems Manager will start the CloudWatch agent on the instance, initiating the log flow. (An equivalent CLI invocation is sketched after this step.)

image-20240613-091419.png
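
The same configure-and-start action can be issued from the AWS CLI; a sketch with a hypothetical instance ID and the parameter name from the earlier step:

aws ssm send-command \
  --document-name "AmazonCloudWatch-ManageAgent" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters 'action=configure,mode=ec2,optionalConfigurationSource=ssm,optionalConfigurationLocation=AmazonCloudWatch-linux,optionalRestart=yes'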
  6. Navigate to CloudWatch > Logs to see the logs being collected. The log groups are created automatically once the agents start shipping logs from the instances. (A CLI check is sketched below.)
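
A quick CLI check that the log groups exist and events are arriving; the group name is the placeholder used above, and aws logs tail requires AWS CLI v2:

aws logs describe-log-groups --query 'logGroups[].logGroupName'
aws logs tail linux-var-log-messages --since 15m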

  7. Refer to the following page to check the required IAM user policies: Accenture MDR Quick Start Guide in Configuring IAM User and KMS Key Policies

  8. Below are the URL details that must be allowed for connectivity (identify the URLs by referring to the AWS documentation for your services and regions). A quick connectivity check is sketched after this list.
    IAM: For any logging source, the IAM URL should be allowed.
    https://docs.aws.amazon.com/general/latest/gr/iam-service.html

    CloudWatch: For a CloudWatch logging source, the CloudWatch URL should be allowed.
    https://docs.aws.amazon.com/general/latest/gr/cwl_region.html

  9. Note that if the CyberHub is installed on AWS, select Application running on an AWS compute service while creating the access key.

image-20240606-120841.png
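
As a simple connectivity check for the allowed URLs, you can probe the service endpoints from the collector host; us-east-1 is an example region, so substitute your own, and any HTTP response (even 403 or 404) indicates the endpoint is reachable:

# IAM uses a global endpoint; CloudWatch Logs endpoints are per region
curl -sI https://iam.amazonaws.com
curl -sI https://logs.us-east-1.amazonaws.com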

As Adaptive MxDR supports logging through S3 or SQS, logs must be exported from CloudWatch to S3. Follow the page below to automate continuous export of logs from a CloudWatch log group to an S3 bucket using Lambda.

CloudWatch Log Group to S3 Bucket Export Automation using Lambda

Once logs are available in the S3 bucket, if you wish to configure SQS, follow the steps below:

  1. Follow the guide provided above to export logs from the CloudWatch log group to the S3 bucket.

  2. Create an SQS queue and attach it to S3. Refer to: Configuring AWS Simple Queue Service (SQS) with S3 Storage. (A quick verification is sketched below.)
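
Once the S3 event notification is attached to the queue, a minimal check that notifications are arriving; the queue name, region, and account number below are placeholders:

aws sqs get-queue-url --queue-name my-log-queue
aws sqs receive-message \
  --queue-url "https://sqs.us-east-1.amazonaws.com/123456789012/my-log-queue" \
  --max-number-of-messages 1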


Integration Parameters

S3:

REGION (default: N/A)
Select the region of your S3 bucket.

S3 URI (default: N/A)
The S3 URI to ingest. It is a combination of the S3 bucket name and prefix, e.g., s3://<S3 bucket name>/<prefix>.

URI IS A (default: N/A)
The type of file indicated by the URI. Valid values are:

  • FILES: The URI points to a single file, which will be ingested with each execution of the feed.

  • FOLDERS: The URI points to a directory. All files contained within the directory will be ingested with each execution of the feed.

  • FOLDERS_RECURSIVE: The URI points to a directory. All files and directories contained within the indicated directory will be ingested, including all files and directories within those directories, and so on.

SOURCE DELETION OPTION (default: N/A)
Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.

ACCESS KEY ID (default: N/A)
The 20-character ID associated with your Amazon IAM account.

SECRET ACCESS KEY (default: N/A)
The 40-character access key associated with your Amazon IAM account.

SQS:

REGION (default: N/A)
Select the region of your S3 bucket.

QUEUE NAME (default: N/A)
The SQS queue name.

ACCOUNT NUMBER (default: N/A)
The account number for the SQS queue and S3 bucket.

QUEUE ACCESS KEY ID (default: N/A)
The 20-character ID associated with your Amazon IAM account.

QUEUE SECRET ACCESS KEY (default: N/A)
The 40-character access key associated with your Amazon IAM account.

SOURCE DELETION OPTION (default: N/A)
Whether to delete source files after they have been transferred to Chronicle. This reduces storage costs. Valid values are:

  • SOURCE_DELETION_NEVER: Never delete files from the source.

  • SOURCE_DELETION_ON_SUCCESS: Delete files and empty directories from the source after successful ingestion.

  • SOURCE_DELETION_ON_SUCCESS_FILES_ONLY: Delete files from the source after successful ingestion.
