
File data source

About log files

 

A log file is a file containing records of events in chronological order.

 

Logging is the chronological recording of data about system events (errors, warnings, messages) with a configurable level of detail. Usually, the data is saved to a file.

 

Examining the contents of an error log file after a failure often makes it possible to understand its cause. Many hardware and software systems use log files to save and store event data.

 

Purpose

 

The program monitors folders (directories) with log files, or individual log files, in real time. Once the program detects new data in a log file, it can notify the administrator or export and archive the data. The built-in script and filter tools let you single out only the events you are interested in from one or several log files. This considerably reduces the workload of an administrator who maintains several web or data servers.

 

A good example of how to use the program is monitoring a computer where many users can create, copy, and edit files. Log files record all changes, so the administrator can track every operation through them. With our program, the administrator can define the event types to be detected (for example, deleting a file or creating a file with a certain name) and receive an immediate notification about such an event as a desktop or e-mail message. The program becomes even more useful if you need to monitor several servers simultaneously.

 

How the program works

 

Once started, the program analyzes the list of folders specified in the configuration, checking whether each folder exists. If a folder exists, it is added to the scan list; otherwise, it is skipped. When the data source starts, the file list is also populated with the initial file sizes. The program then switches to one of the scan modes specified in the configuration:
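The startup step described above can be sketched in Python as follows. This is a minimal illustration, not the program's actual code; the function and variable names are made up for the example.

```python
import os

def build_scan_list(folders):
    """Keep only the configured folders that actually exist, and record
    each file's initial size so later scans can detect changes."""
    scan_list = []
    initial_sizes = {}
    for folder in folders:
        if not os.path.isdir(folder):
            continue  # missing folders are skipped, as described above
        scan_list.append(folder)
        for root, _dirs, files in os.walk(folder):
            for name in files:
                path = os.path.join(root, name)
                initial_sizes[path] = os.path.getsize(path)
    return scan_list, initial_sizes
```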

 

1."Simple" – simple scan mode. In this mode, the program periodically scans the folders and subfolders specified in the configuration for file changes.

2."Shell" – shell mode. In this mode, the program relies on operating system events raised when files and folders are created or modified.
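One pass of a "Simple"-style scan can be sketched as a size comparison against the previously recorded values. This is an illustrative sketch only (the names are invented); the real program's simple mode may differ in detail, and shell mode would instead subscribe to OS change notifications.

```python
import os

def poll_for_changes(folder, last_sizes):
    """One polling pass: compare current file sizes in `folder`
    (recursively) against the sizes recorded on the previous pass.
    Returns the paths that are new or whose size has changed."""
    changed = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if last_sizes.get(path) != size:
                changed.append(path)
            last_sizes[path] = size  # remember for the next pass
    return changed
```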

 

If the program is configured to read data at startup, then after the initial processing it reads data from the files in the corresponding folders and subfolders and prepares this data for exporting, archiving, or sending notifications.

 

In both modes, the program detects new files and folders and starts processing them; it also detects deleted files and folders and stops processing them.

 

If a file has just been created, or the size of an existing file has changed, the program reads the new data from the file and passes it on for further processing. The parameters specified in the configuration are used to read the data (see "File settings"). If a delay before reading the file is set in the configuration and the file changes during this delay, the delay is restarted; if the file changes again, reading is postponed once more, and so on until the file stops changing. After the program reads the file, it performs one of the following three operations on it:
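The restart-on-change delay behaves like a debounce: each change during the wait restarts the timer. A minimal sketch of that logic, with illustrative names, might look like this:

```python
import os
import time

def read_when_stable(path, delay, read_file):
    """Wait until the file's size stops changing for a full `delay`
    interval, then call `read_file`. Each size change during the wait
    restarts the timer, mirroring the behavior described above."""
    last_size = os.path.getsize(path)
    while True:
        time.sleep(delay)
        size = os.path.getsize(path)
        if size == last_size:
            break  # quiet for a full delay interval: safe to read
        last_size = size  # changed again: restart the wait
    return read_file(path)
```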

 

1.The file is deleted;

2.The file is cleared, i.e., the file size becomes equal to zero;

3.The file is not modified.
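The three post-read operations above can be sketched as a single dispatch function. The action names here are invented for the example and do not correspond to the program's configuration keys.

```python
import os

def finish_file(path, action):
    """Apply one of the three post-read operations:
    "delete" removes the file, "clear" truncates it to zero bytes,
    and "keep" leaves it unchanged."""
    if action == "delete":
        os.remove(path)
    elif action == "clear":
        with open(path, "w"):
            pass  # opening in write mode truncates the file
    elif action != "keep":
        raise ValueError("unknown action: %s" % action)
```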

 

The program takes into account the file mask specified in the configuration: if "*.txt" is specified, the program scans and reads data only from files with the .txt extension and ignores all other files. The program also uses the minimum file size value from the configuration during scanning. If a file is smaller than the specified value, the program does not read it until its size is equal to or larger than that value.
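Both filters can be expressed as a single predicate. The sketch below uses Python's standard fnmatch module, which implements "*.txt"-style wildcard matching; the function name is illustrative.

```python
import fnmatch
import os

def should_read(path, mask, min_size):
    """Return True only if the file name matches the configured mask
    (e.g. "*.txt") and the file has reached the minimum size."""
    if not fnmatch.fnmatch(os.path.basename(path), mask):
        return False  # mask mismatch: ignore the file entirely
    return os.path.getsize(path) >= min_size
```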

 

The program reads data from the file in blocks whose size is specified in the configuration. The larger the block, the fewer times the program accesses the disk and the better the performance – but the higher the demands on the computer's resources.
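Block-based reading from a known offset (for example, the last recorded file size) can be sketched as a generator. This is an illustration of the technique, not the program's implementation.

```python
def read_in_blocks(path, offset, block_size):
    """Read a file from `offset` to the end in fixed-size blocks.
    A larger block_size means fewer disk accesses per file, at the
    cost of more memory held per read."""
    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            block = f.read(block_size)
            if not block:
                break  # end of file reached
            yield block
```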