(For example: C\:\\Program Files\\XpoLogCenter4.5\\)
Optional Parameters:
USER_INPUT_AGENT_MODE="Agent Mode Active" (use "Agent Mode Not Active" for a regular installation)
USER_INPUT_START_XPOLOG=1 (use 0 to prevent XpoLog from starting once the installation completes)
USER_INPUT_SERVICE_NAME=XpoLogCenter (use a different service name if needed)
- Note: ensure there are no spaces at the end of any of the lines in the file installer.properties
3. Execute command: XpoLogCenterSetup.exe -f installer.properties
4. XpoLog will be installed in the background and will be started automatically, unless you specified otherwise during installation.
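Putting the parameters above together, a complete installer.properties for a silent Windows installation might look like the following sketch. The install path and service name are examples only (the path reuses the example shown earlier); adjust USER_INSTALL_DIR to your chosen directory and remember that no line may end with a space:

```
INSTALLER_UI=SILENT
USER_INSTALL_DIR=C:\\Program Files\\XpoLogCenter4.5\\
USER_INPUT_AGENT_MODE="Agent Mode Not Active"
USER_INPUT_START_XPOLOG=1
USER_INPUT_SERVICE_NAME=XpoLogCenter
```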
Part Three: Running Your New Software
XpoLog starts automatically after installation. There is a single process, XpoLogCenter, which appears in the Windows Services panel. It is
highly recommended to configure a user account on the XpoLogCenter service, which XpoLog can use when reading logs from machines in the
environment.
To run your new software:
1. Recommended: In the Windows Services panel, under the Log On tab, enter a username that the XpoLog service can
use to connect and read logs over Windows network authentication in your organization.
2. Open a browser to the following URL: http://MACHINE_NAME:30303. You will be redirected to the XpoLog Center homepage.
Installing on Linux/Solaris
You may want to print these instructions. When you are ready to install, go to the Download Center on the XpoLog site, www.xplg.com.
Part One: Downloading Your New Software
To download your new software:
1. On the XpoLog download page, click the link or button for the product that you want to download.
2. When you are done, click the Download link or the Download Now button.
Note: Be sure to click the Download Now button for the product that you want to install.
3. Do one of the following:
If the software is downloaded automatically into a default folder, you see a download progress dialog box. Make a note of the
directory (folder) and filename for the software that you are downloading.
If a "Save As" dialog box appears, choose a folder and filename for the software that you are downloading.
Part Two: Installing Your New Software
To install your new software:
After downloading is complete, you may see a "Download successful" dialog box. If you see this dialog box, click Install and go
on to step 3. Otherwise, continue with step 2.
1. (Skip this step if installation has already started.) Open the folder that the new software was downloaded into, copy the file to the target
machine, and gunzip it. For instance, if you downloaded XpoLog Center, run gunzip XpoLogCenterSetup.bin.gz (or, for x64, gunzip
XpoLogCenterSetup-64.bin.gz).
2. After unzipping the file, execute the .bin file: sh XpoLogCenterSetup.bin (or, for x64, sh XpoLogCenterSetup-64.bin).
The installation wizard will start.
3. When the XpoLog Setup message appears, read the information and follow the installation process.
4. Read the instructions on each step.
5. When you see the prompt telling you that installation is complete, XpoLog will be started automatically.
Important: At any step, you can click Cancel to quit the installation.
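The unpack step above can be sketched with a stand-in file in place of the real installer (the directory and payload here are examples only):

```shell
# Demonstrate the gunzip step with a stand-in file; the real file would be
# XpoLogCenterSetup.bin.gz from the Download Center.
mkdir -p /tmp/xpolog-demo && cd /tmp/xpolog-demo
echo "installer payload" > XpoLogCenterSetup.bin
gzip -f XpoLogCenterSetup.bin          # simulate the downloaded .gz archive
gunzip XpoLogCenterSetup.bin.gz        # unpack it, as in step 1 above
ls -l XpoLogCenterSetup.bin            # the .bin file is now ready to run
# On a real system you would now execute: sh XpoLogCenterSetup.bin
```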
Silent Mode:
1. Open the folder that the new software was downloaded into, copy the file to the target machine, and gunzip it. For
instance, if you downloaded XpoLog Center, run gunzip XpoLogCenterSetup.bin.gz (or, for x64, gunzip
XpoLogCenterSetup-64.bin.gz).
2. In the folder that the new software was downloaded into, create a file named installer.properties with the following contents:
Mandatory Parameters:
INSTALLER_UI=SILENT
USER_INSTALL_DIR= (For example: /apps/XpoLogCenter/)
Optional Parameters:
USER_INPUT_AGENT_MODE="Agent Mode Active" (use "Agent Mode Not Active" for a regular installation)
USER_INPUT_START_XPOLOG_CONSOLE="Yes" (use "No" to prevent XpoLog from starting once the
installation is complete)
- Note: ensure there are no spaces at the end of any of the lines in the file installer.properties
3. Execute command: sh XpoLogCenterSetup.bin -f installer.properties
4. XpoLog will be installed in the background and will be started automatically, unless you specified otherwise during installation.
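Because the silent installer reads each value verbatim, the trailing-space note above matters. This hedged sketch strips trailing whitespace from installer.properties with sed before running the installer (the temp path and file contents are illustrative):

```shell
# Build a sample installer.properties containing an accidental trailing
# space, then strip trailing whitespace from every line with sed.
mkdir -p /tmp/xpolog-silent && cd /tmp/xpolog-silent
printf 'INSTALLER_UI=SILENT \nUSER_INSTALL_DIR=/apps/XpoLogCenter/\n' > installer.properties
sed -i 's/[[:space:]]*$//' installer.properties
# The installer would then be run as: sh XpoLogCenterSetup.bin -f installer.properties
```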
Part Three: Running Your New Software
XpoLog starts automatically after installation. There are several useful commands for starting, stopping, restarting, or finding out the status of the
server:
cd to XPOLOG_HOME
Run: sh runXpoLog.sh start|stop|restart|stat
start = starting XpoLog
stop = stopping XpoLog
restart = restarting XpoLog
stat = finding out whether XpoLog is running or not
To run your new software:
1. Go to the installation directory.
2. XpoLog should start automatically after installation. To control and verify, use these commands:
Start XpoLog – sh /…/XpoLogCenter/runXpoLog.sh start
Stop XpoLog – sh /…/XpoLogCenter/runXpoLog.sh stop
Check Status – sh /…/XpoLogCenter/runXpoLog.sh stat
3. Open a browser with the following URL: http://MACHINE_NAME:30303.
You will be redirected to the XpoLog Center homepage.
Installing on UNIX without internal JVM (JAVA Provided Externally)
You may want to print these instructions. When you are ready to install, go to XpoLog site www.xpolog.com to the Download Center.
This XpoLog Installer does not contain JAVA and uses an external JAVA which should already be available on the machine that you plan to run
XpoLog on. Important: JAVA 1.6+ is required.
Part One: Downloading Your New Software
To download your new software:
1. On the XpoLog download page, click the link or button for the product that you want to download (without internal JVM).
2. Log in to the target machine with the same user you plan to run XpoLog with, to ensure JAVA is available. Run the command java
-version and confirm the output reports JAVA 1.6+.
If the java command returns no result, or the JAVA version is prior to 1.6, please contact XpoLog Support for additional
information.
3. Do one of the following:
If the software is downloaded automatically into a default folder, you see a download progress dialog box. Make a note of the
directory (folder) and filename for the software that you are downloading.
If a "Save As" dialog box appears, choose a folder and filename for the software that you are downloading.
Part Two: Installing Your New Software
To install your new software:
After downloading is complete, you may see a "Download successful" dialog box. If you see this dialog box, click Install and go on to step
3. Otherwise, continue with step 2.
1. (Skip this step if installation has already started.) Open the folder that the new software was downloaded into, copy the file to the target
machine, and gunzip it. For instance, if you downloaded XpoLog Center, run gunzip XpoLogCenterSetupNoJava.bin.gz.
2. After unzipping the file, execute the .bin file: sh XpoLogCenterSetupNoJava.bin.
The installation wizard will start.
Note: the installation wizard looks for a local JAVA to use. It is possible to specify the full path to the JAVA executable by
running the installation with that path as an argument: sh XpoLogCenterSetupNoJava.bin LAX_VM
"/FULL/PATH/TO/JAVA/EXECUTABLE"
3. When the XpoLog Setup message appears, read the information and follow the installation process.
4. Read the instructions on each step.
5. When you see the prompt telling you that installation is complete, XpoLog will be started automatically, unless you specified otherwise
during installation.
Important: At any step, you can click cancel to quit the installation.
Silent Mode:
1. Open the folder that the new software was downloaded into, copy the file to the target machine, and gunzip it. For
instance, if you downloaded XpoLog Center, run gunzip XpoLogCenterSetupNoJava.bin.gz (or, for x64, gunzip
XpoLogCenterSetupNoJava-64.bin.gz).
2. In the folder that the new software was downloaded into, create a file named installer.properties with the following contents:
Mandatory Parameters:
INSTALLER_UI=SILENT
USER_INSTALL_DIR= (For example: /apps/XpoLogCenter/)
Optional Parameters:
USER_INPUT_AGENT_MODE="Agent Mode Active" (use "Agent Mode Not Active" for a regular installation)
USER_INPUT_START_XPOLOG_CONSOLE="Yes" (use "No" to prevent XpoLog from starting once the installation is complete)
- Note: ensure there are no spaces at the end of any of the lines in the file installer.properties
3. Execute command: sh XpoLogCenterSetupNoJava.bin -f installer.properties
4. XpoLog will be installed in the background and will be started automatically.
Part Three: Running Your New Software
XpoLog starts automatically after installation. There are several useful commands for starting, stopping, restarting, or finding out the status of the
server:
cd to XPOLOG_HOME
Run: sh runXpoLog.sh start|stop|restart|stat
start = starting XpoLog
stop = stopping XpoLog
restart = restarting XpoLog
stat = finding out whether XpoLog is running or not
To run your new software:
1. Go to the installation directory.
2. XpoLog should start automatically after installation. To control and verify, use these commands:
Start XpoLog – sh /…/XpoLogCenter/runXpoLog.sh start
Stop XpoLog – sh /…/XpoLogCenter/runXpoLog.sh stop
Check Status – sh /…/XpoLogCenter/runXpoLog.sh stat
3. Open a browser with the following URL: http://MACHINE_NAME:30303.
You will be redirected to the XpoLog Center homepage.
Deploying XpoLog as a Web Application
Part One: Downloading Your New Software
To download your new software:
1. On the XpoLog download page, click the link or button for the product that you want to download.
2. When you are done, click the Download link or the Download Now button.
Note: Be sure to click the Download Now button for the product that you want to install.
Part Two: Installing Your New Software
XpoLog can be deployed on most application servers. The deployment is standard, according to the application server that you are using. For specific information on how to deploy XpoLog on Apache Tomcat, IBM WebSphere, Oracle WebLogic, JBoss, or any other application server, please contact
the XpoLog support team at support@xplg.com
Once XpoLog WAR has been deployed successfully, the context of XpoLog is ‘logeye’.
Part Three: Running Your New Software
Start/stop/restart the application on the application server on which you deployed XpoLog.
Note: XpoLog's default context is 'logeye'.
Thank you for installing XpoLog Center.
Post Installation Recommendations
Configuring XpoLog to Storage
It is highly recommended to configure XpoLog to work against an external storage location / directory. XpoLog requires full permissions
(read/write) on this location with direct/fast access.
To configure XpoLog to storage:
1. Create a folder under the name “XpoLogConfig”.
2. Go to XpoLog > Settings > General.
3. Select the Use external configuration directory checkbox and type the absolute path to XpoLogConfig – "…/XpoLogConfig/"
XpoLog saves the information and requests a restart.
4. Restart XpoLog, and go once again to XpoLog > Settings > General, and ensure that the configuration was saved successfully.
XpoLog saves all the information into this external folder.
Note: It is recommended to back it up occasionally. If you remove your XpoLog version and redeploy, you can always point the new
XpoLog instance to this folder to use the existing configuration or for clustering purposes.
For further information, contact the support team at support@xplg.com
Allocating More Memory to XpoLog (64-bit installations)
It is highly recommended to install XpoLog on a 64 bit OS, which enables higher allocation of memory than the default.
To allocate more memory:
1. Stop XpoLog.
2. Edit the file /.../XPOLOG_INSTALL_DIR/XpoLog.lax (Windows) OR
/.../XPOLOG_INSTALL_DIR/XpoLog.sh.lax (Linux/Solaris).
3. Look for -Xmx1024m (default allocation is 1024 MB) and allocate more memory based on the available memory
of the machine. For example, to allocate 4096 MB change the value to be -Xmx4096m and save.
Note: it is recommended to allocate 75% of the machine's memory.
4. Start XpoLog.
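The heap-size edit above can be scripted; this sketch performs the change on a sample lax options line in a temp directory (on a real installation you would edit XpoLog.lax or XpoLog.sh.lax in the installation directory):

```shell
# Simulate the -Xmx edit on a sample lax options line; the real file is
# XpoLog.lax (Windows) or XpoLog.sh.lax (Linux/Solaris) in the install dir.
mkdir -p /tmp/xpolog-mem && cd /tmp/xpolog-mem
echo 'lax.nl.java.option.additional=-server -Xmx1024m -Dfile.encoding=UTF-8' > XpoLog.sh.lax
sed -i 's/-Xmx1024m/-Xmx4096m/' XpoLog.sh.lax   # raise the 1024 MB default to 4096 MB
grep -o 'Xmx[0-9]*m' XpoLog.sh.lax              # prints Xmx4096m
```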
Setting a default Character Encoding (Optional)
If XpoLog should support special characters that differ from the machine's default (especially Chinese, Japanese,
Korean, etc.), it is recommended to modify the default encoding as follows.
To change default encoding:
1. Stop XpoLog.
2. Edit the file /.../XPOLOG_INSTALL_DIR/XpoLog.lax (Windows) OR /.../XPOLOG_INSTALL_DIR/XpoLog.sh.lax (Linux/Solaris).
3. Edit the parameter -Dfile.encoding=UTF-8 at the end of the LAX.NL.JAVA.OPTION.ADDITIONAL parameters line to any desired default
encoding.
Note: the encoding can be any of the JAVA supported encodings.
4. Start XpoLog.
Setting a default locale (Optional)
In case XpoLog should be capable of displaying content in different languages and normalize dates from different regions, it is possible to assign
a JVM locale. By default, the JVM locale is the locale of the platform where XpoLog is installed. To override the default JVM locale, you must set
the appropriate language and region (country) arguments in XpoLog.
To change default locale:
1. Stop XpoLog.
2. Edit the file /.../XPOLOG_INSTALL_DIR/XpoLog.lax (Windows) OR /.../XPOLOG_INSTALL_DIR/XpoLog.sh.lax (Linux/Solaris).
3. Add the parameters -Duser.language=en -Duser.region=US at the end of the LAX.NL.JAVA.OPTION.ADDITIONAL parameters
line.
Note: The above example is for the English US locale; the complete list of locales that JAVA supports can be found here: JAVA locale list
4. Start XpoLog.
Windows Specific - Assign a service account
After installation, XpoLog service is available under the Windows services panel (XpoLogCenter). It is highly
recommended, after installation, to assign an account on the service for optimized connectivity between XpoLog and
remote log sources over the Windows network.
To assign a service account:
1. Go to the Windows Services Panel.
2. Right click the XpoLogCenter service > Properties
3. Go to the 'Log On' tab (by default, XpoLog is installed with a Local System Account). Select the 'This account' radio
button and enter a specific account with sufficient privileges that XpoLog can use to read remote log sources.
4. Save and restart.
This allows Administrators to add logs over the Windows network as if they were local (direct access) using UNC
paths (for example: \\server1\c$\logs\log4j.log)
Linux Specific - Allocating Allowed Open Files / Number of Processes
It is very important to allocate 10,000 allowed open files and allowed number of processes to XpoLog that runs on
Linux (default is usually 1024).
The allocation can be done specifically to the user who runs XpoLog:
To check the limitation for the user who runs XpoLog:
1. Open an SSH terminal to XpoLog's machine (for example, using PuTTY) and log in with the same user that runs XpoLog.
2. Run the command ulimit -n and then the command ulimit -u.
The recommended output for each is 10000.
To allocate the required number of open files:
Log in to the machine that runs XpoLog (as superuser if needed) and edit the file /etc/security/limits.conf by adding the following line:
[USER_THAT_RUNS_XPOLOG] - nofile [MAX_NUMBER_OF_FILES]
Where
[USER_THAT_RUNS_XPOLOG] is the user that runs the XpoLog process (superuser, if you logged in as such).
[MAX_NUMBER_OF_FILES] is the new limit, which should be set to 10000.
For instance, you can add the line: xpolog - nofile 10000
To allocate the required number of processes:
Log in to the machine that runs XpoLog (as superuser if needed) and edit the file /etc/security/limits.conf by adding the following line:
[USER_THAT_RUNS_XPOLOG] - nproc [MAX_NUMBER_OF_PROCESSES]
Where
[USER_THAT_RUNS_XPOLOG] is the user that runs the XpoLog process (superuser, if you logged in as such).
[MAX_NUMBER_OF_PROCESSES] is the new limit, which should be set to 10000.
For instance, you can add the line: xpolog - nproc 10000
IMPORTANT: After making this change, log out and log in again so that the changes take effect. Verify by running
ulimit -n and ulimit -u again with the same user that runs XpoLog, confirming both return 10000, and then
restart XpoLog.
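The limits.conf change above can be sketched on a scratch copy of the file; the user name 'xpolog' and the temp path are examples (a real edit targets /etc/security/limits.conf and requires superuser):

```shell
# Append the nofile and nproc limits for a hypothetical 'xpolog' user to a
# scratch copy of limits.conf; the real edit targets /etc/security/limits.conf.
mkdir -p /tmp/xpolog-limits
printf 'xpolog - nofile 10000\nxpolog - nproc 10000\n' >> /tmp/xpolog-limits/limits.conf
grep xpolog /tmp/xpolog-limits/limits.conf
```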
Solaris Specific - Allocating Allowed Open Files
It is very important to allocate 10,000 allowed open files to XpoLog that runs on Solaris (default is usually 1024).
To check the limitation for the user who runs XpoLog:
1. Open an SSH terminal to XpoLog's machine (for example, using PuTTY) and log in with the same user that runs XpoLog.
2. Run the command ulimit -a.
The recommended open-files value is 10000.
To allocate the required number of open files:
Log in to the machine that runs XpoLog (as superuser if needed) and edit the file /etc/system by adding the following line:
set rlim_fd_max = [MAX_NUMBER_OF_FILES]
Where
[MAX_NUMBER_OF_FILES] is the new limit, which should be set to 10000.
Once the above hard limit is set, reboot the system once. You can then increase the value of this property explicitly (up to this limit) using
the following command: ulimit -n [MAX_NUMBER_OF_FILES]
IMPORTANT: After making this change, log out and log in again so that the changes take effect. Verify by running ulimit -a again
with the same user that runs XpoLog, confirming the open-files value is 10000, and then restart XpoLog.
WAR Deployment Specific - Configuring XpoLog to Storage
It is highly recommended to configure XpoLog that is deployed as a war on an application server to an external configuration directory (storage) -
instructions are available at the top of this page.
If you need to update your XpoLog version, the war file will be replaced; if an external storage is not
configured, all data and configuration will be removed.
Advanced Installation Procedures
This topic discusses advanced installation procedures of XpoLog.
Please read it carefully before installing XpoLog.
The following advanced installations can be performed:
Installing a cluster of several XpoLog instances to process larger volumes of data (see XpoLog Cluster Installation)
Installing and using XpoLog to XpoLog deployment – usage of remote XpoLog instances scenarios (see Remote XpoLog Installation)
XpoLog Cluster Installation
General
When deploying XpoLog Center in busy environments, it is recommended to deploy several XpoLog Center instances as a cluster, for high
availability and load balancing.
The XpoLog Center cluster is composed of several instances, using a common storage in order to share the system tasks load and users’
activity. Some of the instances function as processor nodes, taking care of back-end tasks (indexing, analysis, monitoring, and more), while the
rest of the instances function as UI nodes. This architecture enables easy scaling of XpoLog Center in heavily loaded environments, without
influencing the users’ front-end experience. A load balancer can be used if more than one UI node is deployed.
It is highly recommended to consult with XpoLog support prior to setting up the clustered environment. Review the System Architecture diagrams
that explain the XpoLog Center Cluster architecture; step-by-step cluster deployment instructions follow below.
XpoLog Center Cluster Deployment Instructions
The following are instructions for installing XpoLog Center in a clustered environment, with two UI nodes and a single processor node.
Preparations
1. Decide if XpoLog Center will be installed in a Windows or Linux environment. In case there is a need to analyze log data from
Windows machines, XpoLog Center must be installed on Windows machines.
2. Prepare 2+ machines (physical or virtual); one for the UI nodes and one for the processor node, based on the XpoLog Center
hardware requirements.
3. Prepare a shared storage device that can be accessed by all XpoLog Center nodes, based on the XpoLog Center hardware requirements. It is
mandatory that ALL XpoLog instances in the cluster will have full permissions (READ/WRITE) on the allocated shared storage.
Note: XpoLog Center performs heavy read/write operations. It is highly recommended that the fastest storage connectivity is allocated to the UI
node.
Installation
1. Download the XpoLog Center installer from the XpoLog website at http://www.xpolog.com
2. Run the installer on each node machine - See installation instructions for more details
3. Once completed, open a web browser directly to each node at: http://[NODE_HOST_NAME]:30303 to verify that XpoLog Center was installed
successfully.
Configuration
1. Create a folder that will store XpoLog Center’s data on the shared storage device (referred to
as EXTERNAL_CONFIGURATION_DIRECTORY).
2. Open a web browser to each node at http://[NODE_HOST_NAME]:30303, go to XpoLog > Settings > General, and do the following:
a. Select the Use external configuration directory checkbox.
b. Enter the full path to the EXTERNAL_CONFIGURATION_DIRECTORY in the 'Configuration full path' field.
c. Select the Cluster Mode checkbox.
d. Click Save.
e. Wait until you receive a message that the configuration was saved successfully along with a restart request, but do not restart XpoLog Center yet.
f. Under the Mail tab, specify the SMTP details and system administrator email address. XpoLog Center will send an alert in case the active
processor node changes.
3. On each node (starting with the processor node), go to XPOLOG_CENTER_INSTALLATION_DIRECTORY, edit the lax file (XpoLog.lax on
Windows installation; XpoLog.sh.lax on Linux installation), and perform the following changes to the line that starts with
lax.nl.java.option.additional=
a. By default, XpoLog Center is allocated with 1024 MB of memory. It is recommended to increase this value to about 75% of the machine’s
memory. To do so, replace -Xmx1024m with -XmxNEW_VALUE
b. In a clustered environment, each node should be assigned a unique name for it to be identified in the system. To do so, append the following
to the end of the line -Dxpolog.uid.structure=[NODE_NAME]
Example node names: PROCESSOR1, PROCESSOR2, UI01, UI02, etc.
c. Save the file.
d. Restart XpoLog Center (on a Windows installation, restart the XpoLogCenter service; on a Linux installation, run the
script XPOLOG_CENTER_INSTALLATION_DIRECTORY/runXpoLog.sh restart).
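Steps a and b above can be sketched as a scripted edit on a sample lax options line; the node name PROCESSOR1, the new heap size, and the temp path are examples only:

```shell
# Simulate the cluster lax edits: raise the heap and append a unique node
# name; the real file is XpoLog.lax / XpoLog.sh.lax in the install dir.
mkdir -p /tmp/xpolog-cluster && cd /tmp/xpolog-cluster
echo 'lax.nl.java.option.additional=-server -Xmx1024m' > XpoLog.sh.lax
sed -i -e 's/-Xmx1024m/-Xmx4096m/' \
       -e 's/$/ -Dxpolog.uid.structure=PROCESSOR1/' XpoLog.sh.lax
cat XpoLog.sh.lax
```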
4. In a clustered environment, some configuration properties should be tuned. To do so, open a web browser to the processor node at
http://[PROCESSOR_NODE_HOST_NAME]:30303/logeye/support and select 'Advanced Settings' in the select box.
For each of the following properties, enter the property name in the text box of the ‘Name’ column, right-click the row, click ‘Edit’, enter the custom
value and click Save:
Property name: cluster.allowedMasters
Purpose: used to determine the name of the processor node.
Should be customized: always
Custom value: [NODE_NAME_1],[NODE_NAME_2]...,[NODE_NAME_N] (of the processor nodes)
Property name: htmlUtil.BaseUrl
Purpose: used by the processor node when exporting information on the server side
Should be customized: always
Custom value: http://[PROCESSOR_NODE_HOST_NAME]:[PROCESSOR_NODE_PORT]/logeye/
Property name: htmlUtil.ui.BaseUrl
Purpose: used by the UI node when exporting information on the server side
Should be customized: always
Custom value: http://[UI_NODE_HOST_NAME]:[UI_NODE_PORT]/logeye/
Property name: mail.link.baseUrl
Purpose: used in links that point back to XpoLog from an email
Should be customized: always
Custom value: http://[UI_NODE_HOST_NAME]:[UI_NODE_PORT]/logeye/ (in multiple UI node environments, consider pointing the link
to a load balancer, if one exists)
Note that it is highly recommended to consult XpoLog support before editing any of the following properties:
Property name: cluster.shouldCheckForMaster
Purpose: used to indicate whether the UI nodes should take over the processor node activity in case the
processor node is down
Should be customized: only if UI nodes should never take over the processor activity
Custom value: false
Property name: cluster.takeOverAttempts.notAllowedMaster
Purpose: used to indicate the number of minutes that should pass before a UI node attempts to take over the
processor activity in case the processor node is down
Should be customized: only when there’s a need to allow the processor node to be down for more than 5
minutes without a UI node taking over its activity
Custom value: numeric value larger than 5
Note: in case there is an XpoLog instance which is dedicated to being a listener, then the following has to be done for
that specific instance:
Allocate 2GB of memory (there is no need for more)
Set this specific instance to run in agent mode:
Open a browser to the instance directly
Go to Manager > Settings > General
Check the 'Agent Mode' checkbox and save
Set this specific instance not to be recycled: Edit the file LISTENER_INSTALL_DIR/xpologInit.prop
Add the line (empty recycle cron expression):
recycleCronExpression=
Restart Listener instance
Go to the Listener Account (Manager > Administration > Listeners):
Stop the listener account
Edit the listener account
Set Listening node to be the Listener instance
Set the Indexing node to be the MASTER or one of the PROCESSORS
Save the listener account
Start the listener account
Verification
1. Open a web browser to each node at http://[NODE_HOST_NAME]:30303, go to XpoLog > Settings > General, and verify that the external
configuration directory and cluster mode are active.
2. On the shared storage device, go to EXTERNAL_CONFIGURATION_DIRECTORY/conf/general/cluster and verify that the file with suffix
.masterNode is called [PROCESSOR_NODE_NAME].masterNode. In case the file is named differently, wait two minutes and check again. If the
file still does not exist, or has a different name, verify configuration steps 3b and 4a once again.
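The marker-file check above can be sketched against a simulated directory tree; the node name PROCESSOR1 and the temp path stand in for your real shared-storage location:

```shell
# Simulated check for the processor's .masterNode marker file; the real path
# is EXTERNAL_CONFIGURATION_DIRECTORY/conf/general/cluster on shared storage.
EXT_DIR=/tmp/xpolog-ext
mkdir -p "$EXT_DIR/conf/general/cluster"
touch "$EXT_DIR/conf/general/cluster/PROCESSOR1.masterNode"   # stand-in marker
ls "$EXT_DIR"/conf/general/cluster/*.masterNode
```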
Please review the system architecture overview for additional information: XpoLog-Center-Architecture
Installing XpoLog Cluster on a single machine
It is possible to run 2 or more instances of XpoLog on a single machine as a cluster, similar to the usage of multiple machines. The procedure is
similar, with a few mandatory additional steps as listed below.
Installation
Follow the same steps as described in the cluster installation guide with the following changes during installation:
1. During installation, set the installation directory name to XpoLog_Center_UI (for the UI node) and
XpoLog_Center_PROCESSOR (for the PROCESSOR node). At first you will not be able to run both
instances, as there will be a port collision, since both instances use the same ports by default.
2. Go to PROCESSOR_INSTALL_DIR/ServletContainer/conf/ and rename the file server.xml to be server.xml.orig
and the file server.xml.processor to server.xml (this will change the ports of the processor node so that both
instances will be able to run simultaneously on the same machine).
3. Start both instances. The processor can be accessed via http://[NODE_HOST_NAME]:30304 /
https://[NODE_HOST_NAME]:30444, and the UI can be accessed on the default ports via http://[NODE_HOST_NAME]:30303 /
https://[NODE_HOST_NAME]:30443.
4. Please continue with the configuration section of the cluster installation guide.
5. If you plan to install more instances, make sure all ports are changed in INSTALL_DIR/ServletContainer/conf/server.xml, as each
instance must use different ports.
Note: the HTTP/S ports of all nodes can be changed in the settings section.
Windows Specifics
If you are running multiple nodes on a Windows machine, it is mandatory to give a unique name to each node during installation.
During installation, ensure you specify a different installation directory and a different service name for each instance. For example:
XpoLogCenterProcessor and XpoLogCenterUI.
At the end of the process, you should have an installation directory and a unique service name for each installed instance.
Note:
It is also possible to manually remove/create services, but it is recommended to do this directly in the installation wizard:
Remove service: sc delete [SERVICE_NAME]
Create service: sc create [SERVICE_NAME] binpath= "C:\PROGRA~1\XPOLOG~1\XpoLog.exe -zglaxservice XpoLogCenter" start= auto
DisplayName= "[SERVICE_DISPLAY_NAME]"
For example: sc create XpoLogProcessor binpath= "C:\PROGRA~1\XPOLOG~1\XpoLog.exe -zglaxservice XpoLogCenter" start= auto
DisplayName= "XpoLogProcessor"
It is recommended to set a service account on each of the services for optimized connectivity to machines across the network - see Windows Post
Installation.
Linux/SunOS Specifics
If you are running multiple nodes on a Linux/SunOS machine, it is recommended to allocate a specific range of CPU cores to be used by the
Processor and a specific range for the UI. For example, if there are 12 CPU cores available on the machine that the cluster is running on, the
recommended configuration is as follows:
1. Go to PROCESSOR_INSTALL_DIR and edit the file runXpoLog.prop:
a. For Linux, specify a CPU range, for example 0-3 for the first 4 CPUs or 4-7 for the second 4 CPUs.
For SunOS, specify a space-separated list of processor IDs, for example 0 1 2 3 for the first 4 CPUs or
4 5 6 7 for the second 4 CPUs.
Uncomment the line #cpus= and set the allocated range. For example: cpus=0-7
b. Save and restart the PROCESSOR_NODE; after restart you should see a message indicating that the configured range has
been applied.
2. Go to UI_INSTALL_DIR and edit the file runXpoLog.prop:
a. For Linux, specify a CPU range, for example 0-3 for the first 4 CPUs or 4-7 for the second 4 CPUs.
For SunOS, specify a space-separated list of processor IDs, for example 0 1 2 3 for the first 4 CPUs or 4 5 6 7 for the second 4
CPUs.
Uncomment the line #cpus= and set the allocated range. For example: cpus=8-11
b. Save and restart the UI_NODE; after restart you should see a message indicating that the configured range has been applied.
Using Load Balancer
If the XpoLog cluster is running with multiple UI nodes, it is recommended to install a Load Balancer in front of the cluster so that users are
automatically redirected to the UI nodes.
For more details on how to set up an XpoLog Cluster, please see here.
The following example explains how to set up an Apache Httpd server as a Load Balancer:
1. Set up Apache httpd server on Linux
2. Download the latest version of mod_jk from Apache web site. Copy this file into the Apache modules/ directory and rename it
to mod_jk.so.
3. Configure Apache:
a. Edit the file httpd.conf (Apache configuration path usually: /etc/httpd/conf/httpd.conf)
Note that the default port is usually 80, change it if needed but make sure that the ports you use are available.
Add the following right after the other LoadModule directives:
LoadModule jk_module modules/mod_jk.so
Add the following at the end of the httpd.conf file:
# Where to find workers.properties
JkWorkersFile /etc/httpd/conf/workers.properties
# Where to put jk shared memory
JkShmFile /var/log/httpd/mod_jk.shm
# Where to put jk logs
JkLogFile /var/log/httpd/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the timestamp log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkMount /* balancer
JkMount /*/* balancer
b. Create workers.properties
The workers.properties file should be located in the /etc/httpd/conf/ directory. In the file, you should have
the following parameters set for the UI1, UI2 nodes:
# Define the list of workers that will be used
worker.list=balancer
worker.balancer.type=lb
worker.balancer.balance_workers=ui1,ui2
worker.balancer.method=B
# Specifies whether requests with session IDs
# should be routed back to the same Tomcat worker
worker.balancer.sticky_session=True
# Define UI1
worker.ui1.port=
worker.ui1.host=
worker.ui1.type=ajp13
worker.ui1.lbfactor=1
# Define UI2
worker.ui2.port=
worker.ui2.host=
worker.ui2.type=ajp13
worker.ui2.lbfactor=1
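Assembled into one file, a minimal workers.properties for the UI1 and UI2 nodes might read as follows. The hosts and AJP ports shown are placeholders only and must match the AJP port configured in each node's server.xml:

```
worker.list=balancer
worker.balancer.type=lb
worker.balancer.balance_workers=ui1,ui2
worker.balancer.method=B
worker.balancer.sticky_session=True
# Define UI1 (host and port are placeholders)
worker.ui1.port=8009
worker.ui1.host=ui1.example.com
worker.ui1.type=ajp13
worker.ui1.lbfactor=1
# Define UI2 (host and port are placeholders)
worker.ui2.port=8009
worker.ui2.host=ui2.example.com
worker.ui2.type=ajp13
worker.ui2.lbfactor=1
```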
4. Configure Tomcat on each of the UI nodes:
a. On each of the XpoLog UI nodes, edit the file
/ServletContainer/conf/server.xml
b. In the AJP Connector tag, set the port to the same port that is configured for the corresponding worker in
workers.properties.
c. In the Engine tag, set the UI node name (jvmRoute) so that it matches the worker name used in the Apache
workers.properties file.
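As a sketch, the relevant server.xml entries on node UI1 might look like this; the AJP port and the jvmRoute value are examples that must match the worker definitions in workers.properties:

```xml
<!-- AJP Connector: port must match worker.ui1.port in workers.properties -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

<!-- jvmRoute must match the worker name (ui1) used in workers.properties -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="ui1">
  ...
</Engine>
```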
The following sequence is the log structure definition for the log4net log: %timestamp [%thread] %-5level %logger %ndc - %message%newline
In XpoLog such a pattern will be translated into (for more information see the conversion table below):
{date:Date,locale=en,yyyy-MM-dd HH:mm:ss,SSS} [{text:Thread,ftype=thread}]
[{priority:Priority,ftype=severity;,DEBUG;INFO;WARNING;ERROR;FATAL}] {string:Logger,ftype=logger} {string:NDC,ftype=ndc} -
{string:Message,ftype=message}
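For reference, a log4net appender configured with this layout would typically look like the following sketch; the appender name, type, and file value are examples only:

```xml
<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="app.log" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%timestamp [%thread] %-5level %logger %ndc - %message%newline" />
  </layout>
</appender>
```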
Apache Log4Net Conversion Table
logtype should be set to: log4net
Name and Description XpoLog ftype
Appears with Pattern
a Equivalent to appdomain
appdomain Used to output the friendly name of the AppDomain where the logging event was generated.
aspnet-cache Used to output all cache items in the case of %aspnet-cache or just one named item if used as %aspnet-cache{key}
This pattern is not available for Compact Framework or Client Profile assemblies.
aspnet-context Used to output all context items in the case of %aspnet-context or just one named item if used as %aspnet-context{key}
This pattern is not available for Compact Framework or Client Profile assemblies.
aspnet-request Used to output all request parameters in the case of %aspnet-request or just one named param if used as %aspnet-request{key}
This pattern is not available for Compact Framework or Client Profile assemblies.
aspnet-session Used to output all session items in the case of %aspnet-session or just one named item if used as %aspnet-session{key}
This pattern is not available for Compact Framework or Client Profile assemblies.
c Equivalent to logger
C Equivalent to type
class Equivalent to type
d Equivalent to date
date Used to output the date of the logging event in the local time zone. To output the date in universal time use
the %utcdate pattern. The date conversion specifier may be followed by a date format specifier enclosed between
braces. For example, %date{HH:mm:ss,fff} or %date{dd MMM yyyy HH:mm:ss,fff}. If no date format specifier is given
then ISO8601 format is assumed (Iso8601DateFormatter).
The date format specifier admits the same syntax as the time pattern string of the ToString.
For better results it is recommended to use the log4net date formatters. These can be specified using one of the strings
"ABSOLUTE", "DATE" and "ISO8601" for specifying AbsoluteTimeDateFormatter, DateTimeDateFormatter and
respectively Iso8601DateFormatter. For example, %date{ISO8601} or %date{ABSOLUTE}.
These dedicated date formatters perform significantly better than ToString.
exception Used to output the exception passed in with the log message.
If an exception object is stored in the logging event it will be rendered into the pattern output with a trailing newline. If
there is no exception then nothing will be output and no trailing newline will be appended. It is typical to put a newline
before the exception and to have the exception as the last data in the pattern.
F Equivalent to file
file Used to output the file name where the logging request was issued.
WARNING Generating caller location information is extremely slow. Its use should be avoided unless execution speed is
not an issue.
See the note below on the availability of caller location information.
identity Used to output the user name for the currently active user (Principal.Identity.Name).
WARNING Generating caller information is extremely slow. Its use should be avoided unless execution speed is not an
issue.
l Equivalent to location
L Equivalent to line
location Used to output location information of the caller which generated the logging event.
The location information depends on the CLI implementation but usually consists of the fully qualified name of the calling
method followed by the callers source the file name and line number between parentheses.
The location information can be very useful. However, its generation is extremely slow. Its use should be avoided unless
execution speed is not an issue.
See the note below on the availability of caller location information.
level Used to output the level of the logging event.
line Used to output the line number from where the logging request was issued.
WARNING Generating caller location information is extremely slow. Its use should be avoided unless execution speed is
not an issue.
See the note below on the availability of caller location information.
logger Used to output the logger of the logging event. The logger conversion specifier can be optionally followed by precision
specifier, that is a decimal constant in brackets.
If a precision specifier is given, then only the corresponding number of right most components of the logger name will be
printed. By default the logger name is printed in full.
For example, for the logger name "a.b.c" the pattern %logger{2} will output "b.c".
m Equivalent to message
M Equivalent to method
message Used to output the application supplied message associated with the logging event.
mdc The MDC (old name for the ThreadContext.Properties) is now part of the combined event properties. This pattern is
supported for compatibility but is equivalent to property.
method Used to output the method name where the logging request was issued.
WARNING Generating caller location information is extremely slow. Its use should be avoided unless execution speed is
not an issue.
See the note below on the availability of caller location information.
n Equivalent to newline
newline Outputs the platform dependent line separator character or characters.
This conversion pattern offers the same performance as using non-portable line separator strings such as "\n", or "\r\n".
Thus, it is the preferred way of specifying a line separator.
ndc Used to output the NDC (nested diagnostic context) associated with the thread that generated the logging event.
p Equivalent to level
P Equivalent to property
properties Equivalent to property
property Used to output an event-specific property. The key to lookup must be specified within braces and directly following
the pattern specifier, e.g. %property{user} would include the value from the property that is keyed by the string 'user'.
Each property value that is to be included in the log must be specified separately. Properties are added to events by
loggers or appenders. By default the log4net:HostName property is set to the name of machine on which the event was
originally logged.
If no key is specified, e.g. %property then all the keys and their values are printed in a comma separated list.
The properties of an event are combined from a number of different contexts. These are listed below in the order in
which they are searched.
the event properties
The event has Properties that can be set. These properties are specific to this event only.
the thread properties
The Properties that are set on the current thread. These properties are shared by all events logged on this thread.
the global properties
The Properties that are set globally. These properties are shared by all the threads in the AppDomain.
r Equivalent to timestamp
stacktrace Used to output the stack trace of the logging event. The stack trace level specifier may be enclosed between braces. For
example, %stacktrace{level}. If no stack trace level specifier is given then 1 is assumed.
Output uses the format: type3.MethodCall3 > type2.MethodCall2 > type1.MethodCall1
This pattern is not available for Compact Framework assemblies.
stacktracedetail Used to output the stack trace of the logging event. The stack trace level specifier may be enclosed between braces. For
example, %stacktracedetail{level}. If no stack trace level specifier is given then 1 is assumed.
Output uses the format: type3.MethodCall3(type param,...) > type2.MethodCall2(type param,...) >
type1.MethodCall1(type param,...)
This pattern is not available for Compact Framework assemblies.
t Equivalent to thread
timestamp Used to output the number of milliseconds elapsed since the start of the application until the creation of the logging
event.
thread Used to output the name of the thread that generated the logging event. Uses the thread number if no name is available.
type Used to output the fully qualified type name of the caller issuing the logging request. This conversion specifier can be
optionally followed by precision specifier, that is a decimal constant in brackets.
If a precision specifier is given, then only the corresponding number of right most components of the class name will be
printed. By default the class name is output in fully qualified form.
For example, for the class name "log4net.Layout.PatternLayout", the pattern %type{1} will output "PatternLayout".
WARNING Generating the caller class information is slow. Thus, its use should be avoided unless execution speed is
not an issue.
See the note below on the availability of caller location information.
u Equivalent to identity
username Used to output the WindowsIdentity for the currently active user.
WARNING Generating caller WindowsIdentity information is extremely slow. Its use should be avoided unless execution
speed is not an issue.
utcdate Used to output the date of the logging event in universal time. The date conversion specifier may be followed by a date
format specifier enclosed between braces. For example, %utcdate{HH:mm:ss,fff} or %utcdate{dd MMM yyyy
HH:mm:ss,fff}. If no date format specifier is given then ISO8601 format is assumed (Iso8601DateFormatter).
The date format specifier admits the same syntax as the time pattern string of the ToString.
For better results it is recommended to use the log4net date formatters. These can be specified using one of the strings
"ABSOLUTE", "DATE" and "ISO8601" for specifying AbsoluteTimeDateFormatter, DateTimeDateFormatter and
respectively Iso8601DateFormatter. For example, %utcdate{ISO8601} or %utcdate{ABSOLUTE}.
These dedicated date formatters perform significantly better than ToString.
w Equivalent to username
x Equivalent to ndc
X Equivalent to mdc
% The sequence %% outputs a single percent sign.
Apache Tomcat (Ver 7+)
The Tomcat server can be configured to use different types of logging systems. The server has a default logging configuration and can be
configured to use log4j. Tomcat can also create access logs based on the Access Log Valve.
Tagging
All Tomcat/Catalina logs are tagged by logtype - tomcat
In addition, the following log types must be assigned for the Tomcat App to be deployed:
Catalina out log or Console out log will be tagged by logtype - out
Catalina Servlet or webapp log will be tagged by logtype - servlet
Access logs will be tagged by logtype - access
The default logging configuration can be found in the logging.properties file under the conf directory (tomcat/conf, or on *nix
/etc/tomcat../conf/). Usually the access log will be defined in the server.xml file under the conf directory.
Default Log structure and configuration
For log types out and servlet, the logs are named catalina.out by default and are located under the logs directory. Use the following
XpoLog pattern for those logs:
{date:Date,dd-MMM-yyyy HH:mm:ss.SSS} {priority:Priority,ftype=severity,ALL;FINEST;FINER;FINE;INFO;CONFIG;WARNING;SEVERE}
[{text:thread,ftype=thread}] {text:Source,ftype=source} {string:Message,ftype=message}
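A typical catalina.out line that this pattern is meant to parse looks like the following (the timestamp, thread, source, and message values are illustrative):

```
12-Mar-2024 14:03:21.123 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1234 ms
```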
Custom Logging
If the Tomcat server is configured to use external logging with log4j or another java.util.logging framework, then use the XpoLog pattern wizard
and definition to configure the log pattern correctly for the App to work.
Note that if you are using the log4j wizard, you will need to set up the log sources and manually apply the Tomcat/Catalina tags on them for the
App to work correctly.
References
Tomcat 7
https://tomcat.apache.org/tomcat-7.0-doc/logging.html
Access Logs: https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Access_Logging
Tomcat 8
Logging: https://tomcat.apache.org/tomcat-8.0-doc/logging.html
Access Logs: https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Access_Logging
Tomcat 9
https://tomcat.apache.org/tomcat-9.0-doc/logging.html
Access Logs: https://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#Access_Logging
Log4J
If the server is using the log4j library for logging, please follow the steps documented in adding logs from log4j 1.2 or log4j 2.*.
Tomcat Access Logs Configuration
1. Add Log Data in XpoLog. When adding a log to XpoLog, you can now select the Log Type (logtype) for Apache Tomcat Access with the
following logtypes:
a. tomcat
i. in addition, select the log type - access
Tomcat access logs are created with the AccessLogValve or with the ExtendedAccessLogValve implementation.
For the configuration, look into server.xml (on Linux "/etc/tomcat/conf/server.xml") or other webapp
configuration files and search for the AccessLogValve entry.
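Such an entry typically looks like the following sketch; the directory, prefix, and suffix values are examples and may differ in your installation:

```xml
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />
```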
The pattern field may also be defined as below:
The shorthand pattern pattern="common" corresponds to the Common Log Format defined by %h %l %u %t "%r" %s %b
The shorthand pattern pattern="combined" appends the values of the Referer and User-Agent headers, each in double quotes, to the
common pattern.
In XpoLog such pattern (combined) will be translated into:
{ip:Client IP,ftype=remoteip;type=;,} {string:Remote Log Name,ftype=remotelog;,} {string:Remote User,ftype=remoteuser;,}
[{date:Date,locale=en,dd/MMM/yyyy:HH:mm:ss z}] "{choice:Method,ftype=reqmethod;,GET;POST}
{string:URL,ftype=requrl;,}{block,start,emptiness=true}?{string:Query,ftype=querystring;,}{block,end,emptiness=true}
{string:reqprotocol,ftype=reqprotocol;,}" {number:Status,ftype=respstatus;,} {number:Bytes Sent,ftype=bytesent;,}
"{string:Referer,ftype=referer;,}" "{string:User Agent,ftype=useragent;,}"{eoe}
In XpoLog such pattern (common) will be translated into:
{ip:Client IP,ftype=remoteip;type=;,} {string:Remote Log Name,ftype=remotelog;,} {string:Remote User,ftype=remoteuser;,}
[{date:Date,locale=en,dd/MMM/yyyy:HH:mm:ss z}] "{choice:Method,ftype=reqmethod;,GET;POST}
{string:URL,ftype=requrl;,}{block,start,emptiness=true}?{string:Query,ftype=querystring;,}{block,end,emptiness=true}
{string:reqprotocol,ftype=reqprotocol;,}" {number:Status,ftype=respstatus;,} {number:Bytes Sent,ftype=bytesent;,}{eoe}
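To illustrate the field layout, the following Python sketch (not XpoLog syntax) parses one combined-format access log line with a regular expression equivalent to the translation above; the sample line is invented:

```python
import re

# One named group per field of the "combined" access log format
COMBINED_RE = re.compile(
    r'^(?P<ip>\S+) (?P<logname>\S+) (?P<user>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<bytes>\S+) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"$'
)

# Invented sample line in combined format
line = ('10.0.0.5 - alice [12/Mar/2024:14:03:21 +0000] '
        '"GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"')
m = COMBINED_RE.match(line)
print(m.group("status"), m.group("url"))  # 200 /index.html
```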
XpoLog Pattern Wizard
When configuring access logs for Tomcat in the XpoLog pattern wizard, paste the pattern directive value into the wizard in order to generate the
correct XpoLog pattern. For our example, you will need to paste: %h %l %u %t "%r" %s %b
Note: If the pattern value is common or combined, simply paste it into the wizard and XpoLog will build the right pattern as well.
Apache Tomcat Access Log Format Conversion Table, both for AccessLogValve and for ExtendedAccessLogValve
logtype should be set to: tomcat, access
Format String Description XpoLog Pattern XpoLog ftype
%a Remote IP-address {ip:RemoteIP,ftype=remoteip} remoteip
%A Local IP-address {ip:LocalIP,ftype=localip} localip
%B Size of response in bytes, excluding HTTP headers. {number:BytesSent,ftype=bytesent} bytesent
%b Bytes sent, excluding HTTP headers, or ''-'' if zero {text:BytesSent,ftype=bytesent} bytesent
%{Foobar}C The contents of cookie Foobar in the request sent to the {string:Cookie_<FOOBAR>}
server. Only version 0 cookies are fully supported.
Replace <FOOBAR> with the cookie name
%D The time taken to serve the request, in microseconds. {number:ResponseTimeMicroSecs,ftype=responsetimemicro} responsetimemicro
%F Time taken to commit the response, in millis {number:ResponseTimeMilliSecs,ftype=responsetimemilli} responsetimemilli
%h Remote host name (or IP address if enableLookups for the {text:Remotehost,ftype=remotehost} remotehost
connector is false)
%H The request protocol {text:RequestProtocol,ftype=reqprotocol} reqprotocol
%{Foobar}i The contents of Foobar: header line(s) in the request sent {text:}
to the server (see https://en.wikipedia.org/wiki/List_of_HTTP_header_fields
for the different headers). Changes made by other modules (e.g. mod_headers)
affect this. If you're interested in what the request header
was prior to when most modules would have modified it,
use mod_setenvif to copy the header into an
internal environment variable and log that value with the %{VARNAME}e described above.
%{Referer}i Referer {text:Referer,ftype=referer} referer
%{User-agent}i User-agent {text:User-agent,ftype=useragent} useragent
%{X-Forwarded-For}i X-Forwarded-For {text:X-Forwarded-For,ftype=forwardforip} OR forwardforip
{ip:X-Forwarded-For,ftype=forwardforip}
%I Current request thread name (can compare later with {text:RequestThread,ftype=thread} thread
stacktraces)
%l Remote logical username from identd (always returns '-') {text:logicalname,ftype=logicalname} logicalname
%m The request method {text:RequestMethod,ftype=reqmethod} reqmethod
%{Foobar}o write value of outgoing header with name xxx {string:}
%p The canonical local port of the server serving the request {number:ServerPort,ftype=serverport} serverport
%{format}p The canonical local port of the server serving the request {number:ServerPort,ftype=serverport} serverport
or the server's actual port or the client's actual port. Valid
formats are canonical, local, or remote. {number:LocalServerPort,ftype=localserverport} localserverport
%{canonical}p {number:RemotePort,ftype=remoteport} remoteport
%{local}p
%{remote}p
%q The query string (prepended with a ? if a query string {text:QueryString,ftype=querystring} querystring
exists, otherwise an empty string)
Alternatively, a regexp can be used to build a list of parameters as columns.
%r First line of the request (method and request URI) {text:FirstLine,ftype=reqfirstline} reqfirstline
(TBD - might be parsed to multiple values and types)
%s Status. For requests that got internally redirected, this is {number:ResponseStatus,ftype=respstatus} respstatus
the status of the *original* request --- %>s for the last.
%S User session ID {text:UserSessionId,ftype=sessionid} sessionid
%t Time the request was received (standard english format) {date:Date,locale=en,dd/MMM/yyyy:HH:mm:ss z}
%{format}t The time, in the form given by format, which should be in {date:Date,locale=en,dd/MMM/yyyy:HH:mm:ss z}
an extended strftime(3) format (potentially localized). If
the format starts with begin: (default) the time is taken at
the beginning of the request processing. If it starts with
end: it is the time when the log entry gets written, close to
the end of the request processing. In addition to the
formats supported by strftime(3), the following format
tokens are supported:
sec number of seconds since the Epoch
msec number of milliseconds since the Epoch
usec number of microseconds since the Epoch
msec_frac millisecond fraction
usec_frac microsecond fraction
These tokens can not be combined with each other or
strftime(3) formatting in the same format string. You can
use multiple %{format}t tokens instead.
The extended strftime(3) tokens are available in 2.2.30
and later.
%T The time taken to serve the request, in seconds. {number:ResponseTimeSecs,ftype=processrequestsec} processrequestsec
%u Remote user that was authenticated (if any), else '-' {text:User,ftype=remoteuser} remoteuser
(from auth; may be bogus if return status (%s) is 401)
%U The URL path requested, not including any query string. {text:RequestURL,ftype=requrl} requrl
%v Local server name {text:ServerName,ftype=servername} servername
The ExtendedAccessLogValve conversion table below:
Format Description XpoLog Pattern XpoLog ftype
String
bytes Bytes sent, excluding HTTP headers, or '-' if zero {text:BytesSent,ftype=bytesent} bytesent
c-dns Remote host name (or IP address if enableLookups {ip:RemoteIP,ftype=remoteip} remoteip
for the connector is false)
c-ip Remote IP address {ip:RemoteIP,ftype=remoteip} remoteip
cs-method Request method (GET, POST, etc.) {text:RequestMethod,ftype=reqmethod} reqmethod
cs-uri Request URI {text:FirstLine,ftype=reqfirstline} reqfirstline
(TBD - might be parsed to multiple values and types)
cs-uri-query Query string (prepended with a '?' if it exists) {text:QueryString,ftype=querystring} querystring
Alternatively, a regexp can be used to build a list of parameters as columns.
cs-uri-stem Requested URL path {text:RequestURL,ftype=requrl} requrl
The URL path requested, not including any query string.
date The date in yyyy-mm-dd format for GMT {date:Date,locale=en,yyyy-MM-dd} TBD - time and date in separate
fields.
s-dns Local host name {text:ServerName,ftype=servername} servername
s-ip Local IP address {ip:LocalIP,ftype=localip} localip
sc-status HTTP status code of the response {number:ResponseStatus,ftype=respstatus} respstatus
For requests that got internally redirected, this is the status of the
*original* request --- %>s for the last.
time Time the request was served in HH:mm:ss {date:Date,locale=en,HH:mm:ss} TBD - time and date in separate
format for GMT fields.
time-taken Time (in seconds as floating point) taken to serve {number:ResponseTimeSecs,ftype=processrequestsec} processrequestsec
the request
x-threadname Current request thread name (can compare later {text:RequestThread,ftype=thread} thread
with stacktraces)
Linux
Background
The Linux Servers logs analysis App automatically collects, reads, parses, analyzes, and reports on all machine-generated log data of the server and
presents a comprehensive set of graphs and reports to analyze machine-generated data. Use a predefined set of dashboards and gadgets to
visualize and address the system software, code written, and infrastructure during development, testing, and production. This Linux logs analysis
App helps measure, troubleshoot, and optimize your server's integrity, stability, and quality with its several visualization and investigation
dashboards.
Steps:
1. The Linux App runs on the standard messages/syslog, auth/secure, mail, kern, and cron logs.
When adding/editing the logs in XpoLog, it is mandatory to apply the correct log type(s) to each of the logs:
a. linux - all logs that the application will analyze must have linux as a log type
b. linux-messages/linux-syslog - only the messages/syslog logs must also be configured to have linux-messages/linux-syslog
as a log type
c. linux-auth/linux-secure - only the auth/secure logs must also be configured to have linux-auth/linux-secure as a log type
d. linux-cron - only the cron log must also be configured to have linux-cron as a log type
e. linux-mail - only the mail log must also be configured to have linux-mail as a log type
f. linux-kernel - only the kern log must also be configured to have linux-kernel as a log type
2. Once the required information is set, click next on each log and edit the log pattern. This step is crucial to the accuracy and deployment of
the Linux App. Use the following patterns for each of the logs:
a. Linux messages/syslog log:
{date:Date,MMM dd HH:mm:ss} {text:source,ftype=source} {text:process
name,ftype=process;,}{block,start,emptiness=true}[{text:pid,ftype=pid}]{block,end,emptiness=true}:
{text:Message,ftype=message;,}{regexp:User,ftype=user;refName=message,[passed|failed] for (.*) from}
b. Linux auth/secure log:
{date:Date,MMM dd HH:mm:ss} {text:SourceIP,ftype=source}
{text:Process,ftype=process}{block,start,emptiness=true}[{text:pid,ftype=pid}]{block,end,emptiness=true}:
{text:Message,ftype=message}
c. Linux cron log:
{date:Date,MMM dd HH:mm:ss} {text:Server,ftype=server}
{text:Process,ftype=process}{block,start,emptiness=true}[{text:pid,ftype=pid}]{block,end,emptiness=true}:
{text:Message,ftype=message}
d. Linux mail log:
{date:Date,MMM dd HH:mm:ss} {text:source} {text:process
name,ftype=process;,}{block,start,emptiness=true}[{number:process id}]{block,end,emptiness=true}:
{regexp:session,refName=Message;ftype=session,^(\w+):}{regexp:From,refName=Message;ftype=from,\s+from=([^,]+)}
{regexp:To,refName=Message;ftype=to,\s+to=([^,]+)}{text:Message,ftype=message;,}
e. Linux kernel log:
{date:Date,MMM dd HH:mm:ss} {text:source,ftype=source} {text:process name,ftype=process;,}:
[{text:time-taken,ftype=time-taken;,}] {text:Message,ftype=message}
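To make the field extraction concrete, the following Python sketch (not XpoLog syntax) applies a regular expression equivalent to the messages/syslog pattern in step 2a; the sample line and host names are invented:

```python
import re

# Illustrative equivalent of the messages/syslog pattern above:
# timestamp, source host, process name, optional [pid], then the message.
SYSLOG_RE = re.compile(
    r"^(?P<date>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<source>\S+) "
    r"(?P<process>[^\[:]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

# Invented sample line in the standard syslog layout
line = "Mar 12 14:03:21 web01 sshd[1234]: Failed password for alice from 10.0.0.5 port 51432 ssh2"
m = SYSLOG_RE.match(line)
fields = m.groupdict()

# Secondary extraction, mirroring the {regexp:User,...} field above
user = re.search(r"(?:passed|failed) .*?for (\S+) from", fields["message"], re.IGNORECASE)
print(fields["process"], fields["pid"], user.group(1))  # sshd 1234 alice
```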
Microsoft IIS (Ver 6)
Background
The Microsoft IIS Server logs analysis App automatically collects, reads, parses, analyzes, and reports on all web machine-generated log data of the
server and presents a comprehensive set of graphs and reports to analyze machine-generated data. Use a predefined set of dashboards and
gadgets to visualize and address the system software, code written, and infrastructure during development, testing, and production. This
Microsoft IIS logs analysis App helps measure, troubleshoot, and optimize your server's integrity, stability, and quality with visualization and
investigation dashboards.
Steps
1. Add Log Data in XpoLog. When adding a log to XpoLog, you can now select the Log Type (logtype) for Microsoft IIS with the
following logtypes:
a. iis
i. in addition to iis, you will also need to select the log type - access or error
2. Once all required information is set, click next and edit the log pattern. This step is crucial to the accuracy and deployment of the Analytic
App. Use the following conversion table in order to build the XpoLog pattern out of the access log format.
Example
In the header of IIS access logs, or in the IIS configuration file, locate the format specification strings that configure the logged fields, for example:
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent)
cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
The following sequence is the log structure definition: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port
cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
In XpoLog such pattern will be translated into:
{date:Date,yyyy-MM-dd HH:mm:ss} {text:SiteName,ftype=sitename} {text:ServerName,ftype=servername} {geoip:ServerIP,ftype=
localip} {text:RequestMethod,ftype=reqmethod} {text:RequestURL,ftype=requrl} {text:QueryString,ftype=querystring}
{number:ServerPort,ftype=serverport} {text:username,ftype=remoteuser} {geoip:ClientIP,ftype=remoteip}
{text:ProtocolVer,ftype=protocolversion} {text:User-agent,ftype=useragent} {text:Cookie,ftype=cookie} {text:Referer,ftype=referer}
{text:HostName,ftype=hostname} {number:ResponseStatus,ftype=respstatus} {number:SubStatus,ftype=ressubstatus}
{text:Win32Status,ftype=win32status} {number:BytesSent,ftype=bytesent} {number:BytesReceived,ftype=bytesreceived}
{number:ResponseTimeSecs,ftype=processrequestmilli}{eoe}
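Because W3C-format log lines are whitespace-delimited in the order declared by the #Fields header, the mapping can be illustrated with a short Python sketch; the sample log line is invented:

```python
# Illustrative: a W3C-format IIS line is whitespace-delimited in the order
# declared by the #Fields header, so zipping header and line recovers fields.
fields = ("date time s-sitename s-computername s-ip cs-method cs-uri-stem "
          "cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) "
          "cs(Cookie) cs(Referer) cs-host sc-status sc-substatus "
          "sc-win32-status sc-bytes cs-bytes time-taken").split()

# Invented sample entry; '-' marks empty fields, as in real IIS logs
line = ("2024-03-12 14:03:21 W3SVC1 WEB01 192.168.1.10 GET /default.htm - 80 - "
        "10.0.0.5 HTTP/1.1 Mozilla/5.0 - - www.example.com 200 0 0 5120 340 15")

record = dict(zip(fields, line.split()))
print(record["sc-status"], record["cs-uri-stem"])  # 200 /default.htm
```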
For more information, see the format Conversion Table below.
logtype should be set to: iis, access
Format String Appears as Description XpoLog Pattern ftype
Date + Time date time The date on which the activity occurred. {date,yyyy-MM-dd HH:mm:ss}
The time, in coordinated universal time
(UTC), at which the activity occurred.
Client IP c-ip The IP address of the client that made the {geoip:ClientIP,ftype=remoteip} remoteip
Address request.
User Name cs-username The name of the authenticated user who {text:username,ftype=remoteuser} remoteuser
accessed your server. Anonymous users
are indicated by a hyphen.
Service Name s-sitename The Internet service name and instance {text:SiteName,ftype=sitename} sitename
and Instance number that was running on the client.
Number
Server Name s-computername The name of the server on which the log {text:ServerName,ftype=servername} servername
file entry was generated.
Server IP s-ip The IP address of the server on which the {ip:ServerIP,ftype=localip} localip
Address log file entry was generated.
Server Port s-port The server port number that is configured {number:ServerPort,ftype=serverport} serverport
for the service.
Method cs-method The requested action, for example, a GET {text:RequestMethod,ftype=reqmethod} reqmethod
method.
URI Stem cs-uri-stem The target of the action, for example, {text:RequestURL,ftype=requrl} requrl
Default.htm.
URI Query cs-uri-query The query, if any, that the client was trying {text:QueryString,ftype=querystring} querystring
to perform. A Universal Resource
Identifier (URI) query is necessary only for
dynamic pages.
HTTP Status sc-status The HTTP status code. {number:ResponseStatus,ftype=respstatus} respstatus
Win32 Status sc-win32-status The Windows status code. {text:Win32Status,ftype=win32status} win32status
Bytes Sent sc-bytes The number of bytes that the server sent. {number:BytesSent,ftype=bytesent} bytesent
Bytes cs-bytes The number of bytes that the server {number:BytesReceived,ftype=bytesreceived} bytesreceived
Received received.
Time Taken time-taken The length of time that the action took, in {number:ResponseTimeSecs,ftype=processrequestmilli} processrequestmilli
milliseconds.
Protocol cs-version The protocol version —HTTP or FTP {text:ProtocolVer,ftype=protocolversion} protocolversion
Version —that the client used.
Host cs-host The host header name, if any. {text:HostName,ftype=hostname} hostname
User Agent cs(User-Agent) The browser type that the client used. {text:User-agent,ftype=useragent} useragent
Cookie cs(Cookie) The content of the cookie sent or {text:Cookie,ftype=cookie} cookie
received, if any.
Referrer cs(Referer) The site that the user last visited. This site {text:Referer,ftype=referer} referer
provided a link to the current site.
Protocol sc-substatus The substatus error code. {number:SubStatus,ftype=ressubstatus} ressubstatus
Substatus
Microsoft IIS (Ver 7)
Background
The Microsoft IIS Server logs analysis App automatically collects, reads, parses, analyzes, and reports on all web machine-generated log data of the
server and presents a comprehensive set of graphs and reports to analyze machine-generated data. Use a predefined set of dashboards and
gadgets to visualize and address the system software, code written, and infrastructure during development, testing, and production. This
Microsoft IIS logs analysis App helps measure, troubleshoot, and optimize your server's integrity, stability, and quality with visualization and
investigation dashboards.
Steps
1. Add Log Data in XpoLog. When adding a log to XpoLog, you can now select the Log Type (logtype) for Microsoft IIS with the
following logtypes:
a. iis
i. in addition to iis, you will also need to select the log type - access or error
2. Once all required information is set, click next and edit the log pattern. This step is crucial to the accuracy and deployment of the Analytic
App. Use the following conversion table in order to build the XpoLog pattern out of the access log format.
Example
In the header of IIS access logs, or in the IIS configuration file, locate the format specification strings that configure the logged fields, for example:
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent)
cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
The following sequence is the log structure definition: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port
cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
In XpoLog such pattern will be translated into:
{date:Date,yyyy-MM-dd HH:mm:ss} {text:SiteName,ftype=sitename} {text:ServerName,ftype=servername} {geoip:ServerIP,ftype=
localip} {text:RequestMethod,ftype=reqmethod} {text:RequestURL,ftype=requrl} {text:QueryString,ftype=querystring}
{number:ServerPort,ftype=serverport} {text:username,ftype=remoteuser} {geoip:ClientIP,ftype=remoteip}
{text:ProtocolVer,ftype=protocolversion} {text:User-agent,ftype=useragent} {text:Cookie,ftype=cookie} {text:Referer,ftype=referer}
{text:HostName,ftype=hostname} {number:ResponseStatus,ftype=respstatus} {number:SubStatus,ftype=ressubstatus}
{text:Win32Status,ftype=win32status} {number:BytesSent,ftype=bytesent} {number:BytesReceived,ftype=bytesreceived}
{number:ResponseTimeSecs,ftype=processrequestmilli}{eoe}
For more information see the format conversion table below.
logtype should be set to: iis, access
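Conceptually, the #Fields header drives the parsing: each name in the header labels the corresponding space-separated column in every data line. A minimal, generic W3C-extended-log reader can be sketched in Python (the sample lines below are invented for illustration; this is not XpoLog's implementation):

```python
# Minimal sketch of W3C extended log parsing, as used by IIS access logs.
# The "#Fields:" directive names each space-separated column in the data lines.

def parse_w3c(lines):
    """Yield one dict per log record, keyed by the #Fields header names."""
    fields = []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # e.g. ['date', 'time', 's-ip', ...]
        elif line.startswith("#") or not line.strip():
            continue                    # skip other directives and blank lines
        else:
            yield dict(zip(fields, line.split()))

sample = [
    "#Fields: date time s-ip cs-method cs-uri-stem sc-status time-taken",
    "2016-01-01 12:00:00 10.0.0.5 GET /Default.htm 200 15",
]
for record in parse_w3c(sample):
    print(record["cs-method"], record["sc-status"])
```

Note that a real IIS log may contain quoted or hyphen-valued fields; the point here is only the header-to-column mapping that the conversion table below relies on.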
Format String | Appears as | Description | XpoLog Pattern | ftype
Date + Time | date time | The date on which the activity occurred, and the time, in Coordinated Universal Time (UTC), at which the activity occurred. | {date:Date,yyyy-MM-dd HH:mm:ss} |
Client IP Address | c-ip | The IP address of the client that made the request. | {ip:ClientIP,ftype=remoteip} | remoteip
User Name | cs-username | The name of the authenticated user who accessed your server. Anonymous users are indicated by a hyphen. | {text:username,ftype=remoteuser} | remoteuser
Service Name and Instance Number | s-sitename | The Internet service name and instance number that was running on the client. | {text:SiteName,ftype=sitename} | sitename
Server Name | s-computername | The name of the server on which the log file entry was generated. | {string:ServerName,ftype=servername} | servername
Server IP Address | s-ip | The IP address of the server on which the log file entry was generated. | {ip:ServerIP,ftype=localip} | localip
Server Port | s-port | The server port number that is configured for the service. | {number:ServerPort,ftype=serverport} | serverport
Method | cs-method | The requested action, for example, a GET method. | {text:RequestMethod,ftype=reqmethod} | reqmethod
URI Stem | cs-uri-stem | The target of the action, for example, Default.htm. | {text:RequestURL,ftype=requrl} | requrl
URI Query | cs-uri-query | The query, if any, that the client was trying to perform. A Universal Resource Identifier (URI) query is necessary only for dynamic pages. | {text:QueryString,ftype=querystring} | querystring
HTTP Status | sc-status | The HTTP status code. | {number:ResponseStatus,ftype=respstatus} | respstatus
Win32 Status | sc-win32-status | The Windows status code. | {text:Win32Status,ftype=win32status} | win32status
Bytes Sent | sc-bytes | The number of bytes that the server sent. | {number:BytesSent,ftype=bytesent} | bytesent
Bytes Received | cs-bytes | The number of bytes that the server received. | {number:BytesReceived,ftype=bytesreceived} | bytesreceived
Time Taken | time-taken | The length of time that the action took, in milliseconds. | {number:ResponseTimeSecs,ftype=processrequestmilli} | processrequestmilli
Protocol Version | cs-version | The protocol version (HTTP or FTP) that the client used. | {text:ProtocolVer,ftype=protocolversion} | protocolversion
Host | cs-host | The host header name, if any. | {text:HostName,ftype=hostname} | hostname
User Agent | cs(User-Agent) | The browser type that the client used. | {text:User-agent,ftype=useragent} | useragent
Cookie | cs(Cookie) | The content of the cookie sent or received, if any. | {string:Cookie,ftype=cookie} | cookie
Referrer | cs(Referer) | The site that the user last visited. This site provided a link to the current site. | {text:Referer,ftype=referer} | referer
Protocol Substatus | sc-substatus | The substatus error code. | {number:SubStatus,ftype=ressubstatus} | ressubstatus
Microsoft Windows
Background
The Microsoft Windows Servers logs analysis App automatically collects, reads, parses, analyzes, and reports all machine generated log data of the server and presents a comprehensive set of graphs and reports to analyze that data. Use a predefined set of dashboards and gadgets to visualize and address the system software, code, and infrastructure during development, testing, and production. This Windows logs analysis App helps measure, troubleshoot, and optimize your servers' integrity, stability, and quality with several visualization and investigation dashboards.
Steps:
1. The Microsoft Windows App runs on the standard Application, Security, and System event logs (*.evtx).
When adding/editing the logs in XpoLog it is mandatory to apply the correct log type(s) to each of the logs:
a. windows - all logs that the application will analyze must have windows as a log type
b. windows-application - only the Application log must also be configured to have windows-application as a log type
c. windows-security - only the Security log must also be configured to have windows-security as a log type
d. windows-system - only the System log must also be configured to have windows-system as a log type
2. Once the required information is set, click next on each log and edit the log pattern. This step is crucial to the accuracy and deployment of the Microsoft Windows App. Use the following patterns for each of the logs:
a. Windows Application event log:
{priority:Type,ftype=type,Error;Warning;Information;Success;Audit Failure;Audit
Success}*;*{timestamp:Date,MM/dd/yyyy HH:mm:ss}{regexp:Account Name,refName=Description;ftype=account
name,Account Name:\s+(\S+).*}{regexp:Account Domain,refName=Description;ftype=domain,Account
Domain:\s+(\S+).*}*;*{text:Source,ftype=source}*;*{text:Category,ftype=category}*;*{number:Event,ftype=event}*;*{text:
User,ftype=user}*;*{text:Computer,ftype=computer}*;*{string:Description,ftype=description}
b. Windows Security event log:
{priority:Type,ftype=type,Error;Warning;Information;Success;Audit Failure;Audit
Success}*;*{timestamp:Date,MM/dd/yyyy HH:mm:ss}{regexp:Account Name,refName=Description;ftype=account
name,Account Name:\s+(\S+).*}{regexp:Account Domain,refName=Description;ftype=domain,Account
Domain:\s+(\S+).*}*;*{text:Source,ftype=source}*;*{text:Category,ftype=category}*;*{number:Event,ftype=event}{map:Ev
ent Description,ftype=event
description;refIndex=6,file:knowledge/repository/system/win/map/winEventsMap.prop}{map:Category
Description,ftype=category
description;refIndex=6,file:knowledge/repository/system/win/map/winEventsCategoryMap.prop}{map:Sub
Category,ftype=sub
category;refIndex=6,file:knowledge/repository/system/win/map/winEventsSubCategoryMap.prop}*;*{text:User,ftype=us
er}{regexp:Logon ID,refName=description;ftype=logon id,Logon
ID:\s+(\S+).*}*;*{text:Computer,ftype=computer}*;*{string:Description,ftype=description}
c. Windows System event log:
{priority:Type,ftype=type,Error;Warning;Information;Success;Audit Failure;Audit
Success}*;*{timestamp:Date,MM/dd/yyyy HH:mm:ss}{regexp:Account Name,refName=Description;ftype=account
name,Account Name:\s+(\S+).*}{regexp:Account Domain,refName=Description;ftype=domain,Account
Domain:\s+(\S+).*}*;*{text:Source,ftype=source}*;*{text:Category,ftype=category}*;*{number:Event,ftype=event}*;*{text:
User,ftype=user}*;*{text:Computer,ftype=computer}*;*{string:Description,ftype=description}
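The three patterns above all split a semicolon-delimited event record into named fields (the *;* tokens match the delimiters). The idea can be sketched in Python; the sample record, field order, and delimiter handling here are assumptions for illustration, not an actual XpoLog export:

```python
# Sketch: split a semicolon-delimited Windows event record into the fields
# named by the Application-log pattern above (Type, Date, Source, Category,
# Event, User, Computer, Description). The field order is an assumption.
from datetime import datetime

FIELDS = ["Type", "Date", "Source", "Category", "Event", "User", "Computer", "Description"]

def parse_event(line):
    # Limit the split so a Description containing ';' stays in one piece.
    values = line.split(";", len(FIELDS) - 1)
    record = dict(zip(FIELDS, values))
    # MM/dd/yyyy HH:mm:ss, as in the timestamp format of the patterns above.
    record["Date"] = datetime.strptime(record["Date"], "%m/%d/%Y %H:%M:%S")
    record["Event"] = int(record["Event"])
    return record

rec = parse_event(
    "Error;01/15/2016 08:30:00;Service Control Manager;None;7000;N/A;HOST01;"
    "The service failed to start"
)
print(rec["Type"], rec["Event"])
```

The Security-log pattern additionally maps the event number to descriptions via the winEventsMap.prop files, which would correspond to a dictionary lookup on record["Event"] in this sketch.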
NGINX (Ver 1.10+)
Background
The NGINX server logs analysis App automatically collects, reads, parses, analyzes, and reports all machine generated log data of the server and presents a comprehensive set of graphs and reports to analyze that data. Use a predefined set of dashboards and gadgets to visualize and address the system software, code, and infrastructure during development, testing, and production. This NGINX server logs analysis App helps you measure, troubleshoot, and optimize your servers' integrity, stability, and quality with visualization and investigation dashboards.
Steps
1. Add log data in XpoLog. When adding a log to XpoLog you can now select the Log Type (logtype). For NGINX use the following logtypes:
a. nginx
i. in addition to nginx, also select the log type - access or error
ii. see the error log definition at the bottom of this page
2. Once all required information is set, click next and edit the log pattern. This step is crucial to the accuracy and deployment of the Analytic App. Use the following conversion table to build the XpoLog pattern.
Example
In the NGINX configuration file, usually nginx.conf by default, located under the conf/ directory (Linux "NGINX ROOT DIR/conf/nginx.conf") search
for the ______ directive:
Information from NGINX site:
"NGINX writes information about client requests in the access log right after the request is processed. By default, the access log is located at logs/
access.log, and the information is written to the log in the predefined combined format. To override the default setting, use the log_format direct
ive to change the format of logged messages, as well as the access_log directive to specify the location of the log and its format. The log format
is defined using variables.
The following examples define the log format that extends the predefined combined format with the value indicating the ratio of gzip compression
of the response. The format is then applied to a virtual server that enables compression.
access_log path [format [buffer=size] [gzip[=level]] [flush=time] [if=condition]];
access_log off;
Default:
access_log logs/access.log combined;
log_format combined '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
In XpoLog such pattern will be translated into:
{geoip:RemoteIP,ftype=remoteip} - {text:User,ftype=remoteuser} [{date:Date,dd/MMM/yyyy:HH:mm:ss z}]
"{text:RequestMethod,ftype=reqmethod} {text:RequestURL,ftype=requrl} {text:RequestProtocol,ftype=reqprotocol}"
{number:ResponseStatus,ftype=respstatus} {number:BytesSent,ftype=bytesent} "{text:Referer,ftype=referer}"
"{text:User-agent,ftype=useragent}"{eoe}
For more information see the conversion table below.
NGINX Access Log Format Conversion Table
logtype should be set to: nginx, access
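The translation above can be exercised with an ordinary regular expression whose groups mirror the fields of the predefined combined format. A sketch in Python (the sample line is invented; this is not XpoLog's parser):

```python
import re

# Regex mirroring the NGINX predefined "combined" log format:
# $remote_addr - $remote_user [$time_local] "$request" $status
# $body_bytes_sent "$http_referer" "$http_user_agent"
COMBINED = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('192.0.2.10 - alice [01/Jan/2016:12:00:00 +0000] '
        '"GET /index.html HTTP/1.1" 200 512 "-" "curl/7.47.0"')
m = COMBINED.match(line)
print(m.group("status"), m.group("url"))
```

The named groups correspond one-to-one with the XpoLog pattern fields above: remote_addr to RemoteIP, remote_user to User, status to ResponseStatus, and so on.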
Field | Description | XpoLog Pattern
$arg_name | argument name in the request line |
$args | arguments in the request line | {text:QueryString,ftype=querystring}
$binary_remote_addr | client address in a binary form, value's length is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses |
$body_bytes_sent | number of bytes sent to a client, not counting the response header; this variable is compatible with the "%B" parameter of the mod_log_config Apache module | {number:BytesSent,ftype=bytesent}
$bytes_sent | number of bytes sent to a client (1.3.8, 1.2.5) | {number:TotalBytesWHeadersSent,ftype=respbyteswheaders}
$connection | connection serial number (1.3.8, 1.2.5) |
$connection_requests | current number of requests made through a connection (1.3.8, 1.2.5) |
$content_length | "Content-Length" request header field |
$content_type | "Content-Type" request header field |
$cookie_name | the name cookie | {string:CookieName}
$document_root | root or alias directive's value for the current request |
$document_uri | same as $uri | {text:RequestURL,ftype=requrl}
$host | in this order of precedence: host name from the request line, or host name from the "Host" request header field, or the server name matching a request | {text:ServerName,ftype=servername}
$hostname | host name | {text:Remotehost,ftype=remotehost}
$http_name | arbitrary request header field; the last part of a variable name is the field name converted to lower case with dashes replaced by underscores | {text:_,ftype=}
$https | "on" if connection operates in SSL mode, or an empty string otherwise |
$is_args | "?" if a request line has arguments, or an empty string otherwise |
$limit_rate | setting this variable enables response rate limiting; see limit_rate |
$msec | current time in seconds with the milliseconds resolution (1.3.9, 1.2.6) |
$nginx_version | nginx version |
$pid | PID of the worker process | {text:ProcessID,ftype=processid}
$pipe | "p" if request was pipelined, "." otherwise (1.3.12, 1.2.7) |
$proxy_protocol_addr | client address from the PROXY protocol header, or an empty string otherwise (1.5.12); the PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive | {ip:X-Forwarded-For,ftype=forwardforip}
$proxy_protocol_port | client port from the PROXY protocol header, or an empty string otherwise (1.11.0); the PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive |
$query_string | same as $args | {text:QueryString,ftype=querystring}
$realpath_root | an absolute pathname corresponding to the root or alias directive's value for the current request, with all symbolic links resolved to real paths |
$remote_addr | client address | {ip:RemoteIP,ftype=remoteip}
$remote_port | client port | {number:RemotePort,ftype=remoteport}
$remote_user | user name supplied with the Basic authentication | {text:User,ftype=remoteuser}
$request | full original request line | {text:RequestMethod,ftype=reqmethod} {text:RequestURL,ftype=requrl} {text:RequestProtocol,ftype=reqprotocol}
$request_body | request body; the variable's value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives when the request body was read to a memory buffer |
$request_body_file | name of a temporary file with the request body; at the end of processing, the file needs to be removed; to always write the request body to a file, client_body_in_file_only needs to be enabled; when the name of a temporary file is passed in a proxied request or in a request to a FastCGI/uwsgi/SCGI server, passing the request body should be disabled by the proxy_pass_request_body off, fastcgi_pass_request_body off, uwsgi_pass_request_body off, or scgi_pass_request_body off directives, respectively |
$request_completion | "OK" if a request has completed, or an empty string otherwise |
$request_filename | file path for the current request, based on the root or alias directives, and the request URI |
$request_id | unique request identifier generated from 16 random bytes, in hexadecimal (1.11.0) |
$request_length | request length (including request line, header, and request body) (1.3.12, 1.2.7) |
$request_method | request method, usually "GET" or "POST" | {text:RequestMethod,ftype=reqmethod}
$request_time | request processing time in seconds with a milliseconds resolution (1.3.9, 1.2.6); time elapsed since the first bytes were read from the client |
$request_uri | full original request URI (with arguments) | {text:RequestURL,ftype=requrl}
$scheme | request scheme, "http" or "https" | {text:RequestProtocol,ftype=reqprotocol}
$sent_http_name | arbitrary response header field; the last part of a variable name is the field name converted to lower case with dashes replaced by underscores |
$server_addr | an address of the server which accepted a request; computing a value of this variable usually requires one system call; to avoid a system call, the listen directives must specify addresses and use the bind parameter | {ip:LocalIP,ftype=localip}
$server_name | name of the server which accepted a request | {text:ServerName,ftype=servername}
$server_port | port of the server which accepted a request | {number:ServerPort,ftype=serverport}
$server_protocol | request protocol, usually "HTTP/1.0", "HTTP/1.1", or "HTTP/2.0" | {text:RequestProtocol,ftype=reqprotocol}
$status | response status (1.3.2, 1.2.2) | {number:ResponseStatus,ftype=respstatus}
$tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space | information about the client TCP connection; available on systems that support the TCP_INFO socket option |
$time_iso8601 | local time in the ISO 8601 standard format (1.3.12, 1.2.7) |
$time_local | local time in the Common Log Format (1.3.12, 1.2.7) | {date:Date,dd/MMM/yyyy:HH:mm:ss z}
$uri | current URI in request, normalized; the value of $uri may change during request processing, e.g. when doing internal redirects, or when using index files | {text:RequestURL,ftype=requrl}
$http_user_agent | | {text:User-agent,ftype=useragent}
$http_referer | | {text:Referer,ftype=referer}
Error Log
Look for the error_log logs/error.log warn; directive in the nginx configuration file. The error log entries have the following structure:
YYYY/MM/DD HH:MM:SS [LEVEL] PID#TID: *CID MESSAGE
where PID and TID are the logging process and thread IDs, and CID is a number identifying a (probably proxied) connection, probably a counter. The *CID part is optional.
LEVEL is one of: debug, info, notice, warn, error, crit, alert, or emerg.
Default XpoLog Pattern:
{date:Date,yyyy/MM/dd HH:mm:ss} [{priority:Level,ftype=severity,debug;info;notice;warn;error;crit;alert;emerg}]
{text:PID,ftype=processid}#{text:TID,ftype=threadid}: {text:CID,ftype=connectionid} {string:Message,ftype=message}
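The default pattern above corresponds to a regular expression along these lines (a Python sketch; the sample line is invented for illustration):

```python
import re

# Sketch of the nginx error-log layout described above:
# YYYY/MM/DD HH:MM:SS [LEVEL] PID#TID: *CID MESSAGE  (the *CID part is optional)
ERROR_LOG = re.compile(
    r'(?P<date>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) '
    r'\[(?P<level>debug|info|notice|warn|error|crit|alert|emerg)\] '
    r'(?P<pid>\d+)#(?P<tid>\d+): '
    r'(?:\*(?P<cid>\d+) )?'   # connection id is optional
    r'(?P<message>.*)'
)

line = '2016/01/01 12:00:00 [error] 1234#0: *5 open() failed (2: No such file or directory)'
m = ERROR_LOG.match(line)
print(m.group("level"), m.group("cid"))
```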
WebSphere (Ver 6.0+)
Background
The WebSphere Server logs analysis App automatically collects, reads, parses, analyzes, and reports all WebSphere machine generated log data of the server and presents a comprehensive set of graphs and reports to analyze that data. Use a predefined set of dashboards and gadgets to visualize and address the system software, code, and infrastructure during development, testing, and production. This WebSphere logs analysis App helps measure, troubleshoot, and optimize your servers' integrity, stability, and quality with visualization and investigation dashboards.
Steps
1. Add log data in XpoLog. When adding a log to XpoLog you can now set a Log Type (logtype). For WebSphere set the following logtypes for each log:
a. System out - was,was-server,was-systemout
b. System err - was,was-server,was-systemerr
c. Server start - was,was-server,was-server-start
d. Server stop - was,was-server,was-server-stop
e. Native out - was,was-server,was-nativeout
f. Http error - was,was-server,http-error
g. Http access - was,was-server,access,w3c
2. In the WebSphere server configuration file (usually server.xml by default), located under the [SERVER_DIR]/config/.../[SERVER_NAME] directory, search for the following parameters:
a. System out - outputStreamRedirect
b. System err - errorStreamRedirect
c. Server start - outputStreamRedirect
d. Server stop - outputStreamRedirect
e. Native out - ioRedirect
f. Http error - enableErrorLogging
g. Http access - enableAccessLogging
3. Once the required information is set, click next on each log and edit the log pattern. This step is crucial to the accuracy and deployment of the WebSphere App. Use the following patterns for each of the logs:
a. System out - Basic Information - [{date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z}] {text:Thread
ID,charsLength=8;ftype=threadid;,} {text:Short Name,charsLength=13;ftype=shortname;,} {map:Event
Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM
OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN}{block,start,emptiness=true}
{text:Class,ftype=class;stopPattern=^com\\.ibm\\.[\\w\\.]+(\\s);,} {text:Method,ftype=method;,}{block,end,emptiness=true}
{regexp:messagecode,refName=Message;ftype=messagecode,^\\s*([A-Z][A-Z][A-Z][A-Z]\\d\\d\\d\\d[EWI]):}{string:Message,ftype
=message;,}
b. System out - Advanced Information - [{date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z}] {text:Thread
ID,charsLength=8;ftype=threadid;,} {map:Event
Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM
OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN} UOW={text:UOW,ftype=uow;,}
source={text:Source,ftype=source;,}{block,start,emptiness=true} class={text:Class,ftype=class;,}
method={text:Method,ftype=method;,}{block,end,emptiness=true} org={text:Organization,ftype=organization;,}
prod={text:Product,ftype=product;,} component={text:Component,ftype=component;,} thread=[{text:Thread
Name,ftype=thread;,}]{regexp:messagecode,refName=Message;ftype=messagecode,^\\s*([A-Z][A-Z][A-Z][A-Z]\\d\\d\\d\\d[EWI]):}
{string:Message,ftype=message;,}
c. System err - [{date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z}] {text:Thread ID,charsLength=8;ftype=threadid;,} {text:Short
Name,charsLength=13;ftype=shortname;,} {map:Event
Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM
OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN} {string:Message,ftype=message;,}
d. System start - [{date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z}] {text:Thread ID,charsLength=8;ftype=threadid;,} {text:Short
Name,charsLength=13;ftype=shortname;,} {map:Event
Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM
OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN}{block,start,emptiness=true}
{text:Class,ftype=class;stopPattern=^com\\.ibm\\.[\\w\\.]+(\\s);,} {text:Method,ftype=method;,}{block,end,emptiness=true}
{regexp:messagecode,refName=Message;ftype=messagecode,^\\s*([A-Z][A-Z][A-Z][A-Z]\\d\\d\\d\\d[EWI]):}{string:Message,ftype
=message;,}
e. System stop - [{date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z}] {text:Thread ID,charsLength=8;ftype=threadid;,} {text:Short
Name,charsLength=13;ftype=shortname;,} {map:Event
Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM
OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN}{block,start,emptiness=true}
{text:Class,ftype=class;stopPattern=^com\\.ibm\\.[\\w\\.]+(\\s);,} {text:Method,ftype=method;,}{block,end,emptiness=true}
{regexp:messagecode,refName=Message;ftype=messagecode,^\\s*([A-Z][A-Z][A-Z][A-Z]\\d\\d\\d\\d[EWI]):}{string:Message,ftype
=message;,}
f. Native out - [{date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z}] {text:Thread ID,charsLength=8;ftype=threadid;,} {text:Short
Name,charsLength=13;ftype=shortname;,} {map:Event
Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM
OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN}{block,start,emptiness=true}
{text:Class,ftype=class;stopPattern=^com\\.ibm\\.[\\w\\.]+(\\s);,} {text:Method,ftype=method;,}{block,end,emptiness=true}
{regexp:messagecode,refName=Message;ftype=messagecode,^\\s*([A-Z][A-Z][A-Z][A-Z]\\d\\d\\d\\d[EWI]):}{string:Message,ftype
=message;,}
g. Http error - [{date:Date,locale=en,EEE, dd MMM yyyy HH:mm:ss z}]
[{priority:Severity,ftype=severity;,DEBUG;INFO;WARN;ERROR;CRITICAL}] [{geoip:Client
IP,stopPattern=^[\d+:\.]+(:\d+/);ftype=remoteip;type=country:region:city}:{text:Remote Port,ftype=remoteport;,}/{text:Server
Host,stopPattern=^[\d+:\.]+(:\d+\]);ftype=localip;,}:{text:Server Port,ftype=localport;,}] {string:Message,ftype=message;,}
h. Http access - Basic Format - {geoip:Client IP,ftype=remoteip;type=;,} {string:Remote Logical Username,ftype=remoteuser;,}
{string:Remote User,ftype=remoteuser;,} [{date:Date,locale=en,dd/MMM/yyyy:HH:mm:ss z}]
\"{choice:Method,ftype=reqmethod;,GET;POST}
{string:URL,ftype=requrl;,}{block,start,emptiness=true}?{string:Query,ftype=querystring;,}{block,end,emptiness=true}
{string:reqprotocol,ftype=reqprotocol;,}\" {number:Status,ftype=respstatus;,} {number:Bytes Sent,ftype=bytesent;,}{eoe}
i. Http access - Combined Format - {geoip:Client IP,ftype=remoteip;type=;,} {string:Remote Logical
Username,ftype=remoteuser;,} {string:Remote User,ftype=remoteuser;,} [{date:Date,locale=en,dd/MMM/yyyy:HH:mm:ss z}]
\"{choice:Method,ftype=reqmethod;,GET;POST}
{string:URL,ftype=requrl;,}{block,start,emptiness=true}?{string:Query,ftype=querystring;,}{block,end,emptiness=true}
{string:reqprotocol,ftype=reqprotocol;,}\" {number:Status,ftype=respstatus;,} {number:Bytes Sent,ftype=bytesent;,}
\"{string:Referer,ftype=referer;,}\" \"{string:User Agent,ftype=useragent;,}\" \"{string:Cookie,ftype=cookie;,}\"{eoe}
System out Log Format Conversion Table
Format String | Description | XpoLog Pattern | XpoLog ftype
TimeStamp | The timestamp is formatted using the locale of the process where it is formatted. It includes a fully qualified date (for example YYMMDD), 24 hour time with millisecond precision and a time zone. | {date:Date,locale=en,MM/dd/yy HH:mm:ss:SSS z} |
ThreadId | An 8 character hexadecimal value generated from the hash code of the thread that issued the message. | {text:Thread ID,charsLength=8;ftype=threadid;,} | threadid
ShortName | The abbreviated name of the logging component that issued the message or trace event. This is typically the class name for WebSphere Application Server internal components, but can be some other identifier for user applications. | {text:Short Name,charsLength=13;ftype=shortname;,} | shortname
LongName | The full name of the logging component that issued the message or trace event. This is typically the fully qualified class name for WebSphere Application Server internal components, but can be some other identifier for user applications. | {text:Source,ftype=source;,} | source
EventType | A one character field that indicates the type of the message or trace event. Message types are in upper case. Possible values include: F - a Fatal message; E - an Error message; W - a Warning message; A - an Audit message; I - an Informational message; C - a Configuration message; D - a Detail message; O - a message that was written directly to System.out by the user application or internal components; R - a message that was written directly to System.err by the user application or internal components; Z - a placeholder to indicate the type was not recognized. | {map:Event Type,ftype=severity;,F=FATAL;E=ERROR;W=WARNING;A=AUdit;I=INFO;C=CONFIGURATION;D=DETAIL;O=SYSTEM OUTPUT;R=SYSTEM ERROR;Z=UNKNOWN} | severity
ClassName | The class that issued the message or trace event. | {text:Class,ftype=class;,} | class
MethodName | The method that issued the message or trace event. | {text:Method,ftype=method;,} | method
Organization | The organization that owns the application that issued the message or trace event. | {text:Organization,ftype=organization;,} | organization
Product | The product that issued the message or trace event. | {text:Product,ftype=product;,} | product
Component | The component within the product that issued the message or trace event. | {text:Component,ftype=component;,} | component
Managing Dashboards
An XpoLog Center Dashboard is a portal that contains gadgets. Multiple dashboards may be defined under an App context. The gadgets in the
dashboards are used to display visual or textual information from the logs that exist in the XpoLog environment.
Each gadget displays the data that the user requested to view in the gadget's definition. For example, three gadgets can be displayed in a
dashboard for displaying search results, transactions list, and Analytics summary. Gadgets simplify and expedite performing searches and
operations on the log file. For example, instead of going each time to the Search engine and running a search, you can define gadgets for
viewing these search results in different visual manners.
XpoLog has an engine that enables customizing multiple dashboards, each for a different purpose. For example, you can define four
dashboards – for application problems, performance problems, network issues, and security.
Each dashboard can contain multiple gadgets, with each gadget displayed in one of the available visualizations: Line chart, Bar chart, Column
chart, Pie chart, Data table, Events list, etc. The gadgets can be organized within the dashboard in any of several predefined layouts. Also, any
gadget can be dragged and dropped to a preferred location on the dashboard page.
To create a Dashboard see Adding a Dashboard.
To see existing dashboards open the Apps console and click the App in which you wish to add/modify a dashboard.
The following icons, which may be presented below a dashboard, indicate an important configuration of this dashboard:
The icon indicates that the dashboard is set as the system home page.
The icon indicates that the dashboard is scheduled to be exported.
The icon indicates that the dashboard is scheduled to be exported as part of its parent App's configuration. See App Settings.
The icon indicates that the dashboard is running in an offline mode.
XpoLog enables management of a dashboard from the menu entry of each dashboard. Mouse over a dashboard and click the icon to display the menu options, as follows:
View – For opening the dashboard.
Close (optional) – For closing a dashboard if it is open (i.e. currently being displayed).
Edit – For defining the general settings of a dashboard - name, description, generation interval, time frame, export settings, etc. (See Dashboard Settings)
Duplicate – For duplicating an existing dashboard and defining a new one on its basis.
Copy To – For copying an existing dashboard to another App and defining a new one on its basis.
Move To – For moving an existing dashboard to another App, without leaving the dashboard in the current App, and defining a new one on its basis.
Export Conf. – For exporting a dashboard's configuration (all settings) to allow an import of its definition in another XpoLog (See Export / Import a Dashboard).
Delete – For removing an existing dashboard.
Click a dashboard to enter its view and gadgets administration options.
Adding a Dashboard
To add a new Dashboard to XpoLog:
1. In the main screen, click the Apps tab on the top left.
The Apps management console is displayed. Select an App or create one then click it to enter.
2. Click the 'Add New Dashboard' entry in the menu on the left or the 'Add New Dashboard' icon in the main screen.
Name the new Dashboard - a new dashboard is created.
3. Click the dashboard and add gadgets that visualize data from the environment, or alternatively, load the dashboard by clicking the new dashboard and then select 'Add Gadget' from the icon on the top right hand side of the Dashboard toolbar. See Managing Gadgets.
4. To edit a dashboard, mouse over the dashboard and click the icon, then click the 'Edit' entry to display the Dashboard Settings. Alternatively, load the dashboard by clicking the new dashboard and then select 'Edit Dashboard' from the icon on the top right hand side of the Dashboard toolbar.
Removing a Dashboard
To remove a Dashboard from XpoLog:
1. In the main screen, click the Apps tab on the top left.
The Apps management console is displayed. Select an App and click it to enter.
2. Mouse over the dashboard and click the icon, then click the 'Delete' entry and confirm the operation - the Dashboard will be deleted.
Dashboard Settings
The dashboard settings contain different settings on the dashboard level. Unless changed individually, gadgets inherit the dashboard's settings.
There are several options to open the general settings of a dashboard:
1. In the dashboards screen, mouse over a dashboard and click the icon to display the menu items, then select Edit Dashboard.
2. Click a dashboard to load it and then select 'Edit Dashboard' from the icon on the top right hand side of the Dashboard toolbar.
The dashboard's general settings screen is opened.
General
The general settings section allows configuring the Name and Description of a dashboard.
Time Settings
The time settings section allows configuring the following:
Time Range: the default dashboard time range which all gadgets will display by default, unless configured individually otherwise.
A Time Range which is set to Live determines a real time execution of the dashboard - gadgets will not be generated in the background; results will be calculated and displayed in real time only, and only while the dashboard is open.
Generation Frequency: the frequency at which new data will be processed and displayed in the gadgets.
A Generation Frequency which is set to Never determines an offline mode of the dashboard - gadgets will be generated only on the exported time definition and/or on demand.
Dashboard Sources
The dashboard sources enable a generic definition of the dashboard's logs/folders/servers sources, which will be added to all search queries used in the gadgets.
For example, if the dashboard should refer to servers x, y, z then it is possible to specify this directly in each gadget's search query. Alternatively, it is possible to use generic queries in the gadgets and specify the list of sources in the Dashboard Sources section.
Using Dashboard Sources makes it very simple to duplicate, maintain, and manage the list of sources that will be analyzed by the dashboard.
User Inputs
User Inputs provide an interface for users to supply values that affect gadget search terms and the displayed results based on their selection. Typically, the inputs are displayed as checkboxes, text areas, dropdown menus, or radio buttons.
The forms allow users to visually make selections which impact the underlying searches and focus only on points of interest while viewing the dashboard's results.
There are 2 aspects the should be configured in order to use Inputs -
General Definition (Dashboard''s Settings section)
In the Dashboard Settings section the general settings of the inputs are configured - type of input, input key, label, etc. Based on these definitions the user inputs form at the top of a dashboard will be built and displayed.
Searches Definition (Gadget''s underlying searches)
It is also mandatory to use inputs keys within the gadgets underlying searches so in case a user uses an input the correspondent gadget
will be updated based on the selection.
Export Settings
The export settings section determines if and when to export this dashboard.
Exporting Frequency: a frequency, a specific time, or several times at which the dashboard will be exported according to the Export to Email
/ Export to File definition.
Exporting:
The export mechanism supports exporting all selected dashboards as PDF and/or CSV files by email and/or by saving a file in the
specified location.
Export To Email: the email settings used when exporting the dashboard by email (for multiple recipients use a
comma-separated list).
Export To File: the format, location, and retention settings used when exporting the dashboard to files on the file
system.
Note: images are not exported in a CSV-formatted file.
XpoLog dashboards support the definition of multiple export schedulers. Using more than one export scheduler allows configuring a specific
date range and specific user inputs for each scheduled export. For example, it is possible to configure on the same
dashboard one export scheduler that runs once a week on the last week in a daily granularity (per-day view) and another that runs once a day on
the last day in an hourly granularity (per-hour view) - the result will be a daily export presenting the last day on an hourly basis, and a
weekly export presenting the last week on a daily basis.
User Inputs - General Definition
User Inputs provide an interface for users to supply values that affect the gadgets' search terms and displayed results based on their selection.
Typically, the inputs are displayed as checkboxes, text areas, dropdown menus, or radio buttons.
The forms allow users to visually make selections which impact the underlying searches, and to focus only on points of interest while viewing a
dashboard's results.
Configuring User Inputs:
The first part of configuring inputs is done in the Dashboard's Settings section. Edit the dashboard and you'll find a User Inputs section.
In the User Inputs section, Administrators define the list of Inputs to be available while viewing a dashboard. Each input has several settings:
General Settings:
- Input Key - The key is the unique identifier of an input, which is used in the underlying search within a gadget. Upon selection of a value in the
input form in the dashboard, the selected value is integrated into the query or queries that contain this key
Display Settings:
- Title - The title of the input that is displayed in the inputs form above this input
- Description - The input''s description
- Visible - Determines whether this input is displayed in the inputs form or not
- Break Line - Determines whether this input starts a new line in the inputs form or not
- Advanced - Determines if this input should be presented in the main inputs form (default) or should be displayed only in the advanced section of
the inputs form
Input Settings:
- Input Type:
Text
Default Value - The value that will be used by default (leave empty for an empty default)
Placeholder - Text displayed within the input text area to suggest to the user what the optional values are
List
Multiple Selection - Determines whether it is possible to select more than one value from the list
Show as Drop Down - Determines whether the list is displayed as a horizontal values list or as a dropdown menu
List Type - Determines the type of list of this input:
Static - A static list of values entered by the administrator
Query Based - A dynamic list which is a result of a search query
Sources Based - A dynamic list of sources available in XpoLog (Logs, Folders, AppTags, Servers)
Predefined - Upload a key=value type of file which contains a list of values that will be displayed in the input
Checkbox
Checked by Default - Determines whether the checkbox is checked by default (a true/false type of input)
Checkbox Label - The label to be displayed next to the checkbox
After completing the configuration and saving it, the configured inputs are displayed at the upper part of the dashboard. However, using the
inputs has no effect until the second part - configuring the gadgets' underlying searches to use the inputs - is completed.
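For the Predefined list type, the uploaded key=value file maps an internal value to the label shown in the input. The exact file format XpoLog expects is not documented here, so the following sketch only illustrates a plausible key=value layout and how such pairs would typically be read; the sample keys and labels are hypothetical.

```python
# Hypothetical key=value file content for a 'Predefined' list input.
# The exact format XpoLog expects is an assumption - this only
# illustrates the general key=value idea described above.
sample = """\
1s=1 second
1m=1 minute
1h=1 hour
"""

# Read the pairs the way such a file would typically be parsed:
# split each non-empty line on the first '=' into key and label.
options = dict(line.split("=", 1) for line in sample.splitlines() if line)
print(options)  # keys become values, labels become display text
```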
User Inputs - Searches Definition
User Inputs provide an interface for users to supply values that affect the gadgets' search terms and displayed results based on their selection.
Typically, the inputs are displayed as checkboxes, text areas, dropdown menus, or radio buttons.
The forms allow users to visually make selections which impact the underlying searches, and to focus only on points of interest while viewing a
dashboard's results.
Searches in the Gadgets within a dashboard
After the inputs are defined in the dashboard settings, it is mandatory to configure the searches of the gadgets to use them, in order to update a
gadget's result upon an input selection.
The syntax used in the search queries is [XI:INPUT_KEY] - this value is replaced by XpoLog when a user makes a selection in the
inputs form and clicks Apply. INPUT_KEY stands for the input key configured in the dashboard settings' inputs section.
For example:
1. In the dashboard settings, create a Granularity input with the key 'interval'. Set it as a list of static values (1 second, 1 minute, 1 hour
(default), etc.) and save:
2. In the dashboard itself you will see a dropdown menu with the values of the Granularity input (with the default selected).
Note: at this point the gadget will not react to selections made on the input, as the input key is not yet combined with the search query:
3. Edit the gadget, combine the interval XpoLog Input as part of the query, and save:
* | count | interval [XI:interval] | display count as events over time
Notice the [XI:interval], which will be replaced when a value is selected in the input.
4. Selecting the Granularity 'Minutes' and clicking Apply will display a different view of the gadget:
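The substitution described above can be sketched in a few lines. This is an illustrative model only, not XpoLog's actual implementation; the fallback to * mirrors the default behavior documented later for the search-term input.

```python
import re

def apply_inputs(query: str, selections: dict) -> str:
    """Replace [XI:key] placeholders in a search query with the user's
    selections (illustrative sketch only, not XpoLog code)."""
    def repl(match):
        key = match.group(1)
        # Fall back to '*' when no value was selected, mirroring the
        # documented default for the search-term input.
        return str(selections.get(key, "*"))
    return re.sub(r"\[XI:([^\]]+)\]", repl, query)

query = "* | count | interval [XI:interval] | display count as events over time"
print(apply_inputs(query, {"interval": "1 minute"}))
# -> * | count | interval 1 minute | display count as events over time
```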
User Inputs - Example
The following example demonstrates the steps to add sources, granularity, and a search term to a gadget using User Inputs:
1. Step I: Sources Input
The following input retrieves a list of logs defined in XpoLog under the 'XpoLog System Log' folder, to be selected and combined in the
search:
Result in Dashboard:
Query Used:
* in [XI:sources]| count | interval 1 hour | display count as events over time
Upon selection the query will run only on the selected log(s).
2. Step II: Granularity Input
The following input displays a list of granularity options to be selected and combined in the search:
(Note: if a default is not selected, the gadget will wait for a selection before displaying the result)
Result in Dashboard:
Query Used:
* in [XI:sources]| count | interval [XI:interval] | display count as events over time
Upon selection of sources the query will run only on the selected log(s), and upon selection of granularity the result will be displayed in the
specified granularity.
3. Step III: Search Term Input
The following input displays a search term to be combined in the search if entered:
(Note: if a default is not specified, the gadget will use * as the search term)
Result in Dashboard:
Query Used:
[XI:search] in [XI:sources]| count | interval [XI:interval] | display count as events over time
Upon entering a search term it will be used in the query; upon selection of sources the query will run only on the selected log(s); and upon
selection of granularity the result will be displayed in the specified granularity.
Result:
The dashboard will now be loaded with the default values and will display the list of User Inputs, providing users with the ability to modify
parameters and reload the dashboard based on their selection.
For example - here's a view of the same dashboard over the last 1 hour, on a specific log (xpologlog), in a Minutes interval, searching only for
'error or fail*' in that log:
Dashboards Options
You can manage a Dashboard in XpoLog by using the options accessible by clicking the
icon on the top right hand side of the Dashboard toolbar.
Available dashboard options are:
Add Gadget – For adding a gadget to the dashboard. See Managing Gadgets in the Administrator's Guide.
Edit Dashboard – For editing the general settings of a dashboard. See Dashboard Settings in the Administrator's Guide.
Save Layout – If the dashboard's layout was modified, click this option to save it as the default layout of this dashboard.
Reset Layout - For resetting a layout back to its default in the current display.
Export to PDF – For exporting the dashboard to a PDF file (see Exporting the Dashboard to a PDF/CSV).
Export to CSV – For exporting the dashboard to a CSV file (see Exporting the Dashboard to a PDF/CSV).
Set as Home Page – For details, see Setting a Dashboard as Homepage in the Administrator's Guide.
Copy Permalink – This option copies to the clipboard a direct link to the dashboard. The link can then be used outside XpoLog to
present the dashboard's result (for example, in an iFrame in an external portal). The following parameters can be added to the link:
Login credentials - mandatory in case Security is active in XpoLog; a username and password with the credentials to view the
dashboard should be added: &autoLogin=true&username=[USER_NAME]&password=[PASSWORD]
Enable Zoom - optional; dashboards/gadgets contain links to zoom back into XpoLog to see the result, and by default the
zoom-in links are presented. It is possible to add a parameter which determines this behavior: &allowZoom=false
or &allowZoom=true
Display in Black Theme - optional; by default, permalinks display dashboards in the white theme. It
is possible to add a parameter which sets the black theme: &blackTheme=true
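Assembling such a permalink can be sketched as follows. The base URL path, host, port, and credential values here are all placeholders (the real base comes from the Copy Permalink option); only the parameter names are taken from the description above.

```python
from urllib.parse import urlencode

# Placeholder base permalink - in practice this is the link copied via
# the dashboard's 'Copy Permalink' option; host, port, and path are
# hypothetical here.
base = "http://MACHINE_NAME:30303/logeye/view/dashboard"

# Parameters described above: credentials (mandatory when Security is
# active), the zoom-in links toggle, and the black theme toggle.
params = {
    "autoLogin": "true",
    "username": "USER_NAME",
    "password": "PASSWORD",
    "allowZoom": "false",
    "blackTheme": "true",
}
print(base + "?" + urlencode(params))
```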
Setting a Dashboard as Homepage
You can set a dashboard to appear on the homepage of XpoLog Center.
To set a dashboard as homepage:
1. Open the dashboard that you want to set as homepage, and click the
icon on the top right hand side of the Dashboard toolbar.
2. Click Set as Home Page.
The Set Home Page dialog box opens.
3. Select the Default home page checkbox, and then click Save.
The dashboard is set as the XpoLog homepage.
Note: Each user may define a specific dashboard to be their home page. The system Administrator should define the dashboard which will be the
default home page of XpoLog.
Export / Import a Dashboard
Exporting a Dashboard Configuration
Exporting a dashboard creates a zip file with the entire dashboard's configuration, to enable a future import into another XpoLog instance.
In the dashboards console under an App, mouse over a dashboard, click the
icon to display the menu items, and select Export Conf.
A zip file of the dashboard's configuration is created.
Importing a Dashboard Configuration
Importing a dashboard enables fast creation of a dashboard from its definition. To import a dashboard:
In the main Apps screen, click the App into which the dashboard should be imported. On the left hand side, click 'Import Dashboard Conf.',
select the dashboard's configuration zip file, and click the Import Dashboard button.
A new dashboard is created.
Managing Gadgets
Gadgets are the data visualization units which are displayed in dashboards. Multiple gadgets can be added to a single dashboard to visualize
data in many forms and shapes such as line chart, area chart, bar chart, column chart, pie chart, events list or table, analytics summary,
transactions list and more.
Adding a gadget is done from within a dashboard. After clicking a dashboard to load it, select 'Add Gadget' from the
icon on the top right hand side of the Dashboard toolbar.
The Add a Gadget administration console opens:
Click one of the Visual Types to filter the list of available gadgets to that specific type, or type the name in the search box
to filter the list.
By clicking the image of the selected type, the gadget's administration screen appears for defining the gadget.
To add a gadget to a dashboard:
1. Create a dashboard or load an existing dashboard to which you want to add a gadget, and select 'Add Gadget' from the
icon on the top right hand side of the Dashboard toolbar.
The Select Gadget page opens, displaying the available gadget types.
2. Click a gadget type.
A page opens for defining the information for the new gadget. Fill in the fields of the selected gadget type:
For a Line Chart gadget, see Adding a Line Chart Gadget
For a Pie Chart gadget, see Adding a Pie Chart Gadget
For a Column Chart gadget, see Adding a Column Chart Gadget
For a Bar Chart gadget, see Adding a Bar Chart Gadget
For an Area gadget, see Adding an Area Gadget
For a Scatter gadget, see Adding a Scatter Chart Gadget
For a Gauges gadget, see Adding a Gauge Gadget
For a Table gadget, see Adding a Table Gadget
For an Events gadget, see Adding an Events Gadget
For a Map gadget, see Adding a Map Gadget
For a Transactions gadget, see Adding a Transactions Gadget
For an Analytics gadget, see Adding an Analytics Gadget
Gadget Definition Principles
The definition screen of gadgets may change slightly based on the gadget type that was selected; however, the principles of the definition are
similar.
Below is an example of a 3D Pie Chart gadget definition screen that details the required information for generating the result:
Title - enter a title for the gadget
Search Query - enter the simple/complex search query that will be used by this gadget
Group By - the result will be aggregated based on this selection (this option is disabled when entering a complex search query)
Time Range - the time range over which this gadget will generate its result (by default, gadgets inherit their parent dashboard's time range
definition)
Max Number of Results - specify the maximum number of results to be displayed
More Settings
Pie Type - specify the pie type (in the case of a pie - regular pie, donut, or semi-circle donut)
3D - specify whether the selected type should be presented in 3D or 2D
Note: at any given time it is possible to click 'Change gadget type' and select a different type while keeping the current definitions.
Results Example:
Adding a Line Chart Gadget
Displays a line/spline chart showing a count over time or label of log events matching a given simple/complex Search query; gadget has a View in
Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Line Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, select the type of X-Axis to be displayed: Time or Label.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. If available, click on More Settings in order to specify specific visualization options for this gadget.
7. Click the Save button.
The gadget is saved in the dashboard.
Note: in the settings of some of the line chart gadgets it is possible to add plot bands, which highlight a certain line in a selected color or paint
an area:
Results Example:
Adding an Area Chart Gadget
Displays an area chart showing a count over time or label of log events matching a given simple/complex Search query; gadget has a View in
Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add an Area Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex search syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, select the type of X-Axis to be displayed: Time or Label.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. If available, click on More Settings in order to specify specific visualization options for this gadget.
7. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Column Chart Gadget
Displays a column/stacked column chart showing a count over time or label of log events matching a given simple/complex Search query; gadget
has a View in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Column Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, select the type of X-Axis to be displayed: Time or Label.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. In Max Number of Results, select the maximum number of events to be returned.
7. If available, click on More Settings in order to specify specific visualization options for this gadget.
8. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Bar Chart Gadget
Displays a bar/stacked bar chart showing a count over time or label of log events matching a given simple/complex Search query; gadget has a Vi
ew in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Bar Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, select the type of X-Axis to be displayed: Time or Label.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. In Max Number of Results, select the maximum number of events to be returned.
7. If available, click on More Settings in order to specify specific visualization options for this gadget.
8. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Stacked Grouped Column Chart
Displays a stacked grouped column chart showing a count over time of log events matching a given simple/complex Search query and associated
with another group; the gadget has a View in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Stacked Grouped Column Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax. This gadget requires two group-by values: one
which will be used as the base category, and another which will be displayed as a stack on top of it.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, the type of X-Axis will be Time.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. In Category Fields, enter one of the group names (used in the query's 'group by') to be used as the base category.
7. If available, click on More Settings in order to specify specific visualization options for this gadget.
8. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Pie Chart Gadget
Displays a pie/donut chart showing a count over time or label of log events matching a given simple/complex Search query; gadget has a View in
Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Pie Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Group By, select the grouping dimension of the result: Log, Application, or Server.
4. In Time Range, select the time frame following which the gadget display is to be refreshed.
5. In Max Number of Results, select the maximum number of events to be returned.
6. If available, click on More Settings in order to specify specific visualization options for this gadget.
7. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Scatter Chart Gadget
Displays a scatter chart showing a count over time or label of log events matching a given simple/complex Search query, grouped by log,
application, or server; gadget has a View in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Scatter Chart Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex search syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, select the type of X-Axis to be displayed: Time or Label.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. If available, click on More Settings in order to specify specific visualization options for this gadget.
7. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Heat Map Gadget
Displays a Heat Map chart showing a count over time or label of log events matching a given simple/complex Search query; gadget has a View in
Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Heat Map Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In X-Axis, select the type of X-Axis to be displayed: Time or Label.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server (available if a simple search query is used).
6. If available, click on More Settings in order to specify specific visualization options for this gadget:
a. Set Color Labeling Theme - 2 colors / 3 colors, and the distribution of the results in the Heat Map.
b. Category Fields: if X-Axis is set to display a label, enter the category which will populate the X-Axis.
7. Click the Save button.
The gadget is saved in the dashboard.
Adding a Gauge Gadget
Displays a gauge showing a result of a given simple/complex Search query; gadget has a View in Search link that can be clicked to navigate to
the Search Console to perform a drill-down.
To add a Gauge Chart Gadget (Speedometer/VU Meter/Solid):
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. Click on More Settings in order to specify minimum and maximum values for this gadget, a label, and plot band colors for the different levels
between min and max.
5. Click the Save button.
The gadget is saved in the dashboard.
Results Examples:
Adding a Table Gadget
Displays a search result table showing a count of log events matching a given simple/complex Search query, grouped by log, application, or
server; gadget has a View in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Table Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Display Columns, type the names of the columns to be displayed separated by commas, or leave blank to display all columns.
4. In Time Range, select the time frame following which the gadget display is to be refreshed.
5. In Group By, select the grouping dimension of the result: None, Log, Application, or Server.
6. In Max Number of Results, select the maximum number of events to be returned.
7. Click the Save button.
The gadget is saved in the dashboard.
Result Example:
Adding an Events Gadget
Displays search result events showing log events matching a given Search query; the gadget has a View in Search link that can be
clicked to navigate to the XpoSearch search engine to perform a drill-down.
To add an Events Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the Search simple/complex syntax.
3. In Display Columns, type the names of the columns to be displayed separated by commas, or leave blank to display all columns.
4. In Time Range, select the time frame following which the gadget display is to be refreshed.
5. In Max Number of Events, select the maximum number of events to be returned.
6. Click the Save button.
The gadget is saved in the dashboard.
Result Example:
Adding a Map Gadget
Displays a Geo IP map of the result of a given search, grouped by countries or cities; used to find the city or country of an IP address in a log
record; gadget has a View in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Map Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the XpoSearch simple/complex search syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In Group Type, select the type of chart to be displayed: Countries or Cities.
5. In Max Number of Events, select the maximum number of events to be returned.
6. Click the Save button.
The gadget is saved in the dashboard.
Result Examples:
Adding a Google Map
Displays a Geo IP map of the result of a given search based on Google Maps, grouped by countries or cities; used to find the city or country of an
IP address in a log record; gadget has a View in Search link that can be clicked to navigate to the Search Console to perform a drill-down.
To add a Google Map Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the XpoSearch simple/complex search syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In Group Type, select the type of chart to be displayed: Countries or Cities.
5. If available, click on More Settings in order to specify specific visualization options for this gadget:
a. Map Center - by Data or by Specific location (determines the initial view when first loaded).
b. Visualization - type of visualization to be displayed in the gadget.
c. Set Color Theme - Heat Map style based on value, or one color.
d. Set Color Labeling Theme - 2 colors / 3 colors and distribution of the results in the Map.
6. Click the Save button.
The gadget is saved in the dashboard.
Note: Google Maps requires an Internet connection - viewing and exporting by users requires a connection only on the client side; scheduled
export requires a connection from the server.
Result Examples:
Adding a Transactions Gadget
Displays a transactions list matching a given search; the gadget has a View in Search link that can be clicked to navigate to the Search Console
to perform a drill-down.
To add a Transactions Gadget:
1. In Title, type a name for the gadget.
2. In Search Query, type the search query to run, based on the XpoSearch simple/complex search syntax.
3. In Time Range, select the time frame following which the gadget display is to be refreshed.
4. In Max Number of Transactions, select the maximum number of transactions to be returned.
5. Click the Save button.
The gadget is saved in the dashboard.
Result Example:
Adding an Analytics Gadget
Displays log top problems and server problems detected by Analytics; gadget has a View in Analytics link that can be clicked to navigate to the
Analytics Console to perform a drill-down.
To add an Analytics Gadget:
1. In Title, type a name for the gadget.
2. In Time Range, select the time frame following which the gadget display is to be refreshed.
3. Specify the Data Sources to be included in the Analytics view (server/folder/application/logs).
4. Click the Save button.
The gadget is saved in the dashboard.
Result Examples:
Search Administration
By default, all the logs in XpoLog are indexed. The indexing process enables extremely fast search on the log data and is highly
recommended.
If you wish to disable indexing on selected logs, this is possible under XpoLog Search > Administration.
Note: logs which are not indexed will not be available for search in the Search console; their data will be collected and available in the Log
Viewer only.
Analytics Administration
XpoLog Analytics is an automatic log analysis and monitoring console, which automatically scans all logs that enter
the system for errors, risks, statistical problems, and predefined rules. Its Problem Analysis dashboard generates
dynamic reports on the detected errors, maps problems over time, and tags them according to their severity. From
the Problems Analysis dashboard, users have immediate access to the analysis reports, with easy navigation and
zoom-in capabilities to the relevant log data to accelerate problem isolation.
XpoLog's Analytics console analyzes log data for the following two types of problems:
Predefined Errors – Detects problems that have been predefined as a saved search. Severity can be assigned to saved
searches in XpoLog Search. Once a severity is assigned to a saved search, it is presented in the Analytics console as a predefined problem.
Auto-Detected Errors – Uses Semantic Content Analysis. Based on semantic analysis of the logs' contents and a predefined
knowledge base, XpoLog Analytics detects in the logs thousands of errors and events that contain information related to a fault (for
example, events containing the word failure or error). Analytics immediately surfaces a very high percentage of the problems in the logs
of any application, without any configuration.
If activated, Servers Metrics Analysis displays the CPU, memory, and disk problems on the source servers from which the logs originated. The
problem definitions for metrics can be easily customized to meet environmental constraints.
In addition, the Analytics console runs statistical analysis on multiple use cases to identify unusual behavior in the application logs.
Problems such as high/low logging activity, applications/servers that stop logging as usual, or an IP that extensively calls the same URL are
captured and presented automatically.
Analytics Settings
Under the Analytics Settings section (Analytics > Administration > Settings) the following options are available:
1. General
Customize the Analytics console default view:
- View By: presenting the analysis based on Folders and Logs, Applications, or Servers.
- View Type: Total - presenting an analysis which summarizes the total number of log events and problems scanned in each
time slot. Risk - presenting an analysis which summarizes the severities of the problems scanned in each time slot.
- Group By: presenting an aggregated analysis of all logs (Summary View) or of each Folder/Application/Server independently (Split).
- Hierarchy Type: presenting the analysis in the defined hierarchy of Folders/Applications/Servers (Hierarchic) or simply listing each
log independently (Flat).
- Show Metrics: check this to present the metrics (CPU/Memory/Disk) measurements in the console (see below how to activate metrics
measurements).
- Dates Range: the default time frame that the Analytics console loads when first entered.
Note: the above settings determine the default view of the Analytics console when first entering it. The view may be easily changed by
users while viewing the Analytics user interface.
2. Problems Management
All severity changes made by users to automatically detected problems are centralized in this tab. System Administrators may
customize or reset these changes as required.
3. Metrics
The Analytics console can measure server metrics and present them side by side with the log analysis. Metrics are collected from
servers over SSH and over the Windows network. Metrics information (CPU/Memory/Disk) is stored for as long as needed, providing a
historical view of server CPU/Memory/Disk levels over time.
The metrics sampling interval and the problem definitions of the metrics analysis are controlled under the Metrics tab.
Note: In order to define a specific metrics policy (sampling interval and problem definitions) for a server, you may override the global
definition: go to Analytics > Administration > Logs, select the Servers view, right-click the server you wish to edit, and click the
Metrics link.
Troubleshooting Analytics Metrics
To verify XpoLog's ability to measure CPU/Memory/Disk space (Metrics) of a remote server in
XpoLog's Analytics, go to Analytics > Administration > Logs and select View by Servers, then click or right-click
a server and click "Verify Connection" to check the connectivity and validate the metrics measurements.
Windows machines:
1) RPC server is unavailable / Access is denied
- Problem description: Cannot access the RPC server on a remote computer.
- Probable cause: This can be caused by the Windows Firewall service and Distributed Component Object Model (DCOM).
- Solution: To allow connection to the remote computer:
a. Make sure that the used host name is valid and that the server is running.
b. Ensure that the user account that XpoLog uses is an administrator on the remote computer.
If the XpoLog user account is not an administrator on the remote computer, but the user account has Remote Enable permission on the remote
computer, then the user must also be given DCOM Remote Launch and Remote Activation privileges on the remote computer by running
Dcomcnfg.exe at the command prompt.
c. Allow for remote administration on the remote computer.
You can use either the Group Policy editor (Gpedit.msc) or a script to enable the Windows Firewall: Allow remote administration exception, or use
a NETSH firewall command at the command prompt to allow for remote administration on the remote computer.
The following command enables this feature:
netsh firewall set service RemoteAdmin enable
If you would rather use the Group Policy editor than the NETSH command above, use the following steps in the Group Policy editor (Gpedit.msc)
to enable "Allow Remote Administration" on the remote computer:
- Under the Local Computer Policy heading, double-click Computer Configuration.
- Double-click Administrative Templates, Network, Network Connections, and then Windows Firewall.
- If the computer is in the domain, double-click Domain Profile; otherwise, double-click Standard Profile.
- Click Windows Firewall: Allow remote administration exception.
- On the Action menu, select Properties.
- Click Enable, and then click OK.
For more information, go to http://msdn.microsoft.com/en-us/library/aa389286(VS.85).aspx
2) CScript is not recognized
- Problem description: The dynamic link library (DLL) VBScript.DLL is not properly installed on your computer.
- Probable cause: The file VBScript.DLL is likely missing or unregistered.
- Solution: Run "regsvr32 VBScript" in a Command Prompt.
UNIX machines:
1) Command not found
- Problem description: One or more of the following commands executed on the machine could not be found -
"date", "top", "df", "free"
- Solution: Ensure the user XpoLog is using has permission to run these commands. If required, install the missing command(s) or
package, or consult your system administrator.
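On Linux/Solaris, a quick way to confirm these commands are available to the XpoLog user is a small shell check. The sketch below is a generic illustration, not part of XpoLog; it probes "date" and "df", and the list can be extended with "top" and "free" on systems where they are expected.

```shell
# Check that the commands XpoLog's metrics collection relies on are
# available to the current user. Append "top" and "free" to the list
# on systems where they are expected.
required="date df"
missing=""
for cmd in $required; do
  if ! command -v "$cmd" >/dev/null 2>&1; then
    missing="$missing $cmd"
  fi
done
if [ -z "$missing" ]; then
  echo "all required commands found"
else
  echo "missing:$missing"
fi
```

Run the check as the same user XpoLog uses over SSH: a command may exist for root yet be outside another user's PATH.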
2) Account is missing
- Problem description: Failed to find the account associated with a host.
- Solution: Make sure that the account associated with the host is defined properly:
Go to Analytics > Administration > Logs and select View by Servers.
Select the server and click Verify Connection. If the verification fails, click Edit Server and select the Metrics tab.
Make sure the Connection Details are set. If not, click New and enter new connection details.
Click Save in the account page to save the newly defined account, and then Save in the server's edit window to save the server definitions.
3) Date command / format is missing
- Problem description: Failed to get date from the host.
- Solution: Contact your XpoLog administrator or XpoLog Support to verify XpoLog's configuration.
4) Failed to fetch
- Problem description: The metrics data (CPU, memory, or disk space) could not be fetched.
- Solution: Contact your XpoLog administrator or XpoLog Support to verify XpoLog's configuration.
5) Server type is unknown
- Problem description: May occur when a server has "NA" as its server type (for example, a server that was created through a database
account).
- Solution: Edit the server and set a proper connectivity account that XpoLog can use to measure metrics, save it, and verify that the connection
is valid.
For more information, please contact XpoLog Support.
Changing a Severity of a Problem
The Analytics engine presents two types of problems: Predefined and Auto-Detected.
Predefined problems are saved searches to which a severity was assigned; to change the severity, see Editing a Saved Search.
Auto-Detected problems are problems which were automatically detected by the Analytics engine. In the Analytics console, in the
'Most Severe Problems' section at the bottom, a menu on the right-hand side next to each problem allows customization of the
problem:
Change a severity of a problem
Exclude a problem from future analyses (note: this option applies to future analyses only; the problem is not removed from
the existing analysis)
To see all changes that were made on Analytics problems see Analytics Settings.
Managing Analytics Members
By default, all logs are analyzed by the Analytics engine. You can customize which logs are analyzed and which are
disabled under Analytics > Administration > Logs:
Select the view type - Folders and Logs, Applications, or Servers
Select the Folder/Log/Application/Server and click the Enable / Disable button
Note: In the Servers view, if server metrics are enabled in the system, you will be asked to enable/disable both the analysis of logs originating
from the selected server and the server metrics.
Monitors Administration
XpoLog comes with a built-in monitoring engine that enables you to monitor log data and get alerts when defined criteria are met. The Monitors
console presents all defined monitors, their last execution time, and their last status (failure = matching events were detected in the last execution
and alerts were sent; success = no matching events were found in the last execution and no alerts were sent).
Alert Types
The monitors can be automated, and send alerts in various forms:
Email - sends an email alert to a list of users (make sure you have configured the required mail settings in XpoLog).
Email Alert Advanced options:
Data Attachment - it is possible to add the following to the email alert:
Append event to end of email body: adds to the email body the latest log event that triggered the alert in the
current execution.
Attach a dashboard: attaches to the email one of the existing Dashboards.
Attach matched events as: attaches to the email all the records which triggered the alert in the current execution,
as a file of one of the available types: CSV / Tab Delimited / XML.
Check to zip the attached file: in case 'Attach matched events as' is checked, determines whether
the attachment will be zipped.
From Email Address - it is possible to customize the 'From' email address (by default, the system email address is
used).
SNMP Traps - sends an SNMP trap (make sure you have configured the required SNMP account in XpoLog).
JMS Messages - sends a JMS message (make sure you have configured the required JMS account in XpoLog).
Custom Scripting - an open mechanism which executes any script upon the monitor's failure.
Custom Scripting Details: it is possible to export all the records which triggered the alert in the current execution to a file (Program/Script path=CMD echo "export").
Custom Scripting Alert Advanced options:
Export Data - it is possible to export all the records which triggered the alert in the current execution to a file, in one of the
available types (under the Custom type it is also possible to export only selected fields).
REST API Call - it is possible to make a URL call (POST/GET/PUT/DELETE) and send information which was detected in the monitor
execution.
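As a rough sketch of what a REST API call alert involves: a payload carrying monitor details is sent to a configured URL. The endpoint and JSON field names below are hypothetical, not XpoLog's actual API; the bracketed placeholders are the ones XpoLog substitutes at execution time.

```shell
# Hypothetical payload; XpoLog replaces the bracketed placeholders
# with the monitor's values when the alert fires.
payload='{"monitor":"[MONITOR_NAME]","status":"[MONITOR_STATUS]","log":"[LOG_NAME]"}'
echo "$payload"
# The resulting call would resemble (endpoint is a placeholder):
#   curl -s -X POST "https://alerts.example.com/api/notify" \
#     -H "Content-Type: application/json" -d "$payload"
```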
NOTE: XpoLog can add additional information to the alerts from the logs and monitors which are executed, such as log name, monitor name, log
column content, etc. It is also possible to add selected log fields to monitor alerts by placing the following placeholders (case sensitive):
[SEARCH_QUERY] = By default, the search query used in the search monitor is presented in the alert's subject. Occasionally, the search
query may be long, so it is possible to include this placeholder in the email body, where it will be replaced upon execution with the query.
[END_OF_SUBJECT] = Used at the end of the message subject in case there is a need to exclude the search query from the subject.
[COLUMN_NAME] = the name of the column whose content will be included
[MONITOR_ID] = the unique id of the monitor
[MONITOR_NAME] = the name of the monitor
[MONITOR_STATUS] = the monitor status: 1 = failure, 0 = success
[LOG_NAME] = the name of the log that the included event originated from
[LOG_ID] = the unique id of the log that the included event originated from
[HOST_NAME] = the name of the host that the included event originated from
[APPS_ID] = the application(s) name(s) that the included event originated from
[FOLDER_NAME] = the name of the parent folder that the included event originated from
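To illustrate how the placeholders behave, the sketch below performs the same kind of substitution XpoLog applies internally; the sed commands and the sample values (DiskSpaceCheck, system.log) are illustrative only, not XpoLog's mechanism.

```shell
# An alert template using the placeholders listed above.
template='Monitor [MONITOR_NAME] on log [LOG_NAME] finished with status [MONITOR_STATUS]'
# At execution time each placeholder is replaced with the monitor's
# values (simulated here with sed and made-up values).
body=$(printf '%s' "$template" | sed \
  -e 's/\[MONITOR_NAME\]/DiskSpaceCheck/' \
  -e 's/\[LOG_NAME\]/system.log/' \
  -e 's/\[MONITOR_STATUS\]/1/')
echo "$body"
# → Monitor DiskSpaceCheck on log system.log finished with status 1
```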
Defining a Search Monitor
The XpoLog search monitor is run automatically by the system at scheduled intervals and executes a search query as its monitoring rule. A search
monitor can also be defined directly from the search console.
The following is a step-by-step flow to add a search monitor with alerts:
1. From The Monitors console (Manager > Log Actions > Monitors) - select Add Search Monitor.
2. Name the Monitor, and add the search query (simple or complex) you wish the monitor to execute.
3. Alerts - Add new Alert. If this is the first time XpoLog is configured to send alerts then you will be asked to enter details that XpoLog can use to
send the requested alert. Create the alert and save it.
4. Schedule - configure the frequency that you wish for this monitor - based on the configured frequency the monitor will scan the log.
Never - turns off the scheduler; the monitor will not be executed
Daily - will run every day based on a time interval (Repeat Every) or at a specific hour (Daily At)
Weekly - will run on the specified day(s) based on a time interval (Repeat Every) or at a specific hour (Daily At)
Monthly - will run in the specified month(s) on a given day based on a time interval (Repeat Every) or at a specific hour (Daily At)
5. Save it. It will run automatically based on the frequency you configured; it is also possible to execute a monitor manually if needed by
selecting the monitor and clicking the Execute button.
Note: On each execution, the monitor scans only new records and not the entire log.
Advanced section:
Scan log from last scan point - determines whether the monitor scans only new records in the log on each execution or the entire log
every time. By default, this option is selected.
Failure - determines the fail criteria of a monitor. By default, if a single record matches the configured rule, it is
considered a failure and the alerts are triggered.
Once failed, execute failure actions only after - after a failure, alerts will be sent again only after a specified number of additional failures
without a success in between.
Once failed, execute failure actions for - by default, the monitor executes the alerts on the latest record that was matched in each
execution (The last event only - the recommended option). None of the events - no alerts will be sent. The first event only - a single
alert on the first record that was matched in each execution. Each event - the alerts will be triggered for each log record that was
matched in each execution (not recommended, since the number of matching records is not limited and an alert will be sent for
each one).
In case Each event is selected, it is recommended to limit the total number of alerts that may be sent per execution (Maximum
number of alerts to send).
Positive Alerts - execute a positive alert as an indication that a specified time has passed since the last failure.
Defining a Log Monitor
The XpoLog log monitor is run automatically by the system at scheduled intervals on a selected log with filter rule(s).
The following is a step-by-step flow to add a monitor with alerts on a log:
1. From The Monitors console (Manager > Log Actions > Monitors) - select Add Log Monitor.
2. Name the Monitor, and select the log that you wish to monitor from the existing logs in XpoLog, or define a new log.
3. Rules - select the rule(s) you wish to monitor from the existing rules on the selected log or define new rules (rules can be also regular
expressions).
4. Alerts - Add new Alert. If this is the first time XpoLog is configured to send alerts then you will be asked to enter details that XpoLog can use to
send the requested alert. Create the alert and save it.
5. Schedule - configure the frequency that you wish for this monitor - based on the configured frequency the monitor will scan the log.
6. Save it. It will run automatically based on the frequency you configured; it is also possible to execute a monitor manually if needed by
selecting the monitor and clicking the Execute button.
Note:
On each execution, the monitor scans only new records and not the entire log.
It is also possible to configure the alerts to include the entire result or selected information from the matched
log events:
Under the Advanced Section of the email alert you can attach data:
Append event to end of email body - matched log events will be included in the email body.
Attach matched events as a compressed Tab Delimited / CSV / XML file.
It is possible to add selected log fields to monitor alerts by placing the following place holders:
[COLUMN_NAME] = the name of the column whose content will be included
[MONITOR_ID] = the unique id of the monitor
[MONITOR_NAME] = the name of the monitor
[MONITOR_STATUS] = the monitor status: 1 = failure, 0 = success
[LOG_NAME] = the name of the log that the included event originated from
[LOG_ID] = the unique id of the log that the included event originated from
[MERGE_SOURCE_NAME] = the name of the log which triggered the alert (relevant for
merged logs)
Advanced section:
Scan log from last scan point - determines whether the monitor scans only new records in the log on each execution or the entire log
every time. By default, this option is selected.
Failure - determines the fail criteria of a monitor. By default, if a single record matches the
configured rule, it is considered a failure and the alerts are triggered.
Once failed, execute failure actions only after - after a failure, alerts will be sent again only after a specified number of additional failures without a success in between.
Once failed, execute failure actions for - by default, the monitor executes the alerts on the latest record that
was matched in each execution (The last event only - the recommended option). None of the events -
no alerts will be sent. The first event only - a single alert on the first record that was matched in each
execution. Each event - the alerts will be triggered for each log record that was matched in each execution
(not recommended, since the number of matching records is not limited and an alert will be sent for each
one).
In case Each event is selected, it is recommended to limit the total number of alerts that may be sent per
execution (Maximum number of alerts to send).
Positive Alerts - execute a positive alert as an indication that a specified time has passed since the last failure.
Defining Monitors Group
An XpoLog monitor group is an entity containing multiple monitors which can be executed as a group.
The following is a step-by-step flow to add a monitor group:
1. From The Monitors console (Manager > Log Actions > Monitors) - select Add New Group.
2. Name the group, and select the monitors that you wish to include as part of this group.
3. Alerts - Add new Alert. If this is the first time XpoLog is configured to send alerts then you will be asked to enter details that XpoLog can use to
send the requested alert. Create the alert and save it.
4. Schedule - configure the frequency that you wish for this group; based on the configured frequency, the group's monitors will be executed.
5. Save it. It will run automatically based on the frequency you configured; it is also possible to execute it manually if needed by
selecting it and clicking the Execute button.
Advanced section:
Failure - determines the fail criteria of the group. By default, if a single monitor from the group members fails, it is considered a
group failure and the alerts are triggered; it is also possible to consider the group failed only after a specified number of monitor
failures.
Once failed, execute failure actions only after - after a failure, alerts will be sent again only after a specified number of additional failures
without a success in between.
Positive Alerts - execute a positive alert as an indication that a specified time has passed since the last failure.
Management Console for Cloud Accounts
Address Book is used for adding accounts for connecting to sources of logs. Logs or log directories from these sources can then be added to
XpoLog. Some of the account types that can be added are dedicated to one source, such as SSH, which connects to one machine that can
have many logs. However, there are also three cloud account types that can be added – Google App Engine, Amazon Web Services, and Hadoop.
These cloud accounts manage many applications that have many logs. For these cloud accounts, which can access large
quantities of data in many logs, XpoLog provides a separate management console for cloud (big data) integration. Based on these cloud
accounts, data can be added to XpoLog from the clouds accessed through them.
To integrate to the cloud:
1. In XpoLog Manager, in the menu, select Administration > Cloud.
The Cloud and Big Data Configuration Management console opens for managing the cloud accounts.
AppTags
The implementation of AppTag is optional in XpoLog Center, and is used mainly for data enrichment.
An AppTag is a logical tagging of different elements that exist in XpoLog Center. The AppTag definition includes a set of static parameters that
describe the application, and a set of the members that compose the application, such as folders, logs, accounts, dashboards, or anything else
that is configured in XpoLog.
The AppTags console enables administrators to enrich the data with logical groupings by defining AppTags. These
AppTags can include data from multiple logs or servers. The AppTags console presents all the AppTags defined in XpoLog and
enables administrators to create new ones, and to modify and delete existing AppTags.
The implementation of AppTags has the following advantages:
Search can be run at the AppTag level. For example: error in AppTag.
In Analytics, analysis can be viewed according to AppTags (i.e., Applications).
Security can be managed at the AppTag/application level. Users who enter the application under an AppTag context can only view the
members under that AppTag definition.
Adding an AppTag
To add an AppTag:
1. In the Administration menu, select AppTags.
The AppTags console is displayed. AppTags defined in XpoLog Manager are listed.
2. In the AppTags console, click the New AppTag button.
The Add AppTag console is displayed.
3. In Name, type a name for the new AppTag.
4. In Description, type a description of the new AppTag (optional).
5. Available tabs:
a. Members tab: select the members of the application (see Mapping Sources by AppTag).
b. Security tab (displayed only if security is active): set the view and edit permissions of users/groups related to the AppTag.
c. Localization tab (displayed only if security is active and if the Apply AppTags time zone option is selected under Advanced General
Settings): set the specific time zone of the AppTag.
6. Click the Save button.
The new AppTag is saved in XpoLog Center.
Mapping Sources by AppTag
When adding an AppTag to the system, you should associate with the AppTag all its members, i.e. the system elements connected to the AppTag. This
association enables managing a mini-environment in the system. Only the selected members will be available to users who are authorized to
use the AppTag.
To associate members with an AppTag:
1. In the Add AppTag console, click the Members tab.
2. Click a member type, and then select the checkboxes of the members to associate with the AppTag.
3. Repeat step 2 for all member types, as required.
4. Click Save.
The members are associated with the AppTag.
Editing an AppTag
You can modify the name, description, and/or members of an AppTag.
To modify an AppTag:
1. In the Administration menu, select AppTags.
The AppTags console is displayed. AppTags defined in XpoLog Manager are listed.
2. Select an AppTag, and then click the Edit button.
The Edit AppTag console is displayed.
3. Modify the Name, Description, or the members of the AppTag (see Adding an AppTag).
4. Click the Save button.
The AppTag is updated in XpoLog Center.
Removing an AppTag
You can remove from XpoLog an AppTag and its members, or an AppTag only.
To remove an AppTag from XpoLog:
1. In the Administration menu, select AppTags.
The AppTags console is displayed. AppTags defined in XpoLog Manager are listed.
2. Select an AppTag, and then click the Delete button.
The Delete confirmation box is displayed.
3. Select one of the following: AppTag and members or AppTag only.
Depending on your selection, the selected AppTag and its members, or the AppTag only are deleted from XpoLog.
Address Book
Address Book displays a listing of all the connectivity accounts that are available in XpoLog for connecting to remote data sources. These
accounts contain information regarding remote protocol connections, databases, and emails. This information includes the account's name and
description, and specific information pertaining to the specific account type, such as host address, port, username, and password in the case of
an SSH account, or the email address in the case of an email account. These accounts are required for adding a log from a remote source, such
as a database log, as well as for a variety of other operations, such as exporting logs and modules and defining SQL queries on previously
defined logs.
In Address Book, you can create and manage the connectivity accounts, including modifying, enabling, disabling, and removing account(s).
Creating an Account
XpoLog enables creating the following types of accounts: Amazon Web Services, Database, Email, Google App Engine, Hadoop, JMS, Remote XpoLog, SNMP, SSH, and Windows Authentication. The following sections describe how to create each of these accounts.
To create an account:
1. In the Log Manager toolbar, click Tools > Address Book.
A list of all accounts is displayed. Buttons are provided for creating a new account, and enabling, disabling, editing, deleting, or verifying
an account.
2. In Address Book, click New Account.
A list of available account types is displayed.
3. Select the option button of the type of account that you want to create, and then click the Continue button.
The configuration page for the selected account type is displayed.
4. Configure the account. See the sections below for the configuration procedures of the various account types.
5. Click Save.
The new account is saved in Address Book.
Note: After saving the account in Address Book, it is recommended to click the Verify button to ensure that XpoLog can establish a valid
connection to the newly created account.
Configuring an Amazon Web Services Account
An Amazon Web Services (AWS) account enables you to access data stored in files and folders in the Amazon server.
The following are required to use any service on Amazon Web Services:
Access Key ID – the username; an alphanumeric text string that uniquely identifies the user who owns the account. No two accounts can
have the same AWS Access Key ID.
Secret Access Key – plays the role of a password. It is secret, as it should be known to the owner only.
The following procedure describes how to configure the parameters of an Amazon Web Services account.
To configure an Amazon Web Services account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Access Key ID, type the identification name for signing into AWS.
4. In Secret Access Key, type the password for signing into AWS.
5. In Default Region, select the geographical area where you want to access data.
Configuring a Database Account
The following procedure describes how to configure the parameters of a database account.
Note: Not all fields apply to all database types.
To configure a database account:
1. In the Database Types page,
Select a database type and click the Create Account button
OR
Double-click the database type.
The configuration page for the selected database type is displayed. Driver Name is filled in automatically.
2. In Name, type the name of the new database account. The default is the database type.
3. In Description, type a short description of the new account. Optional.
4. In Host Address, type the address of the machine on which the database is installed. Default: localhost.
5. In Port, type the port number on which the database accepts connections.
6. In Database Name, type the name of the database to connect to.
7. In Username and Password, type the username and password (optional) required for connecting to the database.
8. In Connection string params, type the names of the parameters that should be passed upon connection. Optional.
Creating a Data Source
If you choose to add a Data Source (and not define a database account), you should specify the following configuration details:
Name: the name of the data source.
Description (optional): the description of the data source.
JNDI Name: the JNDI name of the data source.
Environment Properties (optional)
Database Type: select the type of database the data source will work against (choose 'Other' for an unknown database).
Click the Save button to save the new account.
Verify the account to ensure XpoLog can establish a valid connection.
Configuring an Email Account
The following procedure describes how to configure the parameters of an Email account.
To configure an Email account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Email Address, type one or multiple email addresses used by the account. Separate multiple email addresses with a semicolon.
Configuring a Google App Engine Account
The following procedure describes how to configure the parameters of a Google App Engine account.
To configure a Google App Engine account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Email and Password, type the email address and password required to sign on to the Google App Engine account.
Configuring a Hadoop Account
The following procedure describes how to configure the parameters of a Hadoop account.
Note: Only connections to Hadoop version 0.20.203.0 and later are supported.
To configure a Hadoop account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Host Address, type the host name / IP address of the Hadoop environment.
4. In Port, type the port number on which the remote host accepts Hadoop connections.
Configuring a JMS Account
The following procedure describes how to configure the parameters of a JMS account.
To configure a JMS account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In JNDI Context, type the full JNDI context.
4. In JNDI Provider URL, type the URL to be used to access the JNDI provider.
5. In Username and Password, type the username and password required for connecting to the JNDI provider. Optional.
6. In JMS Topic Factory, type the JNDI name of the JMS topic factory.
7. In JMS Queue Factory, type the JNDI name of the JMS queue factory.
Configuring a Remote XpoLog Account
The following procedure describes how to configure a Remote XpoLog account for communicating with a remote instance of XpoLog over HTTP/S.
The respective HTTP/S ports must be open to enable the communication.
To configure a Remote XpoLog account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Host Address, type the host name / IP address of the remote XpoLog.
4. In Protocol, select whether the remote XpoLog listens on HTTP or HTTPS.
5. In URL Context, type the context under which the remote XpoLog is deployed (optional). Default: logeye.
6. In Port, type the number of the port on which the remote XpoLog listens. Default: 30303 for HTTP; 30443 for HTTPS
7. In Username and Password, type the username and password required to log in to the remote XpoLog, in case security is activated on
the remote XpoLog. Optional.
8. Check the Enabled checkbox if you wish this account to be enabled, or uncheck it to disable it. Disabled accounts do not allow
communication with the remote XpoLog instance.
9. Account Type:
a. Proxy - use this type if the remote XpoLog instance is processing the logs remotely and the current XpoLog instance should only
send queries to it and receive the results. In this mode the data itself will not be collected to the current XpoLog instance but will
be available for searches and view.
b. Agent - use this type if the remote XpoLog instance is used as an agent, i.e. the remote XpoLog instance is used to allow access
to the remote environment, and all the logs that are added from the remote XpoLog instance will be collected by the current
XpoLog instance. Usually, when this mode is selected, the remote XpoLog instance should also be set to 'Agent Mode' to reduce
its footprint on the remote server to a minimum.
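Putting the defaults above together, the base URL of a remote XpoLog instance is built from the protocol, host, port, and URL context. The host name below is a placeholder; the commented curl line is one generic way to check that the instance answers before saving the account.

```shell
host="xpolog-remote.example.com"  # placeholder for your remote host
protocol="http"                   # or https, per step 4
port=30303                        # default for HTTP; 30443 for HTTPS (step 6)
context="logeye"                  # default URL context (step 5)
url="$protocol://$host:$port/$context/"
echo "$url"
# curl -s -o /dev/null -w '%{http_code}' "$url"  # a 200 suggests the remote XpoLog is reachable
```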
Advanced Remote XpoLog Account Settings
Advanced Settings enable you to configure configuration synchronization. When activated, the configuration synchronization makes sure that for
each log that exists in the remote instance, a remote log will be created in the local instance. Note that deleting a log in the remote instance will
not delete the remote log in the local instance.
To configure advanced settings:
Click Advanced Settings.
The Advanced Settings section opens, with the Synchronize Configuration subsection.
Network Settings
In Network Settings, you can configure the following:
Compress Traffic – You can determine whether the traffic against the host will be compressed or not. By default, the traffic is
compressed.
To configure network settings:
1. Click Network Settings.
The Network Settings section opens.
2. Select the Compress Traffic checkbox.
Synchronize Configuration
In Synchronize Configuration, you can configure the following:
Enable remote configuration synchronization – You can enable the remote synchronization in order to create a remote log in the local
instance for each new log that is created in the remote instance.
Parent Folder – Specify the parent folder of new logs.
Remote Time Zone - Specify the time zone of new logs.
Collection Policy - Specify the collection policy of new logs.
To configure synchronize configuration:
1. Click Synchronize Configuration.
The Synchronize Configuration section opens.
2. Select the Enable remote configuration synchronization checkbox.
3. Specify the parent folder of new logs:
a. Select the Use default parent folder option in order to place the new logs under a folder named after the account.
b. Select the Use a specific parent folder option in order to select a specific parent folder for the new logs.
4. Specify the time zone of new logs.
5. Specify the collection policy of new logs.
Configuring an SNMP Account
The following procedure describes how to configure an SNMP account.
To configure an SNMP account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Host, type the host name/IP address of the remote host.
4. In Port, type the port number on which the remote host accepts SNMP traps.
5. In Version, select the version of SNMP to be used.
6. In Protocol, select the protocol to be used.
7. Select the Use Proxy checkbox to use the proxy; otherwise, leave cleared. Optional.
Configuring an SSH Account
The following procedure describes how to configure an SSH account.
SSH accounts can be enabled or disabled. If disabled, all related activity using the SSH account is suspended in XpoLog.
To configure an SSH account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Host Address, type the host name/IP address of the remote host.
4. In Port, type the port number on which the remote host accepts SSH connections. Default: 22
5. In Username and Password, type the username and password required for connecting to the remote host.
6. Select the Enabled checkbox to enable the account; otherwise, to disable, leave cleared. Optional.
7. Configure advanced settings, as required. See Advanced SSH Account Settings section below.
Advanced SSH Account Settings
Advanced Settings enable you to configure advanced general settings and customize the account policy.
To configure advanced settings:
Click Advanced Settings.
The Advanced Settings section opens, with the General Settings and Account Policy subsections.
General Settings
In General Settings, you can configure the following:
Private Key Path – Used when a private key is used to authenticate with the specified host (when the private key path is configured,
the password for connecting to the remote host is optional).
SCP – The default file transfer protocol is SFTP (SSH File Transfer Protocol). However, if the remote host does not support SFTP for file
transfer, you can use SCP (Secure Copy Protocol).
Administrator Email Address – The email address of the system administrator to be notified when an SSH policy is breached.
To configure general settings:
1. Click General Settings.
The General Settings section opens.
2. In Private Key Path, if private key authentication is used, type the path to the private key that XpoLog can use.
3. Select the SCP checkbox if the remote host does not support SFTP for file transfer.
4. In Administrator Email Address, type the email address of the system administrator to be notified upon connection failure.
Account Policy
In Account Policy, you can customize a specific account policy, instead of using the default policy for the SSH account, as configured in the
Connection Policy tab of the Settings > General page.
To customize the account policy:
1. Click Account Policy.
The Account Policy section opens.
2. Select the Define a Custom Policy option, and configure the custom policy's settings. For a full explanation of the settings, see Settings
> General, the Connection Policy tab.
Configuring a Windows Authentication Account
The following procedure describes how to configure the parameters of a Windows Authentication account.
To configure a Windows Authentication account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Domain, type the name of the domain in which the user is defined.
4. In Username and Password, type the username and password (optional) for connecting to the Windows Authentication account.
Configuring an Amazon Web Services (AWS) S3 Bucket Account
The following procedure describes how to configure the parameters of an AWS S3 Bucket account.
To configure an AWS S3 Bucket account:
1. In Name, type the name of the new account.
2. In Description, type a short description of the new account. Optional.
3. In Access Key, type the access key of the AWS S3.
4. In Secret Key, type the secret key of the AWS S3.
Disclaimer: XpoLog stores all passwords using industry-standard algorithms.
Editing an Account
You can modify the settings of any account.
To edit an account:
1. In the Address Book, select an account from the list of accounts, and click the Edit button.
The settings of the account are displayed.
2. Modify the parameters of the account, as required. See a detailed explanation of the parameters for each type of account in Address
Book.
3. Click Save.
The account settings are updated in the system.
Enabling/Disabling an Account
You can enable a disabled SSH/Remote XpoLog account, or disable an enabled SSH/Remote XpoLog account.
To enable a disabled account:
In the Address Book, select an account from the list of accounts, and click the Enable button.
The account becomes enabled.
To disable an enabled account:
In the Address Book, select an account from the list of accounts, and click the Disable button.
The account becomes disabled.
Removing an Account
You can remove from the XpoLog address book an account that you no longer require.
To remove an account:
1. In Address Book, select an account, and then click the Delete button.
A Delete confirmation box is displayed.
2. Click OK.
The account is deleted from the Address Book.
Verifying an Account
You can verify any account in the system.
To verify an account:
1. In the Address Book, select an account from the list of accounts, and click the Verify button.
The Account Verification dialog box opens, and verification begins.
2. When verification is complete, click the OK button.
Templates
A Template is a complete configuration set for a certain log type; its definition includes the log data pattern, filters, and metadata. Usage of
templates accelerates and automates the configuration process of logs added to or updated in XpoLog.
Administrators can perform the following actions related to templates:
View templates (see Viewing XpoLog Templates)
Save a log as a template (see Saving a Log as a Template)
Create a log based on a template (see Creating a Log Based On a Template)
Apply a template on multiple logs (see Applying a Template on Multiple Logs)
Import a template from another XpoLog system (see Importing a Template)
Export a template to another XpoLog system (see Exporting a Template)
Delete a template from the system (see Deleting a Template)
Viewing XpoLog Templates
You can view a listing of all templates defined in XpoLog.
To view the XpoLog templates:
In XpoLog Manager, select the Configuration > Templates menu item.
The Templates console opens, displaying an alphabetically sorted list of templates. The first letters of the template names in the list are
highlighted in the Filtering Area above the list.
Quickly Navigating to a Template
The Filtering Area above the lists of templates enables quick navigation to a template beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many templates.
To quickly navigate to a template:
Click a highlighted letter above one of the template groups.
The template names beginning with that letter are displayed.
Saving a Log as a Template
You can save any log in the system that is open in the Log Viewer as a template. The file structure, patterns, and customization (if it exists) of
the log are all saved in the template. This template is then available for applying to other logs in XpoLog.
To save a log as a template:
1. In XpoLog Manager left pane, in the Folders and Logs tree, select any log in the system.
The log records are displayed in the Log Viewer.
2. Select the Configuration > Save as Template menu item.
The Save Template page is displayed.
Note: A log must be selected before selecting Configuration > Save as Template. Otherwise, an error occurs.
3. Select one of the following:
a. Overwrite an existing template:
i. Select an existing template from the templates list.
Note: the template will be overwritten by the configuration of the log that is currently being saved as the template. The
template's Name and Description will remain as before.
b. Create a new template:
i. Type a Name for the template
ii. Type a Description for the template
iii. Type a Unique Identifier for the template - it is mainly used when XpoLog receives data with a 'Type'; XpoLog then applies the
template's configuration automatically to the received data if the template's Unique Identifier matches the received type
(in XpoLog Listeners, for example).
4. Click Save.
The Templates page opens, displaying the new template name on the templates list.
Templates Advanced Configuration:
1. Include template while running wizard
Determines whether to match this template to detected logs during the add logs directory process
2. Use template's validation while running wizard
If checked, the data from the log file will be matched to the template's pattern in addition to matching the log file name
If not checked, only the log file name will be matched to the template
3. Ignore file's name expression
If checked, the log file name in the template will be ignored and the template will be matched to the log file by either searching for a text in
the header of the file, or applying an expression that will "replace" the log file's name
Creating a Log Based On a Template
You can create a log from any template defined in XpoLog.
To create a log based on a template:
1. Open the list of templates in the system (see Viewing XpoLog Templates).
2. In the Templates list, click the Create Log link to the right of the template from which you want to create a log.
The Edit Log page opens, with all the details of the template, except for its name.
3. In Log Name, type a name for the new log.
4. If necessary, click Next to edit the configuration of the newly created log.
5. Click Save.
The log configuration is saved, and the log name appears under the Folders and Logs tree in the left navigation pane of the Log Viewer.
Applying a Template on Multiple Logs
You can update the configuration of multiple logs that exist in XpoLog, based on a single template. For example, you can add a column to 10 logs
of the same type residing in your XpoLog system, by creating or updating a single template and applying it to all the logs. For logs whose
names share common characters, this can be done by specifying in the log name the common characters of the logs' names, and an
asterisk as a placeholder for the uncommon characters. For example, to apply a template to logs with names beginning with
access, you can enter the log name access*. Alternately, and especially when the log names do not share common characters, you can select the
checkbox of each log name on which to apply the template.
To apply a template on multiple logs:
1. Open the list of templates in the system (see Viewing XpoLog Templates).
2. Click apply template on logs near the template that you want to apply on multiple logs.
The Specify Template Target Logs page appears, for specifying the logs on which to apply the template.
3. Do one of the following:
Select the Specify log name option, and in Log Name, type the name of the log or a name that can represent a group of logs (for
example, access*).
Select the Select folders and logs option, open the Folders and Logs tree to show the relevant logs, and select the checkboxes of the
logs on which to apply the selected template.
4. Click the Apply button.
The configuration of the selected logs is updated according to the selected template.
Importing a Template
Templates can be imported from another XpoLog system. The names of imported templates appear in the list of templates on the Templates
page.
To import a template:
1. In XpoLog Manager, select the Configuration > Import Templates menu item.
The Import Templates page opens.
2. In Path, browse to the template archive file, and select the template ‘.zip’ file or type the Network URL to import, and then click
the next link.
The template is imported into XpoLog, and the Templates page opens, displaying its name in the templates list.
Exporting a Template
Templates can be exported as a Zip file to a template archive file, from where they can be imported to another XpoLog system.
To export a template:
1. In XpoLog Manager, select the Configuration > Export Templates menu item.
The Export Templates page opens.
2. Select the checkboxes of the templates to export, and then click the export link.
3. Download and save the ‘.zip’ file in a template archive file.
Deleting a Template
You can delete from XpoLog a template, provided that logs in the system are not based on this template.
To delete a template from XpoLog:
1. Open the list of templates in the system (see Viewing XpoLog Templates).
2. Click Delete near the template that you want to delete.
The Operation Verification page requests confirmation of the template removal.
3. Click OK.
The template is removed from XpoLog and no longer appears on the list.
Tasks
Tasks are operations that can be activated by XpoLog in a scheduled or manual manner. The Tasks console presents all the tasks that are
available in XpoLog, and enables creating, duplicating, modifying, executing, and removing tasks.
To open the Tasks console:
In the XpoLog Manager menu, select Tools > Tasks.
The Tasks console opens.
Adding a Task
The following types of tasks can be added to XpoLog:
Add Logs Directory – see Adding an Add Logs Directory Task
Batch Execution – see Adding a Batch Execution Task
Collection to Database – see Adding a Collection to Database Task
Email – see Adding an Email Task
JMS Message – see Adding a JMS Message Task
SNMP Trap – see Adding an SNMP Trap Task
SSH Execution – see Adding an SSH Execution Task
URL – see Adding a URL Task
Adding a Batch Execution Task
A Batch Execution task executes a script or program according to the defined configuration – name of the program or script, arguments,
environment variables, and working directory.
To add a Batch Execution task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the Batch Execution option, and then click Continue.
The Batch Execution Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Program/Script Path, type the name of the program/script to be executed by the task.
6. In Arguments, type the arguments needed for the program/script to run, separated by spaces.
7. In Environment Variables, type the environment variables needed for the program/script to run; optional.
8. In Working Directory, type the name of the directory from which the program/script should be run; optional.
9. In Output Target File, type the path to the file where output of the program/script execution is to be written; optional.
10. Select the Add Optional Params checkbox to include in the output file the date, account name, host name, and username.
11. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
12. Click Save.
The Batch Execution task is saved.
Adding a Collection to Database Task
A Collection to Database task exports data from an XpoLog log into a specified database.
To add a Collection to Database task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the Collection to Database option, and then click Continue.
The Collection to Database Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Collected Log, click Select log, and in the Select Log to Collect window that opens, select a log to export to a database.
6. In Data Filter Query, type a query for filtering the collected events; events that do not satisfy the query are filtered out.
7. In Connection Details, select the Database account to use, or click New to define a new Database account (see Creating an Account).
8. Enter the table name into which data is to be exported. Under 'database advanced settings', you can see the CREATE TABLE and
INSERT statements that will be used.
9. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
10. Click Save.
The Collection to Database task is saved.
Adding a JMS Message Task
A JMS message task sends a JMS message using a JMS account. You can automate task execution by configuring a scheduler on it. XpoLog will
run the task based on the scheduler configuration.
To add a JMS Message task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the JMS Message option, and then click Continue.
The JMS Message Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Connection Details, select the JMS account to use, or click New to define a new JMS account (see Creating an Account).
6. In JMS message, type the JMS message to send.
7. In JMS Topic name, type the name of the JMS topic that the message should be written to; optional if JMS Queue name is specified.
8. In JMS Queue name, type the name of the JMS queue that the message should be written to; optional if JMS Topic name is specified.
9. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
10. Click Save.
The task is saved and appears in the Tasks console in the Tasks list.
Adding a LogSync Task
The LogSync task enables synchronizing a single log directory or multiple log directories from a remote XpoLog to another XpoLog, and
automating their execution using a scheduler. This feature should be used when you want to create an image of the content of a directory or
multiple directories (logs and/or binaries) on another XpoLog.
Configuration of a LogSync task requires specifying a connectivity account to the remote XpoLog machine; see Creating an Account -
Remote XpoLog.
Configuration of a LogSync task for synchronizing a single log directory or multiple log directories to XpoLog requires specifying the path
to an XML-based LogSync configuration file, which contains the configuration for the synchronization from multiple servers and/or directories to
XpoLog. For a detailed explanation of creating the XML-based LogSync configuration file, see below.
To add a LogSync task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the LogSync option, and then click Continue.
The LogSync Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. Under the Details tab:
a. Use a synchronization configuration file - in Configuration file path, type the full path name to the LogSync
configuration file or browse to it.
b. Create Configuration - while synchronization takes place, XpoLog can also create configuration for all
the logs which were synchronized, under the Folders and Logs panel, so that all logs are available for viewing
and searching in the XpoLog console. Select the 'create configuration' checkbox if you wish to automatically
add logs to XpoLog, and select/create a parent folder under which all synchronized logs from this task will be placed.
6. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
7. Click Save.
The Add LogSync task is saved.
Note:
When right-clicking the LogSync task, the following options are available:
1. Execute - Run the task.
2. Edit - Edit the task.
3. Duplicate - Duplicate the task.
4. Delete - Delete the task.
5. Force Log Sync (All) - Force the LogSync task to re-sync all files that were deleted from the repository but still exist on the remote
sources related to this task. The re-sync will take place during the next task execution.
6. Force Log Sync (Checksum mismatch) - Force the LogSync task to re-sync only files for which a checksum mismatch was found
between the repository and the source. The re-sync of these files will take place during the next task execution.
7. Reconstruct Configuration (displayed only if 'Create Configuration' is defined in the task) - Force the LogSync task to re-create missing
Folders and Logs configuration, based on the structure of the repository. The reconstruction will take place during the next task
execution.
Execution:
In case you are running an XpoLog cluster with more than one processor, an option will be presented to determine which of the processors
is assigned to the execution of the LogSync task.
LogSync XML-Based Configuration
The following table describes the general structure of SyncLogsConfiguration.
Tag Path – Mandatory/Optional – Description
SyncLogsConfiguration – Mandatory
SyncLogsConfiguration/SyncLogsRepository – Mandatory – The repository tag
SyncLogsConfiguration/SyncLogsNode – Mandatory – A single log synchronization node configuration
SyncLogsConfiguration/SyncLogsNode/Remote – Mandatory – Contains the Remote XpoLog Account details with which the directories will be synched.
SyncLogsConfiguration/SyncLogsNode/Remote/Account – Mandatory – Mandatory, as XpoLog should connect to a Remote XpoLog server (see Creating an Account)
SyncLogsConfiguration/SyncLogsNode/SyncLogsDirectory – Mandatory – Configure the remote path to synch with the local XpoLog
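Assembled from the structure table above, a minimal SyncLogsConfiguration file might look like the sketch below. The attribute values, paths, and account name are illustrative assumptions, and the Account tag's mandatory classKey is left as a placeholder (see Account Parameters below):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<SyncLogsConfiguration>
    <!-- Repository root that receives the synched content (illustrative path) -->
    <SyncLogsRepository repositoryDirectory="C:\xpolog\repository"/>
    <SyncLogsNode repositoryDirectory="node1">
        <!-- Remote XpoLog account to connect with; name must match the Address Book entry -->
        <Remote>
            <Account classKey="..." name="Remote XpoLog 1"/>
        </Remote>
        <!-- Remote path that is synched into the local repository -->
        <SyncLogsDirectory syncDirectory="C:\logs\"/>
    </SyncLogsNode>
</SyncLogsConfiguration>
```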
XML Reference
SyncLogsRepository Parameters
Parameter – Mandatory/Optional – Description – Values
repositoryDirectory – Mandatory – The name of the root folder that will be synced with the content of the remote sync log directories – String
timeToKeep – Optional – Time to keep the synched data on the repository – String
Example
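For instance, a SyncLogsRepository tag using both parameters might look like this sketch; the path and the retention value format are illustrative assumptions:

```xml
<SyncLogsRepository repositoryDirectory="C:\xpolog\repository" timeToKeep="30d"/>
```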
CheckSum Parameters
Parameter – Mandatory/Optional – Description – Values
algorithm – Mandatory – The algorithm to use to calculate the checksum – MD5/SHA-1
enabled – Optional – Determines whether to activate checksum – true/false
interval – Mandatory – The frequency at which the checksum will be calculated – m=minutes, h=hours, d=days
retryAttempts – Mandatory – The number of times that XpoLog will try to re-synchronize a file in case of a checksum validation failure before alerting – number
mailRecipients – Optional – The list of recipients that will be alerted in case of a checksum validation failure – semicolon-separated list of email addresses
* The Checksum definition can be placed inside a specific SyncLogNode tag to be applied on that specific node, or inside
the SyncLogsConfiguration tag to be applied globally on all nodes
Example:
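A sketch of a checksum definition using the parameters above; the CheckSum tag name follows the table title, and all values are illustrative assumptions:

```xml
<!-- Global placement: inside SyncLogsConfiguration; per-node placement: inside a SyncLogNode -->
<CheckSum algorithm="MD5" enabled="true" interval="10m"
          retryAttempts="3" mailRecipients="admin@example.com;ops@example.com"/>
```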
SyncLogNode Parameters
Parameter – Mandatory/Optional – Description – Values
repositoryDirectory – Mandatory – The name of the folder within the repository into which the remote directory from this node will be synced – String (path)
key – Optional – The SyncLogNode unique key
Example
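For example, a node definition carrying both parameters might look like the sketch below; the SyncLogsNode tag name follows the structure table above, and the values are illustrative:

```xml
<SyncLogsNode repositoryDirectory="node1" key="node1-key">
    <!-- Remote account and SyncLogsDirectory definitions go here -->
</SyncLogsNode>
```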
Account Parameters
Parameter – Mandatory/Optional – Description – Values
classKey – Mandatory – The account class to use – String
name – Mandatory – The Remote XpoLog account name/ID to use – String (case sensitive)
Remote Account Example
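A sketch of a remote account reference; the classKey value is left as a placeholder, and the account name is an illustrative Address Book entry (note that it is case sensitive):

```xml
<Remote>
    <Account classKey="..." name="Remote XpoLog 1"/>
</Remote>
```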
SyncLogsDirectory Parameters
Parameter – Mandatory/Optional – Description – Values
syncDirectory – Mandatory – The directory on the remote XpoLog node that will be synchronized into the SyncLogNode repository directory – String (path)
repositoryDirectory – Optional – The name of the folder within the repository into which the remote directory from this node will be synced – String
directoriesToInclude – Optional – Define which directories to include in the log sync scan. Using a wildcard: Apps* includes all directories whose name starts with Apps. Using a regular expression: regexp:\d\d\d\d-\d\d\-\d\d includes all directories whose name is a date, for example 2013-11-26. – String
directoriesToExclude – Optional – Define which directories to exclude from the log sync scan. Using a wildcard: Apps* excludes all directories whose name starts with Apps. Using a regular expression: regexp:\d\d\d\d-\d\d\-\d\d excludes all directories whose name is a date, for example 2013-11-26. – String
filesToInclude – Optional – Define which files to include in the log sync scan. Using a wildcard: *.log,*.txt includes all files whose name ends with .log or .txt. Using a regular expression: regexp:\w+\.log includes all files whose name is constructed of word characters only and ends with .log, for example helloWorld.log. – String
filesToExclude – Optional – Define which files to exclude from the log sync scan. Using a wildcard: *.zip,*.gz excludes all files whose name ends with .zip or .gz. Using a regular expression: regexp:\w+\.tar\.gz excludes all files whose name is constructed of word characters only and ends with .tar.gz, for example helloWorld.tar.gz. – String
subdirsScanLevel – Optional – Number of sub-folder depth levels to scan – Number
timeInterval/timeIntervalUnit – Optional – The task will synchronize only files with a last-updated time that is within the specified time interval – years, months, weeks, days, hours, mins
Examples
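A sketch combining several of the parameters above; all paths and values are illustrative, and expressing timeInterval/timeIntervalUnit as two attributes is an assumption:

```xml
<!-- Sync recent .log/.txt files from Apps* directories, two sub-folder levels deep -->
<SyncLogsDirectory syncDirectory="C:\logs\"
                   repositoryDirectory="app-logs"
                   directoriesToInclude="Apps*"
                   filesToInclude="*.log,*.txt"
                   filesToExclude="*.zip,*.gz"
                   subdirsScanLevel="2"
                   timeInterval="7" timeIntervalUnit="days"/>
```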
Configuration Parameters on synchronized Logs
The configuration parameters are relevant only if the Create Configuration option is selected. XpoLog will scan the repository to add all relevant
logs.
It is possible to add the configuration tag either globally (above all the node tags) or individually, by
placing it inside a specific node tag.
Using Proxy XpoLog for Log Synchronization
It is possible to use a remote XpoLog as a proxy to synchronize logs from remote agents which are managed by a remote XpoLog and not directly
by the current XpoLog that executes the task.
To do so, add the account(s) of the remote XpoLog before the 'SyncLogsNode' tags.
The following example will synchronize logs from the directory 'C:\logs\' on the 'Agent 1' machine, where
'Agent 1' is not a direct agent of the current XpoLog, but an agent connected to 'Remote XpoLog Master1'
and 'Remote XpoLog Master2':
It is also possible to define the proxy accounts inside an individual node:
The following example will synchronize logs from the directory 'C:\logs\' on the 'Agent 1' machine, where
'Agent 1' is not a direct agent of the current XpoLog, but an agent connected to 'Remote XpoLog Master1'
and 'Remote XpoLog Master2'. In addition, it will synchronize logs from the directory '/root/xpolog45/log' on
the 'Agent 2' machine, where 'Agent 2' is a direct agent of the current XpoLog (no proxy has been defined for
it):
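A sketch of the proxied layout described above; the placement of the master accounts before the SyncLogsNode tag follows the text, but the exact attribute set is an assumption and classKey values are left as placeholders:

```xml
<SyncLogsConfiguration>
    <SyncLogsRepository repositoryDirectory="C:\xpolog\repository"/>
    <!-- Masters that proxy the connection to 'Agent 1' -->
    <Account classKey="..." name="Remote XpoLog Master1"/>
    <Account classKey="..." name="Remote XpoLog Master2"/>
    <SyncLogsNode repositoryDirectory="agent1">
        <Remote>
            <Account classKey="..." name="Agent 1"/>
        </Remote>
        <SyncLogsDirectory syncDirectory="C:\logs\"/>
    </SyncLogsNode>
</SyncLogsConfiguration>
```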
Adding an Add Logs Directory Task
The Add Logs Directory task enables scanning a single or multiple logs directories to XpoLog, and automating their execution using a scheduler.
This feature should be used, as opposed to the Administration > Add Logs Directory menu item, when you want to add multiple logs directories
(as opposed to a single log directory), or to automate the execution of a single or multiple log directories (as opposed to manual execution).
Configuration of an Add Logs Directory task for adding a single logs directory to XpoLog is similar to the configuration of the Administration > Add
Logs Directory feature; it requires specifying the path to the logs directory. For a logs directory that resides on a remote machine (SSH or
Windows Network), it also requires specifying a connectivity account to the remote machine.
Configuration of an Add Logs Directory task for adding multiple logs directories to XpoLog requires specifying the path to an XML-based Scanner
Configuration file, which contains the configuration for the addition of multiple logs from multiple servers and/or directories to XpoLog. For a
detailed explanation on creating the XML-Based Scanner Configuration file, see Creating an XML-Based Scanner Configuration File.
To add an Add Logs Directory task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the Add Logs Directory option, and then click Continue.
The Add Logs Directory Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Parent Folder, click select and select the folder under which the new folders and logs are to be added.
6. Under Configuration, if the task's configuration is specified in an external file, select the Use a scanner configuration file option, and in
Configuration file path, either type the full path name to the scanner configuration file or browse to it, or click on Upload File to upload
the external file.
Otherwise, select the Scan a specific directory option, and proceed as described in the Scanning a Single Logs Directory section
below.
7. Optionally, configure Advanced Settings (see Configuring Advanced Settings).
8. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
9. Click Save.
The Add Logs Directory task is saved.
Scanning a Single Logs Directory
The Scan a Specific Directory option is used for bringing in a single logs directory from any of the following locations:
Local – The logs directory is on the same machine as XpoLog Center.
Windows Network – The logs directory is on a remote Windows machine.
Over SSH – The logs directory is on a remote UNIX machine (with SSH connecting protocol).
Adding a Windows Network or Over SSH logs directory requires connecting to the remote server using a connectivity account.
To scan a single logs directory:
1. In Location Type, select the location of the log directory to add: Local, Windows Network, or Over SSH.
2. For a Windows Network or Over SSH location, in Connection Details, select the authentication account required to connect to the server
where the selected directory resides, or click the new link to add an authentication account to the system (see Address Book).
3. In Directory Path, type the path to the directory that contains the log files
OR
Click Browse and in the System Files Browser (of the local, Windows Network, or Over SSH machine) that opens, expand the folders to
get to the desired directory, and then click Select to display the path to the logs directory in Directory Path.
4. In Collection Policy, select a predefined collection policy.
Creating an XML-Based Scanner Configuration File
You can create an XML file to build an environment for scanning many servers, and per server, scanning many directories. The path to this XML
file is placed in the Add Logs Directories Task, for adding multiple directories to XpoLog, and automating addition of directories.
DirectoryScanner XML General Structure
The following is the XML code of DirectoryScanner.
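As a sketch, assembled from the tag paths in the table below; paths, names, and the account attributes are illustrative assumptions, while the SSH classKey value is taken from the Account Parameters table:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<DirectoryScanner>
    <!-- Root folder placed above the scanned directories -->
    <ScannerNode name="Production Servers"/>
    <!-- Needed only when scanning a remote UNIX server over SSH -->
    <Account classKey="xpolog.eye.media.telsh.TelnetAccount" name="ssh-account"
             UserName="xpolog" useEncrypt="true"/>
    <ScanDirectories>
        <ScanDirectory scanPath="/var/log/apps"/>
    </ScanDirectories>
</DirectoryScanner>
```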
The following table describes the general structure of DirectoryScanner.
Tag Path – Mandatory/Optional – Description
DirectoryScanner – Mandatory
DirectoryScanner/ScannerNode – Mandatory – The root folder that will be placed above its scanned directories.
DirectoryScanner/Account – Optional – Mandatory if XpoLog should connect to a remote server - Windows / UNIX (see Creating an Account)
DirectoryScanner/ScanDirectories – Mandatory
DirectoryScanner/ScanDirectories/ScanDirectory – Mandatory – Contains the scanPath
DirectoryScanner/ScanDirectories/ScanDirectory/ScanConfiguration – Optional
DirectoryScanner/ScanDirectories/ScanDirectory/ScanConfiguration/ScanFileFilter – Optional
DirectoryScanner/ScanDirectories/ScanDirectory/ScanConfiguration/ScanConfApplications – Optional
XML Reference
ScannerNode Parameter
Parameter Mandatory/Optional Description Values
name | Mandatory | The name of the root folder that will be placed above its scanned directories. Leave the name empty to create all sub-directories under the parent folder with their original name from the source server. | String
Example
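The example block did not survive this rendering; a hypothetical sketch (the folder name is illustrative):

```xml
<!-- A root folder named "Production Servers" is created above the scanned directories -->
<ScannerNode name="Production Servers"/>

<!-- Empty name: sub-directories are created under the parent folder with their original names -->
<ScannerNode name=""/>
```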
Account Parameters
Parameter Mandatory/Optional Description Values
name | Mandatory | The account name | String
useEncrypt | Mandatory | Indicates whether or not the account password will be encrypted | Boolean
isPublicKey | Mandatory (SSH Only) | If authentication is done by private key, should be TRUE, and a path should be specified under privateKeyPath (see the following parameter). If user/password are used, it should be FALSE. | Boolean
privateKeyPath | Optional (SSH Only) | The path to the key, if authentication is done by private key |
isSystemAccount | | Indicates whether or not the account is a system account | Boolean
isScriptAPI | | Indicates whether or not the account is Script API | Boolean
isSSH | | Indicates whether or not the account is SSH | Boolean
isEditable | | Indicates whether or not the account can be edited in the XpoLog Address Book | Boolean
isCertificate | | Indicates whether or not the account uses a certificate | Boolean
description | Optional | Description of the account |
classKey | Mandatory | Windows: xpolog.eye.media.auth.win.WinAuthenticationAccount; SSH: xpolog.eye.media.telsh.TelnetAccount |
certificateID | Optional | The ID of the certificate, if the account uses a certificate (see isCertificate). | String
UserName | Mandatory | The username that the account uses to connect | String
isDefault | | | Boolean
Port | Mandatory (SSH Only) | The port that will be used to establish the connection to the remote data source | Numeric
TYPE_SCP_SFTP | Optional (SSH Only) | Indicates if the SSH account will use SCP or SFTP (default) protocol | String
Password | Optional | The password that the account uses to connect. Optional only if the SSH account uses a Public/Private key | String
NetAddress | Mandatory | The IP/hostname of the remote data source used in the account | String
Note 1: If a remote data source is scanned, an account to that source should be specified (it can be verified after execution under XpoLog >
Tools > Address Book). If an account for the specified machine already exists, XpoLog will automatically use it.
Note 2: If XpoLog is running on a Windows machine, it is recommended to configure a service account in the Windows Services panel; all
Windows network logs can then be scanned as local, without specifying an account in the ScannerNode (the path may be
\\\$\...).
Windows Account Example
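The original example is missing from this rendering; a hypothetical sketch, with every value illustrative (note that no id attribute is used):

```xml
<Account name="winScanAccount"
         classKey="xpolog.eye.media.auth.win.WinAuthenticationAccount"
         UserName="MYDOMAIN\scanuser"
         Password="changeMe"
         NetAddress="winhost01"
         useEncrypt="true"/>
```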
SSH Account Example
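The original example is missing from this rendering; a hypothetical sketch of an SSH account using username/password authentication, with every value illustrative:

```xml
<Account name="sshScanAccount"
         classKey="xpolog.eye.media.telsh.TelnetAccount"
         UserName="xpolog"
         Password="changeMe"
         NetAddress="linuxhost01"
         Port="22"
         isSSH="true"
         useEncrypt="true"/>
```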
Note: You should not use an id parameter in the account line in the scanner XML. If an account already exists in XpoLog, it will be matched and
re-used based on the NetAddress and Name. If the account does not exist, it will be created during the scanner execution.
ScanDirectory Parameter
Parameter Mandatory/Optional Description Values
scanPath Mandatory The full path to the directory that is to be scanned Path
Examples
(Windows Local)
(Windows Network)
(UNIX Local / Over SSH)
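The example values are missing from this rendering; illustrative scanPath values for each case might look like:

```xml
<!-- Windows Local -->
<ScanDirectory scanPath="C:\AppServer\logs"/>

<!-- Windows Network -->
<ScanDirectory scanPath="\\winhost01\logs"/>

<!-- UNIX Local / Over SSH -->
<ScanDirectory scanPath="/var/log/appserver"/>
```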
ScanConfiguration Parameters
Parameter Mandatory/Optional Description Values
condenseLogsTree | Optional | A "true" value indicates that folders containing only one sub-folder and without logs will be omitted from the Folders and Logs tree. | Boolean
directoriesToHide | Optional | A comma separated list of name expressions of folders that will not be added to the Folders and Logs tree; their sub-folders/logs will be added. |
fileSuffixesToIgnore | Optional | Unite logs with different suffixes into one log type (advanced) |
numberOfThreads | Optional | The number of threads to be used as part of the scanning operation | Integer
removeEmptyNodes | Optional | In case there are no matching files under one of the Folders and Logs members, remove it from the Folders and Logs tree. | Boolean
subdirsScanLevel | Optional | The number of sub-directories to scan from the given directory. Default is unlimited; any number can be specified. | Integer or "Unlimited"
scanMethod | Optional | 0 = Use existing configuration (file names and content) and automatic matching. 1 = Use existing configuration (file names and content). 2 = Use existing configuration (file names only). | 0, 1, or 2
namePatternLogic | Optional | 0 = Capture each file separately (without name pattern). 1 = Unite files with similar names (apply name pattern automatically). 2 = Unite files with a similar suffix (apply name pattern only at the end of the file name). | 0, 1, or 2
filesToInclude | Optional | Define which files to include in the log scan. Using a wildcard: *.log,*.txt - include all files whose name ends with .log or .txt. Using a regular expression: regexp:\w+\.log - include all files whose name is constructed of word characters only and ends with .log, for example helloWorld.log |
filesToExclude | Optional | Define which files to exclude from the log scan. Using a wildcard: *.zip,*.gz - exclude all files whose name ends with .zip or .gz. Using a regular expression: regexp:\w+\.tar\.gz - exclude all files whose name is constructed of word characters only and ends with .tar.gz, for example helloWorld.tar.gz |
directoriesToExclude | Optional | Define which directories to exclude from the log scan. Using a wildcard: Apps* - exclude all directories whose name starts with Apps. Using a regular expression: regexp:\d\d\d\d-\d\d-\d\d - exclude all directories whose name is a date, for example 2013-11-26 |
directoriesToInclude | Optional | Define which directories to include in the log scan. Using a wildcard: Apps* - include all directories whose name starts with Apps. Using a regular expression: regexp:\d\d\d\d-\d\d-\d\d - include all directories whose name is a date, for example 2013-11-26 |
templatesToUse | Optional | The scan task will add only logs which were matched to one of the comma separated list of specified templates. | String
namePatternToApply | Optional | Automatically name the matched logs based on the given name pattern. Allowed identifiers are: [PARENT_FOLDER n] - the name of the n-th parent folder of the log; [CHILD_FOLDER n] - the name of the n-th child folder of the root folder; [APPLICATION] - the name of the log's application; [SERVER] - the name of the log's server; [LOG] - the current name of the log | String
timeZone | Optional | Set the specified time zone on all matched logs | String
onlineLogsApplication | Optional | Comma separated list of application name(s) that the online logs will be tagged to once created | String
assignedCollectionPolicy | Optional | The policy name of the collection policy that will be applied on the logs once created; if the parameter does not exist, the default policy will be automatically applied. | String
fileSuffixesToIgnore | Optional | Regular expression used to ignore part of the file names to define a name pattern | String
enableLogsIndex* | Optional | True – Online logs that are added under Folders and Logs will be indexed. False – Disable indexing. | Boolean
enableLogsAnalytics* | Optional | True – Online logs that are added under Folders and Logs will be analyzed by Analytics. False – Disable Analytics. | Boolean
addCollectors* | Optional | True – All the logs that are added by the scanner task will be collected; the default policy will be applied, unless a specific policy is specified. False – Disables logs collection. | Boolean
enableCollectedLogsAnalytics* | Optional | True – Enables Analytics analysis on collected logs. The online logs state will be taken from the collection policy; relevant only when addCollectors="true". False – Disables Analytics analysis on collected logs. | Boolean
enableCollectedLogsIndex* | Optional | True – Enables indexing of collected logs. The online logs state will be taken from the collection policy; relevant only when addCollectors="true". False – Disables indexing of collected logs. | Boolean
collectedLogsApplication* | Optional | Comma separated list of application name(s) that the collected logs will be tagged to once created; relevant only when addCollectors="true". | String
* Properties relevant only to versions 4.4 and below
Example
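The example block is missing from this rendering; a hypothetical sketch combining a few of the parameters above (all values are illustrative, not defaults):

```xml
<ScanDirectory scanPath="/var/log/appserver">
    <ScanConfiguration subdirsScanLevel="3"
                       scanMethod="0"
                       namePatternLogic="1"
                       filesToInclude="*.log,*.txt"
                       filesToExclude="*.zip,*.gz"
                       directoriesToExclude="regexp:\d\d\d\d-\d\d-\d\d"
                       assignedCollectionPolicy="Default Policy"/>
</ScanDirectory>
```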
ScanFileFilter Parameters
Parameter Mandatory/Optional Description Values
timeInterval/timeIntervalUnit | Optional | The scan will add only log files with a last updated time that is within the specified time interval per log type. | years, months, weeks, days, hours, mins
maxNumberOfFiles | Optional | The maximum number of log files that are added per log type. | Integer
Examples
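The example blocks are missing from this rendering; a hypothetical sketch (the interval and file count are illustrative):

```xml
<ScanConfiguration>
    <!-- Only files updated in the last 7 days, at most 10 files per log type -->
    <ScanFileFilter timeInterval="7" timeIntervalUnit="days" maxNumberOfFiles="10"/>
</ScanConfiguration>
```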
ScanConfApplications Parameters
Parameter Mandatory/Optional Description Values
applicationNamePattern | Optional | The pattern that is used to extract the application name. An application will be created as part of the scan process. |
applicationGroupNamePattern | Optional | The pattern that is used to extract the application group name. An application group will be created as part of the scan process, to which all its sub-applications are tagged. |
Example
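The example block is missing from this rendering. A hypothetical sketch follows; the pattern syntax is assumed to reuse the name-pattern identifiers described for namePatternToApply above, and should be verified against the product's template files:

```xml
<ScanConfiguration>
    <ScanConfApplications applicationNamePattern="[PARENT_FOLDER 1]"
                          applicationGroupNamePattern="[PARENT_FOLDER 2]"/>
</ScanConfiguration>
```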
Templates:
Please use the following examples as templates and modify accordingly (multiple directories per host can be defined by adding more
entries; multiple hosts can be defined by adding multiple entries):
Example 1 (scanner_example_Windows_logs_account_on_xpolog_service): scanner_example_Windows_logs_account_on_xpolog_service.xml
Example 2 (scanner_example_Windows_logs_using_windows_network_account): scanner_example_Windows_logs_using_windows_network_account.xml
Example 3 (scanner_example_Linux_local_logs): scanner_example_Linux_local_logs.xml
Example 4 (scanner_example_Linux_remote_logs): scanner_example_Linux_remote_logs.xml
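The template files themselves are not reproduced here. As a hypothetical end-to-end sketch of a remote Linux scan, assembling the elements described above (host names, credentials, and paths are illustrative):

```xml
<DirectoryScanner>
    <ScannerNode name="Linux Servers"/>
    <Account name="sshScanAccount"
             classKey="xpolog.eye.media.telsh.TelnetAccount"
             UserName="xpolog"
             Password="changeMe"
             NetAddress="linuxhost01"
             Port="22"
             isSSH="true"
             useEncrypt="true"/>
    <ScanDirectories>
        <ScanDirectory scanPath="/var/log/appserver">
            <ScanConfiguration filesToInclude="*.log" subdirsScanLevel="2"/>
        </ScanDirectory>
        <ScanDirectory scanPath="/opt/tomcat/logs"/>
    </ScanDirectories>
</DirectoryScanner>
```

Additional ScanDirectory entries scan more directories on the same host; additional DirectoryScanner files (or Account/ScanDirectories pairs, per the product templates) cover more hosts.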
Adding an Email Task
An Email task sends an email alert to preconfigured addresses. For example, you can configure a Keep Alive email task that sends an email
every hour to show that XpoLog is up and running. You can automate task execution by configuring a scheduler on it. XpoLog then runs the task
based on the scheduler configuration.
You can send a log as an attachment to the email alert, in either of the following export formats:
Together with its configuration, thus enabling future import into XpoLog
Transformed into an XML, CSV, or Tab delimited file.
A common use case for an Email task is testing the mail settings of XpoLog. The steps for adding such a test are the same as the steps below,
the only difference being that step 11 is not needed; simply Save and Execute the task, then check whether an email was received properly. If an
email was not received, an indication of the error may be found in the XpoLog system log.
Note: Ensure that your SMTP server is configured in XpoLog before creating Email Tasks and sending emails (see Settings).
To add an email task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the Email option, and then click Continue.
The Email Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. Open the Details tab.
6. In From, type the account from which the email will be sent.
7. In To, type the account to which the email will be sent.
8. In Subject, type the subject of the email.
9. In Body, type the body of the email; optional.
10. In Data Export Format, select one of the following for the format of the email attachment:
Send an email alert without any data attached.
Attach the log together with its configuration, to enable future import.
Transform the data and attach to mail as XML, CSV or TAB Delimited.
11. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
12. Click Save.
The Email task is saved.
Adding an SNMP Trap Task
An SNMP Trap task generates an SNMP trap.
To add an SNMP Trap task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the SNMP Trap option, and then click Continue.
The SNMP Trap Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Connection Details, select the SNMP account to use, or click New to define a new SNMP account (see Creating an Account).
6. In SNMP Trap OID, type the OID of the SNMP trap; mandatory only for SNMP version 2 account.
7. In SNMP Community, type the name of the target community; optional; default is Public.
8. Specify the SNMP trap variables. See the Specifying SNMP Trap Variables section below.
9. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
10. Click Save.
The task is saved and appears in the Tasks console in the Tasks list.
Specifying SNMP Trap Variables
You can add SNMP trap variables to the SNMP trap. You can either add a predefined SNMP, or customize a new variable of one of the following
available types:
Trap Time – the time of the SNMP trap
Log Time – the time of the event
Time – the value of a date column from the event
Text – free text or the value of any column from the event. Specify the column name in square brackets.
Unsigned Integer – the value of a number column from the event
Status – the status of the event
IP Address, Integer 32, Unsigned Integer 32, Counter 32, Counter 64, Gauge 32, SMI Address, TCP Address, Time ticks, UDP
Address – the value of a column from the event. Choose the type matching the data in the event, i.e. Integer 32 for a numeric value and
IP Address for an IP address.
To add an SNMP trap variable:
1. In the SNMP Trap Task Details tab, click Add new variables .
2. Select the Select a predefined variable option, and in the adjacent dropdown list, select the variable.
OR
Select the Create a custom variable option, and customize the following details for the new variable:
In OID, type the OID of the variable.
In Name, type the name of the variable.
In Description, type a description of the variable.
In Type, select the type of the variable.
In Message, type the message to be sent in the trap. Optionally, include in this dynamic field placeholders for data from the log. The
available placeholders are determined by the variable type, as follows:
For a Trap Time variable, specify the date format in the Message column, or leave empty for default date format (MM/dd/yyyy
HH:mm:ss).
For a Log Time variable, specify the date format in the Message column, or leave empty for default date format (MM/dd/yyyy HH:mm:ss).
For a Time variable, specify the column name in square brackets and add the date format after the column name (i.e: [Date;MM/dd/yyyy])
or leave empty for default date format (as configured in the log).
For a Status variable, specify the status of the event.
For other variables, specify the column name in square brackets.
3. Click Add.
The new variable is added to the Existing Variables list.
Adding an SSH Execution Task
An SSH Execution task opens an SSH connection to a remote machine and executes a command or set of commands.
To add an SSH Execution task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the SSH Execution option, and then click Continue.
The SSH Execution Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Connection Details, select the account of the SSH host on which the program/script is to be
executed, or click New to define a new SSH account (see Creating an Account).
6. In Program/Script Path, type the name of the program/script to be executed by the task.
7. In Arguments, type the arguments needed for the program/script to run, separated by spaces; optional.
8. In Environment Variables, type the environment variables needed for the program/script to run; optional.
9. In Output Target File, type the path to the file where output of the program/script execution is to be written; optional.
10. Select the Add Optional Params checkbox to include in the output file the date, account name, host name and username.
11. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
12. Click Save.
The SSH Execution task is saved.
Adding a URL Task
A URL task opens a URL to a remote Web server and parameters can be passed as part of the URL to that remote server.
To add a URL task:
1. Open the Tasks console (see Tasks), and click the New Task button
OR
In the XpoLog Manager homepage, under More Actions, click Add Task.
2. In the page that opens, select the URL option, and then click Continue.
The URL Task page opens.
3. In Name, type the name of the new task.
4. In Description, type a description of the new task; optional.
5. In Connection Details, select the HTTP account in which the program/script is to be executed, or click New to define a new HTTP
account (see Creating an Account).
6. To automate task execution, open the Schedule tab, and configure the scheduler as described in Scheduling a Task.
7. Click Save.
The task is saved and appears in the Tasks console in the Tasks list.
Scheduling a Task
Each task is made up of its definition and scheduler. You can schedule a task to automatically run on a daily, weekly, or monthly basis.
Tasks that are only to be manually operated do not have to be scheduled (the default).
Scheduling Daily Running of a Task
You can configure a scheduler to run a task at a set time every day, or at a specified frequency during the day – either throughout the day,
beginning at a certain hour, ending at a certain hour, or in a specified time interval.
To schedule daily running of a task:
1. In Set Frequency, select Daily.
Parameters are displayed for setting the time(s) to run the task.
2. Select one of the following two options:
Repeat every – In the adjacent textbox, type the frequency of running the task and select the appropriate unit of time. For example, 2
Hours. Optionally, select the Start at checkbox to schedule running the task from a set hour, and in the dropdown list, select that hour.
Optionally, select the Stop at checkbox to schedule running the task until a set hour, and in the dropdown list, select that hour.
Daily at – Select the exact time (HH:MM:SS) of running the task every day.
Scheduling Weekly Running of a Task
You can configure a scheduler to run a task on set day(s) every week, and at a set time or frequency on those days.
To schedule weekly running of a task:
1. In Set Frequency, select Weekly.
Parameters are displayed for setting the day(s) of the week, and the time(s) on those days to run the task.
2. Select the checkboxes of the days on which you want the task to automatically run.
3. Define the time(s) to run the task during those days (see step 2 of the Scheduling Daily Running of a Task section).
Scheduling Monthly Running of a Task
You can configure a scheduler to run a task on specific months or every month, on a specific day on those months or every day of those
specified months, and at specific hours or frequencies on those specified days.
To schedule monthly running of a task:
1. In Set Frequency, select Monthly.
Parameters are displayed for selecting the month(s), day of the month, and time(s) on those days to run the task.
2. Select the checkboxes of the months on which you want the task to automatically run.
3. In the dropdown list, select Every Day (the default) or a specific day of the month.
4. Define the time(s) to run the task during those days (see step 2 of the Scheduling Daily Running of a Task section).
Disabling Scheduling
You can disable scheduling of a task. Such tasks will only be able to run when manually operated.
To disable scheduling:
In Set Frequency, select Never.
Deleting a Task
You can delete a task that you no longer want to run in the system.
To delete a Task:
1. Open the Tasks console (see Tasks), and either
Right-click a task and in the menu that appears, click Delete
OR
Select a task from the Tasks list, and click the Delete button.
The delete confirmation box opens.
2. Click Yes.
The task is deleted from the system, and is removed from the Tasks list.
Duplicating a Task
You can create a new task based on an existing task by duplicating it, giving it another name and description, and modifying parameters in the
tabs, as required.
To duplicate a Task:
1. Open the Tasks console (see Tasks), and either
Right-click a task and in the menu that appears, click Duplicate
OR
Select a task from the Tasks list, and click the Duplicate button.
The duplicated task's page opens. The parameters in the Details and Schedule tabs are configured as in the duplicated task.
2. In Name, type the name of the new task.
3. In Description, type a description of the new task; optional.
4. Modify Details and/or Schedule tab parameters, as required.
5. Click Save.
The task is saved.
Editing a Task
You can edit a task.
To edit a task:
1. Open the Tasks console (see Tasks), and either
Right-click a task and in the menu that appears, click Edit
OR
Select a task from the Tasks list, and click the Edit button.
The task's page opens.
2. Modify Name, Description, and the parameters in the Details and/or Schedule tab parameters, as required.
3. Click Save.
The task is updated.
Filtering the Tasks Console
You can filter the Tasks Console by task name, type, and/or description, by typing characters in the corresponding textboxes under the Name, Ty
pe, and/or Description columns.
Manually Executing a Task
You can manually execute a task, regardless of whether it is automated.
To execute a task:
Open the Tasks console (see Tasks), and either
Right-click a task and in the menu that appears, click Execute
OR
Select a task from the Tasks list, and click the Execute button.
The task is executed.
Settings
Installing and Updating Your XpoLog License
Users require a valid product license to run XpoLog. An XpoLog license is dedicated per installation server, meaning that you require a unique
license for each server on which XpoLog runs. A license must be updated when it expires or when you want to change license settings, such as
increasing the data volume handled by XpoLog. From the Settings>License menu item, you can view your license settings, update your license,
or generate a server key for a new license.
Generating a Server Key
A unique license must be applied on each deployment of XpoLog. This requires generating a server key and sending it to XpoLog. XpoLog then
creates and sends to you a license file, which you can install on the server.
To generate a server key
1. In the XpoLog Manager menu, select Settings > License.
The XpoLog Center License page opens.
2. Click generate server key.
A server key is generated for the server. Note that when running XpoLog as a cluster, a key for the entire cluster is generated on any one of
the nodes (there is no need to generate a key per node).
3. Copy and paste the server key into an email to XpoLog requesting a license for the deployment, and then click OK to close the Server Key
box.
Installing a License
You can install a license onto your server from the License page, by either uploading the license file into XpoLog or copying the text of the license
file into the License File textbox.
To install a license:
1. In the XpoLog Manager menu, select Settings > License.
The XpoLog Center License page opens.
2. In the Update License section, in License File, click Browse to select the path to the license file that you want to upload.
Alternatively, in License Text, paste the complete license text.
3. Click save.
The license is updated.
Configuring General Settings
Saving XpoLog Configuration
Configuring Mail Settings
Configuring Connection Policies
Configuring Advanced General Settings
Configuring General Security Settings
In the Security tab of the General settings page in XpoLog Manager, you can configure the following security settings:
Activate security – By default, security is not activated in XpoLog, and you are not required to log in with a username and password.
You can activate XpoLog’s security mechanism by selecting this option, so that you are redirected to XpoLog’s login page, and
will be required to enter a username and password (default: admin, admin) in order to access XpoLog. In
addition, a new Security item will be added to the XpoLog Manager menu.
Session time out – By default, a session shuts down after 30 minutes of no use. You can set a different
length of time for the time out.
Login URL – the URL to which users are redirected for login.
The Security tab also provides several different predefined authentication types:
XpoLog Realm – Usernames and passwords are managed internally by XpoLog. Any user that logs in can change their username and
password in User General Settings in the Security menu.
LDAP – Active Directory
Siteminder
WebSeal
Remote User
Note: In case your company uses a different authentication type, contact our support team for further assistance.
To configure security settings:
1. In the General Settings console, open the Security tab.
2. Select the Activate Security checkbox to require login to XpoLog with a username and password.
3. In Session time out, select the number of minutes of inactivity before XpoLog closes. Default: 30 minutes
4. In Login URL, if your organization has an external mechanism that it wants to use for validating login, type the URL; otherwise, leave the
default URL.
5. Under Authentication, in the Available Types list, select a type, and then click Add.
The selected type appears in the Selected Types list. For LDAP, SiteMinder, and WebSeal, click the configuration link to the right of
Selected Types, and complete the settings page that appears.
For details on LDAP authentication configuration, see Set up user authentication with LDAP.
For details on SiteMinder authentication configuration, see Use single sign-on (SSO) with XpoLog.
Note: You can remove a type from the Selected Types list by selecting it and clicking Remove.
6. Click Save.
The Security settings are saved.
If you activated security (in step 2), the login page appears, requiring you to enter username and password. Once you log in, a Security
tab appears in the menu.
Set up user authentication with LDAP
This section describes how to use Active Directory for authenticating users with the LDAP server.
The LDAP settings include:
General
Initial context factory
Provider URL – the connection URL to the LDAP server (you can use several URLs to multiple LDAP
servers separated by a space).
Manager Settings (optional)
Manager Path – the manager DN for searching users
Manager password – the manager’s password
Search Settings
Root path – the path for starting to search users.
If there is a need to search users' information from multiple domains, it is required to enter [ALL DOMAINS] in the root path of
the LDAP configuration.
Search filter – how to search the users in the LDAP directory; the {0} is replaced with the username.
User path – full path of the user DN; the {0} is replaced with the username. For example:
uid={0},ou=people,cn=xplg
Unique id attribute – optional; which attribute of the user will be provided as the unique id of the user.
Display name attribute – optional; which attribute of the user will be provided as the display name of the user.
Further Settings
Group id pattern
Groups attribute
To configure Active Directory authentication:
1. In Provider URL, type the URL to the active directory server – ldap://ACTIVEDIRECTORYSERVER:389/ (for
several LDAP servers enter a space separated list of URLs)
2. In Search Filter, type sAMAccountName={0}. {0} is replaced with the username.
3. In User path, type USER_DOMAIN\{0}, where USER_DOMAIN is the domain of your users.
4. In Unique id attribute, type sAMAccountName.
5. In Display name attribute, type displayName.
6. In Groups attribute, type memberOf.
7. Click save.
The LDAP configuration is saved.
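Putting the steps above together, a typical Active Directory configuration might look like the following; the server name and domain are illustrative, and {0} is the placeholder XpoLog replaces with the username:

```text
Provider URL:            ldap://dc01.example.com:389/
Search Filter:           sAMAccountName={0}
User path:               EXAMPLE\{0}
Unique id attribute:     sAMAccountName
Display name attribute:  displayName
Groups attribute:        memberOf
```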
Use single sign-on (SSO) with XpoLog
This section describes how to configure XpoLog to work with your SSO solution for validating user authentication.
Configuring XpoLog to work with SSO requires that the XpoLog instance accessed via SSO is secured behind an HTTP proxy or web agent.
The HTTP proxy you configure is then responsible for handling authentication and is the only entity capable of communicating with XpoLog.
Active Directory
XpoLog expects that your user authentication is handled by a web proxy. The web proxy server must be configured to authenticate against the
external authentication system (for example, AD). Once a user has been authenticated by the proxy, the proxy must insert the authenticated user's
username as a REMOTE_USER header in all HTTP requests forwarded to XpoLog.
XpoLog accepts incoming HTTP requests which include a REMOTE_USER header from a trusted proxy. If the user in the REMOTE_USER header is
not currently authenticated by XpoLog, an authentication request is made to XpoLog via a trusted authentication endpoint the XpoLog process
provides. If REMOTE_USER is not provided in every request, the user is assumed to not be authenticated and will receive an
XpoLog login screen.
Note: If your proxy uses some other remote user header name besides REMOTE_USER, you can change the name of the header as described
below:
The settings include:
General
User header key - key used by the trusted authentication endpoint on authenticated users in the HTTP header (comma
separated list. For example: REMOTE_USER)
XpoLog uses the header key(s) to validate the user's authentication and to retrieve information regarding the user. If more than
one key is provided, XpoLog will use the keys one by one to try and retrieve the information.
Protected URLs - a list of the trusted authentication endpoint(s) which XpoLog will allow authentication from (comma separated
list, wild card supported).
Click save. The SSO configuration is saved.
Set up a proxy server
XpoLog SSO implementation supports most proxy servers. The proxy server must handle its own authentication and
must insert the authorized username into a REMOTE_USER (or equivalent) header for all HTTP requests it forwards
to XpoLog.
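As an illustration only (nginx is not mentioned by this documentation, and the server names, port, and Basic-authentication setup are all assumptions), a reverse proxy that authenticates users itself and forwards the username to XpoLog might look like:

```nginx
server {
    listen 80;
    server_name xpolog.example.com;

    location / {
        # The proxy performs HTTP Basic authentication itself
        auth_basic           "XpoLog";
        auth_basic_user_file /etc/nginx/htpasswd;

        # Forward the authenticated username to XpoLog as REMOTE_USER
        proxy_set_header REMOTE_USER $remote_user;
        proxy_pass       http://127.0.0.1:30303;
    }
}
```

Whatever proxy is used, the essential point is the same: it must block unauthenticated requests and inject the authenticated username into the header XpoLog is configured to trust.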
Site Minder
XpoLog's integration with SiteMinder supports a scenario where SiteMinder web agents sit in front of XpoLog. Users perform the
login operation directly against SiteMinder, and are then redirected to XpoLog. XpoLog validates the user's authentication and retrieves
the information from SiteMinder.
The SiteMinder settings include:
General
User header key - key used by SiteMinder on authenticated users in cases where information can be retrieved from the HTTP
header (comma separated list; for example: HTTP_SM_USER, HTTP_UID)
XpoLog uses the header key(s) to validate the user's authentication and to retrieve information regarding the user. If more than
one key is provided, XpoLog will use the keys one by one to try and retrieve the information.
Client cookie name - cookie name used by SiteMinder on authenticated users in cases where information can be retrieved
from a cookie (for example: SMSESSION)
XpoLog uses the cookie name to validate the user's authentication and to retrieve information regarding the user.
Protected URLs - a list of the protected SiteMinder web agent URLs which XpoLog will allow authentication from (comma
separated list, wild card supported).
Group header key - key used by SiteMinder in order to retrieve from the HTTP header information regarding the
authenticated user's group(s).
XpoLog uses the header key(s) to retrieve information regarding the user's group(s). If more than one key is provided, XpoLog
will use the keys one by one to try and retrieve the information.
Group id pattern - used if a specific value should be retrieved from the authenticated user's group.
User HTTP request key - key used by SiteMinder on authenticated users in cases where information can be retrieved directly
from the HTTP request (comma separated list; for example: HTTP_SM_USER, HTTP_UID)
XpoLog uses the request key(s) to validate the user's authentication and to retrieve information regarding the user. If more
than one key is provided, XpoLog will use the keys one by one to try and retrieve the information.
Click save. The SiteMinder configuration is saved.
Configuring Mail Settings
In order for XpoLog to send emails, you must allocate an SMTP mail server and configure it.
To configure mail settings:
1. In the General Settings console, open the Mail tab.
2. In SMTP Host, type the SMTP host address that XpoLog is to use to send emails.
3. In SMTP Port, type the port that the given SMTP host is listening on.
4. In System Email Address, type the default/system ‘From’ email address that is to be used when sending emails.
5. In Administrator Email Address, type the email address of XpoLog’s administrator, where system notifications such as disk
space, violation messages, and more, are to be sent.
6. If the SMTP requires authentication, select the Use SMTP Authentication checkbox. In this case, provide SMTP Username and SMTP
Password, and indicate whether or not to Use TLS/SSL.
7. To test that the mail settings are correct and usable, click the Test Mail Settings link, enter a valid email address to which a test
message should be sent, and click the Send Message button. If an error message appears, fix the relevant setting based on the error
message and run another test.
8. Click the Save button.
The mail settings are saved.
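The settings in steps 2-6 map onto a short Python smtplib sketch of what the "Test Mail Settings" check conceptually does (this is an illustration, not XpoLog's own implementation; the host, addresses, and credentials are placeholders):

```python
# Sketch: verify SMTP settings by sending a test message.
# All values passed in (host, port, sender, credentials) are examples.
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient):
    # sender corresponds to the System Email Address ("From")
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "XpoLog mail settings test"
    msg.set_content("If you received this, the SMTP settings are usable.")
    return msg

def send_test_mail(host, port, msg, username=None, password=None, use_tls=False):
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        if use_tls:
            smtp.starttls()                 # the "Use TLS/SSL" option
        if username:
            smtp.login(username, password)  # the "Use SMTP Authentication" option
        smtp.send_message(msg)
```

If sending fails, the raised smtplib exception plays the role of the error message described in step 7: fix the offending setting and retry.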
Use Case: GMAIL as SMTP server in XpoLog
In order to use your GMAIL as the mail server that XpoLog uses to send emails, please use the following settings:
1. In the General Settings console, open the Mail tab
2. In SMTP Host, type smtp.gmail.com
3. In SMTP Port, type 465
4. System Email Address - typically this is the default/system ‘From’ email address used when sending emails from XpoLog.
However, when using GMAIL, the ‘From’ email address will always be the email address of the GMAIL account you're using in these
settings. GMAIL does not allow a custom email address to be used as the ‘From’ email address.
5. In Administrator Email Address, type the email address of XpoLog’s administrator, where system notifications such as disk
space, violation messages, and more, are to be sent.
6. Check the Use SMTP Authentication check-box.
7. Provide SMTP Username and SMTP Password (your GMAIL user and password)
8. Check the Use TLS/SSL check-box
9. Click the Save button.
The GMAIL settings are saved.
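A minimal sketch of the Gmail values above in code (assuming Python's smtplib; the account credentials are placeholders, and port 465 requires an implicit-SSL connection via SMTP_SSL rather than STARTTLS):

```python
# Sketch: the Gmail settings from the steps above, collected into a dict.
import smtplib

def gmail_smtp_settings(username, password):
    return {
        "host": "smtp.gmail.com",  # step 2
        "port": 465,               # step 3: implicit-SSL port
        "username": username,      # step 7: your Gmail account
        "password": password,
    }

def send_via_gmail(settings, message):
    # Port 465 expects SSL from the first byte, hence SMTP_SSL
    with smtplib.SMTP_SSL(settings["host"], settings["port"], timeout=10) as smtp:
        smtp.login(settings["username"], settings["password"])
        smtp.send_message(message)
```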
Configuring Connection Policies
In the Connection Policies tab of the general settings, you can configure the default SSH account connection policy that is to be applied on any
SSH account in XpoLog. This determines the default behavior of SSH activity between XpoLog and remote UNIX machines.
To configure connection policies:
1. In the General Settings console, open the Connection Policies tab.
2. In Connection pool timeout interval, type the allowed period of connection inactivity before a pool is closed, selecting minutes or
hours as the unit of time. Type 1 and select Minutes to close the pool as soon as possible. Default: 5 minutes.
3. In User session timeout interval, type the allowed period of user inactivity before a log is closed, selecting minutes or hours as the
unit of time. Leave blank for unlimited inactivity period. Default: unlimited.
4. In Number of connections allowed to remote machine, type the allowed number of connections to a remote machine (default:
unlimited). Leave blank for an unlimited number of connections.
5. In Number of sessions allowed per connection, type the number of sessions that can be opened using the same connection.
Default: 7.
6. In Number of pooled sessions allowed, type the number of sessions that can be pooled. Leave blank to disable session pooling.
Note: A session pool is required only when a single remote server has a relatively high number of log types (>50). In that case, if
there are not enough allowed sessions to manage all of the server's logs in parallel, a pool optimizes the process. Furthermore, some
machines may block connectivity from XpoLog when it constantly opens sessions to collect data; in such cases a pool is
recommended. In most common cases this is not required.
7. Click Save.
The Connection Policies settings are saved.
Recommendation: In general, XpoLog requires 3 sessions to process a single log type from a remote server, so a limit of 10 connections (7
sessions each) provides 70 sessions, enough to process about 23 log types from a remote server, which should be sufficient for most cases.
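The recommendation above reduces to simple arithmetic (the 3-sessions-per-log-type figure is taken from the text; 10 connections of 7 sessions each give 70 sessions, i.e. roughly 23 log types in parallel):

```python
# Back-of-the-envelope capacity check for the connection-policy limits.
SESSIONS_PER_LOG_TYPE = 3  # stated XpoLog requirement per log type

def max_parallel_log_types(max_connections, sessions_per_connection):
    # Total sessions available, divided by sessions needed per log type
    total_sessions = max_connections * sessions_per_connection
    return total_sessions // SESSIONS_PER_LOG_TYPE

# 10 connections x 7 sessions = 70 sessions -> 23 log types in parallel
print(max_parallel_log_types(10, 7))
```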
Configuring Advanced General Settings
In the XpoLog Manager General settings Advanced tab, you can:
Specify the business hours and non-business hours - Define whether XpoLog should distinguish between business and non-business
hours, and specify the desired hours.
Upload custom functions – Import a JAR of custom functions defined by the XpoLog team.
User Time Zone Mode – set the policy that XpoLog uses to apply time zone for users entering the system from different locations.
To configure advanced settings:
1. In the General Settings console, open the Advanced tab.
2. Under Business Hours enable/disable whether XpoLog should distinguish between business and non-business hours and specify the
desired hours. This is used by XpoLog mainly for data monitoring.
3. Under Custom Functions, click here to upload the report functions JAR to XpoLog Center.
4. Under User Time Zone Mode, there are 3 options that can be used:
a. Apply the system time zone (default) -
XpoLog instances have a global, cross-system time zone. By default, the global time zone is taken from the machine that
XpoLog is running on. This time zone is the baseline, and all date and time selections/interpretations are based on it
(the user interface always refers to the system time zone – a last hour selection is interpreted as the last hour of the system time zone).
Each log source, by default, gets the system time zone, i.e., XpoLog assumes that the log data is written in the same time zone.
A log-source-specific time zone may be defined in the log regional settings configuration; if it differs from the
system time zone, XpoLog interprets the log records' timestamps in the system time zone.
b. Apply dynamic time zone -
By selecting this option, XpoLog tries to automatically retrieve the user's specific time zone from the user's client on login. If
the retrieved time zone of the user differs from the system time zone, the user automatically sees all date and time
selections/interpretations in the retrieved time zone rather than the system time zone.
Note: Even if this option is selected, all scheduled/system tasks are executed based on the system time zone.
c. Apply AppTags time zone -
By selecting this option, a specific time zone can be configured per AppTag. Each AppTag has its own
designated time zone, and if a user is allowed to see a specific AppTag's data, then that AppTag's time zone is
automatically applied, regardless of the user's location.
If a user is allowed to see multiple AppTags with the same time zone, then that time zone is applied; if the user is allowed to
see more than one AppTag with different time zones, then the system time zone is automatically used instead.
Note: Even if this option is selected, all scheduled/system tasks are executed based on the system time zone.
Saving XpoLog Configuration
In the General Settings tab, you can do the following:
Save the entire configuration used by XpoLog.
Change the ports used by XpoLog for HTTP/HTTPS.
Saving the XpoLog Configuration
It is recommended to save the entire configuration that XpoLog uses in a directory. This directory should be external to the
installation directory, and XpoLog should be granted full permissions on it. See Post Installation Recommendations
for a full explanation.
In cases where more than one XpoLog instance shares the same configuration, you also have the option of
specifying that XpoLog run in Cluster mode, so that the XpoLog instances are aware of each other. See XpoLog Cluster Installation for a full explanation.
To save the entire XpoLog configuration:
1. In the XpoLog General Settings console, open the General tab.
2. Select the Use external configuration directory checkbox, and in Configuration full path, type the absolute path to a directory that
XpoLog can use to store and manage its configuration and analysis. Example: C:/XpoLogConfig/.
3. If you want XpoLog to run in cluster mode, select the Cluster mode check-box. This is required only if XpoLog is installed in a cluster
with multiple instances.
4. If you want XpoLog to run as an agent, where ALL system activities are disabled, select the Agent Mode check-box. This is mainly
used for a remote XpoLog in Agent mode or for LogSync.
5. Click Save.
The XpoLog configuration is saved in the specified directory.
6. Restart XpoLog.
Changing the HTTP/HTTPS Ports
The default HTTP port is 30303; default HTTPS port is 30443. You can change the ports used by XpoLog for HTTP/HTTPS.
To change the HTTP/HTTPS port:
1. In the XpoLog General Settings General tab, in HTTP port, type the port to be used by XpoLog for HTTP, and/or in HTTPS port, type
the port to be used by XpoLog for HTTPS.
2. Click Save.
The ports configuration is saved.
Disabling HTTP/HTTPS Access to XpoLog
To disable HTTP/HTTPS access to XpoLog, you must edit the servlet container configuration that XpoLog uses (standalone installations
only; if XpoLog is deployed on a different application server, then this has to be done at the application server level):
1. Stop XpoLog
2. Go to XPOLOG_INSTALLATION_DIRECTORY/ServletContainer/conf/ and edit the file server.xml
3. Do the following:
a. To disable HTTP, comment out the line:
b. To disable HTTPS, comment out the line:
4. Save the modification and restart XpoLog.
Note: XpoLog will not be accessible on the disabled protocol and port (also consider modifying XpoLog agents account URLs if required).
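For a standalone installation, the lines referred to in step 3 are the Tomcat Connector elements in server.xml. A hedged sketch of what they typically look like (30303/30443 are the XpoLog defaults; the exact attribute set in your server.xml may differ, and the keystore values are placeholders):

```xml
<!-- HTTP connector: enclose it in comment markers like these to disable HTTP -->
<!--
<Connector port="30303" protocol="HTTP/1.1"
           connectionTimeout="20000" redirectPort="30443" />
-->

<!-- HTTPS connector: comment this element out instead to disable HTTPS -->
<Connector port="30443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true"
           keystoreFile="conf/keystore" keystorePass="changeit" />
```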
HTTPS Certificate in XpoLog
XpoLog is not shipped with a trusted certificate. This could leave you vulnerable, because the default certificate is the same in every
XpoLog download. Data encryption (HTTPS) can be easily used in XpoLog. Keep in mind that encryption with the default certificate is not
fully secure, and you're encouraged to replace it with a certificate from your organization's trusted CA.
For better security, replace the default certificates with certificates signed by a trusted CA. We strongly recommend using CA certificates
(note that a self-signed certificate is considered untrusted by users' browsers).
A standalone XpoLog installation runs on Tomcat; for more information about installing a certificate, please refer to
https://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html
Configuring SNMP Settings
In order for XpoLog to send SNMP Traps (mainly from the System Status Console), you must allocate an SNMP server and configure it. The XpoLog management information base (MIB) is provided below.
To configure SNMP settings:
1. In the General Settings console, open the SNMP tab.
2. In SNMP Host, type the SNMP host address that XpoLog is to use to send traps to.
3. In SNMP Port, type the port that the given SNMP host is listening on.
4. In Version, select Version 1 or Version 2 based on your SNMP server.
5. Check Proxy if needed (default: unchecked).
6. Click the Save button.
The SNMP settings are saved.
XpoLog Alert MIB
MIB Link
Download MIB file
MIB source
-- XpoLog Center System Event MIB
-- SNMP V2 Version
XPOLOG-CENTER-SYSTEM-ALERT-MIB
DEFINITIONS ::= BEGIN
IMPORTS
enterprises FROM SNMPv2-SMI
MODULE-IDENTITY FROM SNMPv2-SMI
DisplayString FROM SNMPv2-TC
OBJECT-TYPE FROM SNMPv2-SMI
NOTIFICATION-TYPE FROM SNMPv2-SMI;
---------------------------------------------------------------------------
--
-- XpoLog Module Identity
--
---------------------------------------------------------------------------
xpoLogMIB MODULE-IDENTITY
LAST-UPDATED "201508010000Z"
ORGANIZATION "XpoLog Ltd."
CONTACT-INFO "XpoLog Ltd. http://www.xpolog.com"
DESCRIPTION "SNMPV2 MIB Module for XpoLog Center System Alerts."
REVISION "201508010000Z"
DESCRIPTION "XpoLog Center System Alerts MIB"
::= { enterprises 45222 }
---------------------------------------------------------------------------
--
-- XpoLog Center Tree Structure
--
---------------------------------------------------------------------------
xpoLog OBJECT IDENTIFIER ::= { enterprises 45222 }
products OBJECT IDENTIFIER ::= { xpoLog 1 }
xpoLogCenter OBJECT IDENTIFIER ::= { products 1 }
xpoLogCenterSystemAlert OBJECT IDENTIFIER ::= { xpoLogCenter 1 }
---------------------------------------------------------------------------
--
-- XpoLog Center System Alert objects and structures
--
---------------------------------------------------------------------------
xpoLogCenterSystemAlertKey OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert key."
::= { xpoLogCenterSystemAlert 1 }
xpoLogCenterSystemAlertTime OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert time."
::= { xpoLogCenterSystemAlert 2 }
xpoLogCenterSystemAlertLevel OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert level."
::= { xpoLogCenterSystemAlert 3 }
xpoLogCenterSystemAlertSource OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert source."
::= { xpoLogCenterSystemAlert 4 }
xpoLogCenterSystemAlertType OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert type."
::= { xpoLogCenterSystemAlert 5 }
xpoLogCenterSystemAlertInstance OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert instance."
::= { xpoLogCenterSystemAlert 6 }
xpoLogCenterSystemAlertSubject OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The System Alert subject."
::= { xpoLogCenterSystemAlert 7 }
systemAlertNotification NOTIFICATION-TYPE
OBJECTS {
xpoLogCenterSystemAlertKey,
xpoLogCenterSystemAlertTime,
xpoLogCenterSystemAlertLevel,
xpoLogCenterSystemAlertSource,
xpoLogCenterSystemAlertType,
xpoLogCenterSystemAlertInstance,
xpoLogCenterSystemAlertSubject
}
STATUS current
DESCRIPTION "Notification when an XpoLog Center system alert is triggered."
::= { xpoLog 0 1 }
---------------------------------------------------------------------------
-- End of XPOLOG-CENTER-SYSTEM-ALERT-MIB DEFINITIONS
---------------------------------------------------------------------------
END
Configuring Connectivity to a Bug Tracking System
XpoLog provides integration with two common bug tracking systems: Bugzilla by Mozilla and JIRA by Atlassian. You can configure the connectivity
details to an available Bugzilla or JIRA bug tracking system that you have in your organization. This enables you to publish events in your log
viewer to the bug tracking system that XpoLog is connected to.
To connect XpoLog to a bug tracking system:
1. In the XpoLog Manager menu, select Settings > Bug Tracking System.
The Bug Tracking Systems page opens, displaying a list of bug tracking systems with connectivity configured in XpoLog.
2. Click the New Bug Tracking System button.
3. In the page that appears, select the bug tracking system to add: Bugzilla or JIRA, and then click Continue.
For Bugzilla, continue as in Configuring Connectivity to Bugzilla.
For JIRA, continue as in Configuring Connectivity to JIRA.
Note: Following configuration of a bug tracking system, it is recommended to verify its connectivity. See Verifying Bug Tracking System
Connectivity.
Configuring Connectivity to Bugzilla
To configure connectivity to Bugzilla:
1. In Name, type a name for the Bugzilla system that you are connecting to XpoLog; mandatory. Default: Bugzilla
2. In Description, type a short description of the Bugzilla system to which you are connecting XpoLog; optional.
3. In Bugzilla URL, type the URL of the Bugzilla bug tracking system. For example: http://BUGZILLA_HOST:8080/bugzilla; mandatory.
4. Click Save.
The new bug tracking system appears in the table in the Bug Tracking Systems page.
Configuring Connectivity to JIRA
To configure connectivity to JIRA:
1. In Name, type a name for the JIRA system that you are connecting to XpoLog; mandatory. Default: JIRA
2. In Description, type a short description of the JIRA system to which you are connecting XpoLog; optional.
3. In JIRA URL, type the URL of the JIRA bug tracking system. For example: http://JIRA_HOST:8080; mandatory.
4. In Port Name, type the name of the JIRA web service port; mandatory. Default: JirasoapserviceV2
5. Click Save.
The new bug tracking system appears in the table in the Bug Tracking Systems page.
Editing Bug Tracking System
You can modify the configuration of a bug tracking system, including its name, description, URL, or port name (for JIRA).
To modify a bug tracking system:
1. In the Bug Tracking Systems page, highlight a system in the table, and then click the Edit button.
The details of the bug tracking system are displayed.
2. Modify the parameter values, as required. See Configuring Connectivity to a Bug Tracking System.
3. Click Save.
The Bug Tracking Systems page is displayed, with the modified bug tracking system configuration updated in the table.
Removing a Bug Tracking System
You can remove from XpoLog a bug tracking system that you no longer require.
To remove a bug tracking system from XpoLog:
1. In the Bug Tracking Systems page, highlight a bug tracking system, and then click the Delete button.
A Delete Confirmation box is displayed, requesting confirmation of removal.
2. Click Yes.
The bug tracking system is removed from the table on the Bug Tracking Systems page.
Verifying Bug Tracking System Connectivity
You can verify the connectivity of any bug tracking system defined in XpoLog.
To verify connectivity:
1. In the Bug Tracking Systems page, highlight a bug tracking system in the table, and then click the Verify button.
The Bug Tracking System Verification box opens. It either informs of successful connectivity, or lists (in red) problems with connectivity,
and suggested recommendations.
2. In the Bug Tracking System Verification box, click OK.
The Bug Tracking System Verification box closes.
Adding Environment Tables
In the XpoLog Environment Settings page, you can define tables with variables that can be used throughout the XpoLog system. You can refer to
an environment table in any field by enclosing it in curly braces and preceding it with a $ sign, as follows: ${table name:value:key:key name}.
The advantage of using environment tables is that when the value of a variable changes, you only have to change it in the Environment Settings
page, instead of in every place that it appears in the XpoLog system. For example, you can add an environment table that contains the path to a
logs folder, so that for every log that you add, you only need to put the environment variable's name in the Path field, and not the full path. This
way, if the path to the logs folder changes, you only need to edit the environment variable's value in one place - on the XpoLog
Environment Settings page, thus ensuring that all the logs remain functional.
For example, if the logs are located under the directory "/opt/logs/", you can save an environment variable for that location using the name logs.home.
When you add logs to XpoLog, you can type ${logs.home} in the log path, instead of the actual path.
If the logs directory is transferred elsewhere, you can simply update the environment variable with the new path. All the logs which use ${logs.home} will automatically have the updated value.
As another example, if you have reports that should be generated based on 2 shifts (day shift and night shift), you can save an environment table
called “Shifts” in which you have 2 keys: 1of2 with the value 8-20, and 2of2 with the value 20-8.
When you run searches in the XpoLog Search Engine, you can type ${shifts:time:key:1of2} or ${shifts:time:key:2of2} in the search query,
instead of the actual time value.
If the value of the shifts is changed, you can simply update the environment table with the new time value. All the queries which use ${shifts:time:key:1of2} or ${shifts:time:key:2of2} will automatically use the updated value.
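The two usages above can be sketched with a small resolver (this is an illustrative re-implementation of the ${...} substitution described here, not XpoLog's actual code):

```python
# Sketch: resolve ${name} variables and ${table:value:key:keyname} table
# references in a string, mirroring the environment-table syntax above.
import re

def resolve(text, variables, tables):
    def repl(match):
        ref = match.group(1)
        if ":" in ref:
            # ${table:value:key:keyname} -> look up keyname in the named table
            table, _value, _kw, key = ref.split(":")
            return tables[table][key]
        return variables[ref]  # simple ${name} variable
    return re.sub(r"\$\{([^}]+)\}", repl, text)

variables = {"logs.home": "/opt/logs"}
tables = {"shifts": {"1of2": "8-20", "2of2": "20-8"}}

print(resolve("${logs.home}/app.log", variables, tables))
print(resolve("time in ${shifts:time:key:1of2}", variables, tables))
```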
To add an environment table:
1. In the XpoLog Manager menu, select Settings > Environment Tables.
The XpoLog environment settings page opens.
2. Click new environment table.
3. In the name and value fields that appear, type the name of the environment table and its value, and then click save.
Removing an Environment Table
You can remove any environment table that you no longer require in the system.
To remove an environment table:
1. In the XpoLog environment settings page, click the remove link adjacent to each environment table that you want to delete.
2. Click save.
The environment tables are removed from the system.
Viewing the Audit Log
XpoLog contains a comprehensive audit mechanism, which lists in a log every operation that users perform in XpoLog from login until logout.
From the Settings > Audit menu item in the XpoLog Log Viewer, you can run audit filtering of users' activities.
The available filters are:
Time Frames – for specifying the time frame of the audit view
Audit Types – for specifying the types of actions to view
General – for viewing a specific user's audit (User Name) or a general term, such as action type, component name, or action
description (General)
Note: Although not recommended, you can view the audit log on all data, without filtering.
To run audit filtering:
1. In the XpoLog Manager menu, select Settings > Audit.
The System Audit page opens.
2. Filter the Audit log by time frame, audit type, user name, or general term (see following sections for details).
3. Click generate.
The filtered audit log view is displayed in the Log Viewer.
Filtering by Time Frame
You can filter the audit log to show records from before or after a specific date, or between two specific dates. Alternatively, you can show records
from a time relative to the current time: from the last minutes, hours, or days; from the previous days, weeks, or months; or from a specified
number of days or hours ago, for a duration of a specific number of days or hours.
To show records from a specific time period:
Select the Dates limit option, and do one of the following:
Show audit records from a specific date – Select the show records that arrive after checkbox, and click calendar to select a
date from the calendar. Default: today's date.
Show audit records until a specific date – Select the show records that arrive before checkbox, and click calendar to select a
date from the calendar. Default: today's date.
Show audit records between two dates – Select the show records that arrive after and the show records that arrive before
checkboxes, and click calendar to select the range of dates from the calendar. Default: today's date.
To show records from a time period relative to the current date:
Select the show records option, and select one of the following options from the dropdown box:
Select from – Show records from a specific number of hours or days, for a specific number of hours or days.
Select from the last, and input the number of days, hours, or minutes – Show most recent records, as specified.
Select from the previous, and input the number of days, weeks, or months – Show previous records, as specified.
Filtering by Audit Type
You can filter the audit log view to show any or all of the following actions performed by users:
Logins/Logouts
View system components
Change system components
Note: At least one audit type should be selected.
To filter by audit type:
In the Audit Types section, select the checkboxes of the type of actions to include in the audit view: Logins/Logouts, View system
components, or Change system components.
General Filtering
You can filter the audit log view to show operations performed by a specific user, and/or that include a specific term.
To filter by user name or general term:
1. In the General section, in User Name, type a user name to generate an audit view of a specific user, or leave empty for audit view of all
users.
2. In General, type a specific term to include in the audit view only operations including this term.
Viewing XpoLog Version and System Patches
The Settings > About console displays the following:
Version Information: Includes the version of XpoLog Center installed on your computer, as well as the latest version available. If your
version is not the latest, you can download the latest version at www.xpolog.com
Support Information: Includes a link to the XpoLog Center support portal
Patches: Lists patches that have been made to the system, including their build, date of deployment, and description.
From this console, you can:
Access the XpoLog Center Support Portal, where you can manage system logs, and view detailed system information.
Export a system report.
Publish a patch to local and remote XpoLogs.
To view version and patch information:
In the XpoLog Manager menu, select Settings > About.
The About XpoLog Center page opens.
Accessing the Support Portal
The XpoLog Center Support Portal includes two tabs:
System Logs – includes detailed information on XpoLog logs
System Information – includes detailed system information, XpoLog information, basic information, and license information
To access the support portal:
In the About XpoLog Center console, click Open XpoLog Center Support Portal.
The Support portal opens.
Exporting the System Support Report
You can export a report of system support information to an XML file.
To export a system support report:
1. In the About XpoLog Center page, click the Export System Report link.
Support information is prepared for the report. This may take a few moments.
2. When the report is prepared, save the XML file.
Installing the Latest Trial Version
The About XpoLog Center console shows the installed XpoLog version, as well as the latest available version. If you do not have the latest
version, you can install our free 30-day trial version.
To install the latest trial version:
In the About XpoLog Center console, click XpoLog Website.
The XpoLog Center website opens in a separate window, from where you can download the trial version.
Publishing a Patch
You can distribute a patch to all XpoLog cluster nodes and remote XpoLog nodes that exist in your enterprise, from a single location.
To publish a patch:
1. In the About XpoLog Center page (Manager > Settings > About), click publish patch.
The Publish Patch page appears.
2. In Path, click Browse to select the XpoLog patch zip file to be published.
3. Select the nodes which you want to publish the patch to:
Current Node: the current XpoLog which the browser is now opened on
Cluster Nodes: the nodes which are part of the cluster that the browser is now opened on (appears only if a cluster is active).
Enterprise Nodes: select the remote nodes to which you want to publish the patch. You can click select all to select all nodes (appears
only if there are any remote enterprise accounts).
4. Click run.
The patch is applied on all the selected XpoLog nodes.
Log Viewer Settings
The Log Viewer Settings allow system administrators to configure a few parameters that impact the Viewer's behavior:
Log Viewer Records Per Page
Log viewer records per page: a comma-separated list of numbers which will be available in the viewer when opening a log. The default
values are 25, 50, 100, 250, 500, and they may be modified here.
Note that using a high number of records per page may impact browser performance.
Default number of records per page: The default number of log records which will be presented when opening a log in the viewer. This value
should be one of the available options in the 'log viewer records per page' list above.
Default Log Search Settings
Default Search: When checked, search and filter results are presented from the beginning of the logs (when unchecked, the result
is presented on the current view, from the current location).
Default Tail Settings
Tail refresh rate: you can set the refresh rate of tailing (in seconds). Note: when the tail is activated in the log viewer, XpoLog loads the new
records, if available, according to the specified refresh rate.
Tail number of records view: while tailing, XpoLog accumulates the selected number of records in the log viewer. Note: after the view
reaches the selected number of records, XpoLog restarts the accumulation on a new page.
Search Context Menu
A context menu may be used when right-clicking a log column value in the log viewer (under the option 'Search'). By using the placeholder
'[LOG_COL_DATA]', you can pass the right-clicked value to an external site,
for example an internal organization site that can identify information from the logs, the XpoLog Search engine, Google, etc.
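A sketch of how the placeholder substitution could work (the Google URL is only an illustrative external-site template; XpoLog's internal handling may differ):

```python
# Sketch: substitute a right-clicked column value into a context-menu
# URL template containing the [LOG_COL_DATA] placeholder.
from urllib.parse import quote_plus

def build_context_url(template, column_value):
    # URL-encode the value before substituting it into the template
    return template.replace("[LOG_COL_DATA]", quote_plus(column_value))

template = "https://www.google.com/search?q=[LOG_COL_DATA]"
print(build_context_url(template, "error code 500"))
# -> https://www.google.com/search?q=error+code+500
```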
Sort Folders and Logs Alphabetically
By default, Folders and Logs that are created in XpoLog are sorted by creation order. If you wish to apply an alphabetical sorting to the entire tree,
follow these steps:
1. Open a browser to XpoLog, login if required.
2. Go to Manager > Settings > About, click the Open XpoLog Center Support Portal link - a new browser/tab will be opened displaying the
support console.
3. Select Advanced Settings in the select box at the top of the screen.
4. Search for the property 'module.sortMembersAlphabetically' (set to false by default).
5. Right-click > Edit it and enter true as the custom value.
6. Save and restart XpoLog.
The Folders and Logs tree will be displayed sorted alphabetically.
Security
The Security menu is displayed in the XpoLog Manager menu only if security has been activated in the General Settings Security tab (see
Configuring General Security Settings).
Using the Security mechanism of XpoLog, Administrators can do the following:
Create groups, which are composed of users or groups that have the same permissions and tasks in the system, and manage them (see
Managing Groups).
Create policies that define permissions that can be assigned to the various groups or users for performing activities in the system, and
manage them (see Managing Policies).
Create Users and associate them with groups and assign to them a policy, and manage them (see Managing Users).
Change their password or display name (see Changing Your Password or Display Name).
If your organization uses an internal XpoLog realm to authenticate Users, Administrators can do all of the above. However, if your
organization uses an external XpoLog realm for assigning usernames and passwords to users and for
authentication, Administrators can only create Groups and Policies.
Changing Your Password or Display Name
In the Security > User General Settings page, you can change your password and display name (used whenever the user is displayed, as in
administration or in the welcome message).
Note: The default user name and password of the Administrator are: admin
To change your password or display name:
1. In the XpoLog Manager menu, click Security > User General Settings.
The User General Settings console opens.
2. In Old Password, type your current password.
3. In New Password, type your new password, and retype it in Confirm Password.
4. In Display Name, type the name to appear whenever your name is displayed in XpoLog.
5. Click Save.
Your new settings are saved throughout the system.
Note regarding password encryption and decryption
All encryption and decryption of passwords in XpoLog is based on JCE (Java Cryptography Extension). The
default encryption/decryption uses the DES algorithm with the SUN JCE provider, which uses ECB as the default mode
and PKCS5 padding (this is configurable in XpoLog).
Available encryption algorithms:
- DES: 56 bits
- Triple DES: 112 bits
- Blowfish: 56 bits
- HmacMD5: 64 bytes
- HmacSHA1: 64 bytes
Apply Permissions on a Folder or Log
A policy is assigned to each user/group of XpoLog to define the permissions of that user/group members in the
system. The policy includes permissions for viewing and editing logs and folders and applying different operations in
the system. In addition, for a specific log/folder/application, XpoLog enables Administrators to edit the permissions
granted to a user/group, so that the log/folder/application will be exposed to them or not.
It lets the Administrator choose one of the following to define the permissions of users on the folder/log:
Use parent permissions – the folder/log inherits permissions from its parent folder.
Use application permissions – the folder/log has the permissions defined in the application which the folder/log is tagged to.
Use specified permissions – the folder/log has the permissions that you assign on this page.
To edit permissions of a folder or log:
1. In the XpoLog menu, click Administration > Folders and Logs, select a folder or log, and then click the Per
missions button.
Alternately, in the Log Viewer left pane, under the Folders and Logs menu, right-click a log, and click Edit Permissions.
The Permissions console opens.
2. Select one of the following options:
Use parent permissions
Use specified permissions
3. Under Edit Group Members, in the Available Members list, select a member that you want to be able to view and edit
the folder/log, and click Add.
The member is moved to the Selected Members list.
4. Repeat step 3 for each user/group that is to be permitted to view and edit the folder/log.
Note: You can remove a user/group from the Selected Members list by selecting it and clicking Remove. It then returns to the Available
Members list.
5. Under View Group Members, in the Available Members list, select a member that you want to be able to view only the folder/log, and
click Add.
The group is moved to the Selected Members list.
6. Repeat step 5 for each user/group that is to be permitted to view only the folder/log.
Note: You can remove a group from the Selected Members list by selecting it and clicking Remove. It then returns to the Available
Members list.
7. Click Apply.
The permissions are applied on the selected folder/log.
LDAP/AD Authentication
In case the authentication of users is done against an LDAP or Active Directory (see LDAP/AD Authentication) it is
possible to assign permissions on groups which are defined in the organizational LDAP/AD.
Add groups in XpoLog (XpoLog > Security > Groups). For each new group, set its name to the exact name as it appears in
the LDAP/AD server; there is no need to change anything inside the group, as matching is done automatically. Assign the relevant policy from
the policies list to the created group. If no policy is selected, the default policy is applied to the
authenticated user associated with this group.
Specify on each Folder/Log/Application which group is allowed to view it, as described above (make sure the All
group is removed from the top Folders and Logs and other Folders/Logs).
When a user signs in to XpoLog (authenticating against the LDAP/AD), XpoLog matches the groups retrieved
from the LDAP/AD against the groups defined internally. If such a match exists, the
user is enforced with the group's policy and permissions (based on the group's policy) automatically.
XpoLog audits (in the audit log) the list of groups that the authenticated user is associated with in the LDAP/AD server. After a
user signs in, you can check the list of groups in the audit log in order to create matching groups internally in
XpoLog.
Managing Users
Note: Users can be managed in XpoLog, only if an internal XpoLog realm is being used to define and authenticate users.
From the Security > Users console, Administrators can:
View a listing of all users in XpoLog, and filter the list to display users from a specific group.
Create a user.
Modify or view a user's settings and lock a user.
Unlock a user.
Remove a user from XpoLog.
An XpoLog user must be assigned to at least one group. This way, permissions can be assigned to the user at a group level, as opposed to at a
user level.
Viewing and Filtering XpoLog Users
In the Security > Users console, Administrators can view a listing of all users defined in XpoLog, or in a specific group.
The Users list can be filtered to display users from a specific group. This is especially useful when there are many users defined in the system,
and you want to quickly find the user so that you can view or modify its settings, or delete it.
Viewing XpoLog Users
To view users defined in XpoLog:
In XpoLog Manager, click Security > Users.
The Users console opens, listing all the users that are defined in XpoLog.
Filtering Users Display
To filter the users display:
In the Users console, select from the Groups drop-down list, a group.
The users from the selected group are displayed under Group Users.
Adding a New User
In the Users Settings console, Administrators can add new users to XpoLog.
For each new user, the Administrator can:
Assign a username and password.
Associate the user with a user group that is defined in the system. Associating a user with a group enables giving permissions to a
user at a group level, instead of at a user level.
Select the user groups that are to be under the new user's administration.
Set the policy used by the user – either the policy of the groups that the user is associated with or another policy.
Note: At least one associated group must be selected for a new User.
To add a new User to XpoLog:
1. In the XpoLog Manager menu, click Security > Users.
The Users console opens.
2. In the Users console, click Add User.
The Users Settings console opens.
3. In Username, type a username for identifying the new user.
4. In Password, type a password for the new user, and in Confirm Password, retype the password.
5. In Display Name, type the name to be displayed across the system for this user.
6. Under Associated Groups List, in Available Groups, select a group that you want the user to be
associated with, and click Add.
The group is moved to the Associated Groups list.
7. Repeat step 6 for each group that you want the user to be associated with.
Note: You can remove a group from the Associated Groups list by selecting it and clicking Remove. It then
returns to the Available Groups list.
8. Under Administered Groups List, in Available Groups, select the group that you want the user to
administer, and click Add.
The group is moved to the Administered Groups list.
9. Repeat step 8 for each group that you want the user to administer.
Note: You can remove a group from the Administered Groups list by selecting it and clicking Remove. It
then returns to the Available Groups list.
10. Under Policy Settings, select either of the following options:
Use the Policy of the selected groups
Use the following Policy; in the adjacent drop-down list, select one of the policies that is defined in the
system.
11. Click Save.
The Operation Done page appears with the message: "Users operation ended successfully".
12. Click ok.
The Users console opens with the new User on the list.
Modifying/Viewing XpoLog User Settings
From the Security > Users console, Administrators can view or modify the settings of any XpoLog User, with the exception of their Username and
Password. An administrator can also lock a user.
Viewing User Settings
Administrators can view the settings of any user.
To view user settings:
1. In the XpoLog Manager menu, click Security > Users.
The Users console opens.
2. In the Users console, optionally filter the Users display, by selecting in Groups, a group.
The users of the selected group are displayed on the console.
3. Click the Edit link adjacent to the user.
The User Settings console opens, displaying the settings of the selected user.
4. Click Cancel.
The Users console opens, displaying all the users in the system.
Modifying User Settings
Administrators can modify the following user settings:
Display Name of the user
Groups to which the user is associated
Groups that this user administers
Policy used by the user
To modify a user''s settings:
1. In the XpoLog Manager menu, click Security > Users.
The Users console opens.
2. In the Users console, optionally filter the Users display, by selecting in Groups, a group.
The users of the selected group are displayed on the console.
3. Under Group Users, click Edit on the row of the User whose settings you want to modify.
The User Settings console opens.
4. Modify the Display Name, and the selected User Groups, Administered Groups, and Policy, as required. See Adding a New User for
details.
5. To lock the user, select the Lock Out checkbox.
6. Click Save.
The Operation Done page appears, with the message: "User operation ended successfully."
7. Click ok.
The Users console opens, displaying all the users in the system.
If the user has been locked, the Unlock link appears on the user row.
Removing a User from XpoLog
From the Users console, Administrators can delete from the system an XpoLog user.
Note: You cannot remove an Administrator user.
To remove a user from XpoLog:
1. In the XpoLog Manager menu, click Security > Users.
The Users console opens.
2. Under Group Users, click delete on the row of the user that you want to delete.
The Operation Verification page appears, asking you to confirm the user removal.
3. Click ok.
The Operation Done page appears, with the message: "User removal ended successfully."
4. Click ok.
The Users console appears, with the deleted user removed from the list.
Unlocking a User
Users who have unsuccessfully logged in to XpoLog five times are automatically locked. Also, administrators can choose to lock users (see
Modifying/Viewing XpoLog User Settings).
In the Security > Users console, locked users have an Unlock link on their row. Administrators can unlock users who have been locked.
To unlock a user:
1. In the XpoLog Manager menu, click Security > Users.
The Users console opens.
2. Under Group Users, click Unlock on the row of the User that you want to unlock.
The Operation Done page appears, with the message: "User unlock ended successfully."
3. Click ok.
The Users console opens, displaying all the users in the system. The Unlock link no longer appears on the user row.
Managing Groups
From the Security > Groups console, Administrators can:
View a listing of all groups in XpoLog, and filter the list to display a specific group.
Create a group.
Modify or view a group's settings.
Remove a group from XpoLog.
Groups can be composed of users and/or other groups defined in the system.
An XpoLog user must be associated with (i.e. belong to) at least one group. This way, permissions can be assigned to users at a
group level, as opposed to at a user level.
Note: An XpoLog system has at a minimum an Administrators group and an All group.
Viewing and Filtering XpoLog Groups
In the Security > Groups console, Administrators can view a listing of all groups defined in XpoLog.
The Groups list can be filtered to display a specific group. This is especially useful when there are many groups defined in the system, and you
want to quickly find the group so that you can view or modify its settings, or delete it.
Viewing XpoLog Groups
To view groups defined in XpoLog:
In XpoLog Manager, click Security > Groups.
The Groups console opens, listing all the groups that are defined in XpoLog.
Filtering Groups Display
To filter the groups display:
In the Groups console, select from the Groups drop-down list, a group.
The selected group is displayed under Group Members.
Creating a New Group
From the Security > Groups console, Administrators can create new groups in XpoLog.
To create an XpoLog group:
1. In the Groups console, click Add Group.
The Group Settings console opens.
2. In Group name, type a name to identify the new group.
3. In Display Name, type the name that is to be displayed across the system.
4. In Description, type a description of the new group (optional).
5. Under Groups List, in Available Groups, select a group to associate with this group, and click Add.
The selected group is moved to the Associated Groups list.
6. Repeat step 5 for all groups to associate with the new group.
Note: You can disassociate a group from the new group, by selecting it in the Associated Groups list, and clicking Remove to return it
to the Available Groups list.
7. Under Administered Groups List, in Available Groups, select the group that this group is to administer, and click Add.
The selected group is moved to the Administered Groups list.
8. Repeat step 7 for all groups that this new group is to administer.
Note: You can remove an administered group from the new group, by selecting it in the Administered Groups list, and clicking Remove
to return it to the Available Groups list.
9. Under Group Members, in Available Members, select a group or user that is to be a member of the new group, and click Add.
The selected group or user is moved to the Selected Members list.
10. Repeat step 9 for all members to add to this new Group.
Note: You can remove a member from the group, by selecting it in the Selected Members list, and clicking Remove to return it to the Av
ailable Members list.
11. Under Policy Settings, select one of the following:
Use the policy of the selected groups
Use the following Policy, and in the adjacent drop-down box, select the Policy to use for the new group.
12. Click Save.
The Operation Done page appears, with the message: "Group operation ended successfully".
13. Click ok.
The Groups console opens, with the newly added group under the Groups members list.
Modifying/Viewing Group Settings
From the Security > Groups console, Administrators can view or modify a group's settings.
The settings of all groups, with the exception of the All and Administrators groups, can be modified. These groups
can only be viewed.
Viewing Group Settings
From the Security > Groups console, Administrators can view the settings of any group.
To view group settings:
1. In the XpoLog Manager menu, click Security > Groups.
The Groups console opens.
2. In the Groups console, optionally filter the Groups display, by selecting in Groups, a group to view.
The selected group is displayed on the console.
3. To view the settings of the All or Administrators group, click the View link adjacent to the group.
To view the settings of any other group, click the edit link adjacent to the group.
The Group Settings console opens, displaying the settings of the selected group.
4. Click Cancel.
The Groups console opens, displaying all the groups in the system.
Modifying Group Settings
From the Groups console, Administrators can view or modify the settings of an XpoLog group, with the exception of the Group Name.
The modifiable settings include:
Display Name of the group
Description of the group
Groups to which the group is associated
Groups that this group administers
Group members
Group administrators
Policy used by the group
Note: You cannot modify the settings of the All and Administrators groups.
To modify a Group's settings:
1. In the XpoLog Manager menu, click Security > Groups.
The Groups console opens.
2. In the Groups console, optionally filter the Groups display, by selecting in Groups, a group.
The selected group is displayed on the console.
3. Under Groups Members, click edit on the row of the group whose settings you want to modify.
The Group Settings console opens.
4. Modify the Display Name, Description, and the Associated Groups, Administered Groups, Group Members, Group
Administrators, and Policy Settings, as required. See Creating a New Group for details.
5. Click Save.
The Operation Done page appears, with the message: "Group operation ended successfully."
6. Click ok.
The Groups console opens, displaying all the groups in the system.
Removing a Group from XpoLog
From the Groups console, Administrators can delete from the system an XpoLog group that is no longer relevant, provided that it is not being
used.
Note: You cannot remove the Administrators or All group.
To remove a Group from XpoLog:
1. In the XpoLog Manager menu, click Security > Groups.
The Groups console opens.
2. In the Groups console, optionally filter the Groups display, by selecting in Groups, a group.
The selected group is displayed on the console.
3. Under Groups Members, click delete on the row of the group that you want to delete.
The Operation Verification page appears, asking you to confirm the group removal.
4. Click ok.
The Operation Done page appears, with the message: "Groups removal ended successfully."
5. Click ok.
The Groups console opens, with all groups, but not the deleted group, displayed on the list.
Managing Policies
Each user and group defined in XpoLog is assigned a policy that defines the permissions that the user/group has in XpoLog, i.e. a set of activities
(functionalities) that users of the policy can perform in the system.
A policy can be set to grant permission to perform all actions in the system, or specific actions. A policy can also give View Only permission, and
not permit performing any actions in XpoLog.
From the Security > Policies console, Administrators can:
View a listing of all policies in XpoLog.
Create a policy.
Modify or view a policy's settings.
Remove a policy from XpoLog.
Although a policy defines the permissions that a user has in the system, XpoLog enables the Administrator to customize permissions for individual
folders and logs in XpoLog (see Apply Permissions on a Folder or Log).
Viewing Policies
In the Security > Policies console, Administrators can view a listing of all policies defined in XpoLog.
To view policies defined in XpoLog:
In XpoLog Manager, click Security > Policies.
The Policies console opens, listing all the policies that are defined in XpoLog.
Creating a New Policy
From the Security > Policies console, Administrators can create new policies in XpoLog.
A policy defines the actions that a user or group that is assigned this policy, can perform. All actions of a specific component (categories) (such as
Analytics, Monitors, Search, and more), or specific actions can be permitted under a specific policy. Selecting a parent component automatically
permits all the components underneath it.
Defining a policy without selecting any permissions defines a View Only permission for folders and logs, and no
permission for all other components in XpoLog.
To create a policy:
1. In the Policies console, click Add Policy.
The Policy Settings console opens.
2. In Policy name, type a name to identify the new policy.
3. In Display Name, type the name that is to be displayed across the system.
4. In Description, type a description of the new policy (optional).
5. Under Permissions List, select the checkboxes of components or actions that are permitted to users of this policy
6. Click Save.
The Operation Done page appears, with the message: "Policies operation ended successfully".
7. Click ok.
The Policies console opens, with the newly added policy under the Policies list.
Modifying/Viewing Policy Settings
From the Security > Policies console, Administrators can view or modify a policy's settings.
The settings of all policies, with the exception of the Administrations policy, can be modified. This policy can only be viewed.
Viewing Policy Settings
From the Security > Policies console, Administrators can view the settings of any policy.
To view policy settings:
1. In the XpoLog Manager menu, click Security > Policies.
The Policies console opens.
2. To view the settings of the Administrations policy, click the View link adjacent to the policy.
To view the settings of any other policy, click the Edit link adjacent to the policy.
The Policy Settings console opens, displaying the settings of the selected policy.
3. Click Cancel.
The Policies console opens, displaying all the policies in the system.
Modifying Policy Settings
From the Policies console, Administrators can view or modify the settings of an XpoLog policy, with the exception of the Policy name.
The modifiable settings include:
Display Name of the policy
Description of the policy
Actions with permissions
Note: You cannot modify the settings of the Administrations policy.
To modify a Policy's settings:
1. In the XpoLog Manager menu, click Security > Policies.
The Policies console opens.
2. Under Policies, click Edit on the row of the policy whose settings you want to modify.
The Policy Settings console opens.
3. Modify the Display Name, Description, and the selected actions in the Permissions list. See Creating a New Policy for details.
4. Click Save.
The Operation Done page appears, with the message: "Policies operation ended successfully."
5. Click ok.
The Policies console opens, displaying all the policies in the system.
Removing a Policy from XpoLog
From the Policies console, Administrators can delete from the system an XpoLog policy that is no longer being used.
Note: You cannot remove the Administrations policy.
To remove a Policy from XpoLog:
1. In the XpoLog Manager menu, click Security > Policies.
The Policies console opens.
2. Under Policies, click delete on the row of the policy that you want to delete.
The Operation Verification page appears, asking you to confirm the policy removal.
3. Click ok.
The Operation Done page appears, with the message: "Policy removal ended successfully."
4. Click ok.
The Policies console opens, with all policies, but not the deleted policy, displayed on the list.
Importing/Exporting a Log
From the Tools menu in XpoLog Manager, you can export a log from XpoLog or import a log into XpoLog.
Exporting a Log
From the Tools > Export Log menu item, you can export any log that is displayed in the XpoLog Log Viewer, together with its configuration or only
its data. For a complete explanation, see Exporting a Log in the User Guide.
Importing a Log
From the Tools > Import Log menu item in XpoLog Manager, you can import into XpoLog any log that has previously been exported from XpoLog
(your system or another system) together with its configuration.
Note: It is only possible to import logs that were created with XpoLog version 2.2 and later.
To import a log into XpoLog:
1. In the XpoLog Manager menu, select Tools > Import Log.
The Import Log page is displayed.
2. In Path, type or browse to the XpoLog archive file location of the zipped log file to import.
3. Click the Next button.
4. In the Parent Folder Selection page that appears, in Parent Folder, click the select link to select a parent folder for this log from the list
of available folders, or create a new folder under which to place the log.
5. Click the Next button.
The log is imported into the selected location, and a message notifies you of such.
6. Click OK.
The Log Viewer opens, displaying the imported log records.
XpoLog API / SDK
XpoLog API provides a URL-based querying of XpoLog to retrieve information from XpoLog.
XpoLog SDK provides a set of commands that enables remote configuration of different XpoLog properties without accessing the GUI.
XpoLog API
General
XpoLog exposes a URL based API to the users. The API exposes a set of HTTP/S calls that can be used to retrieve information from XpoLog:
URL that returns events from XpoLog Search in XML/CSV format
URL that returns a Dashboards latest result in PDF format
URL to open Search console on an executed search query
URL to open a specific log in the Log Viewer
URL to enter XpoLog under specific Folder(s) context
URL to enter XpoLog under specific AppTag(s) context
URL that returns Collected Data Information
Security
If security is activated in XpoLog (login is required), then using the URL-based API requires passing the user's credentials in order to log
in to the system prior to executing the API command.
The username and password must be passed in the URL to XpoLog in order to get the command executed.
Add to each link at the end:
&autoLogin=true&username=[USER_NAME]&password=[PASSWORD]
[USER_NAME] = the user name which the API will use to login
[PASSWORD] = the password of the user name
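The credential suffix above can be appended programmatically. A minimal Python sketch, assuming the host, port, and the admin/admin credentials shown in the later examples are placeholders for your own environment:

```python
from urllib.parse import quote

def with_auto_login(api_url: str, username: str, password: str) -> str:
    """Append the XpoLog auto-login parameters to an API URL."""
    # autoLogin=true instructs XpoLog to authenticate before running the command.
    return (f"{api_url}&autoLogin=true"
            f"&username={quote(username)}&password={quote(password)}")

url = with_auto_login(
    "http://localhost:30303/logeye/view/api/dashboardAPI.jsp?action=export",
    "admin", "admin")
print(url)
```

Quoting the username and password guards against characters that are not URL-safe; plain values such as admin pass through unchanged.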
URL that returns events from XpoLog Search in XML/CSV format
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/view/api/widgetAPI.jsp?widgetId=searchAPI&searchQuery=[see item 2]&fixedInterval=[see item 3]&startTimeFullStr=[see item 4]&endTimeFullStr=[see item 4]&maxNumOfRecords=[see item 5]&resultFormat=[see item 6]&paginate=[see item 7]&maxRecordsPerPage=[see item 8]
2. searchQuery=a query as used in XpoSearch console
3. fixedInterval=optional values are: last15Minutes, last30Minutes, last60Minutes, last3Hours, last12Hours, last24Hours, last7Days,
last14Days, last1Months, last6Months, currentDay, previousDay, currentWeek, previousWeek, allData
IMPORTANT: when using fixedInterval only values from the above list can be provided as is. Optional; if fixedInterval is used then
startTimeFullStr and endTimeFullStr should not be used.
4. startTimeFullStr=the start time full string, formatted according to the XpoLog system format; default format is MM/dd/yyyy HH:mm:ss
endTimeFullStr=the end time full string, formatted according to the XpoLog system format; default format is MM/dd/yyyy HH:mm:ss
Optional; used only if fixedInterval is missing; if used, both values are mandatory.
5. maxNumOfRecords=the maximal number of records to return in the result; if missing, the default value is
taken from the widgets.searchAPI.maxNumOfRecords XpoLog system property (default is 100 but may be
modified)
6. resultFormat=the format in which the result will be returned. Optional values xml/csv (case sensitive)
7. paginate=activate pagination; optional values true/false
8. maxRecordsPerPage=the maximal number of results per page; If missing, the default value is 100 records per page.
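The parameters above can be assembled into a request URL with Python's standard library. This is a sketch; the host, port, and query values are illustrative, not fixed parts of the API:

```python
from urllib.parse import urlencode

def build_search_url(host: str, port: int, **params) -> str:
    """Build the widgetAPI search URL from the documented parameters."""
    base = f"http://{host}:{port}/logeye/view/api/widgetAPI.jsp"
    # widgetId=searchAPI selects the search API; remaining parameters are optional.
    query = {"widgetId": "searchAPI", **params}
    return f"{base}?{urlencode(query)}"

url = build_search_url(
    "localhost", 30303,
    searchQuery="error or fail* in log.log4j*",
    fixedInterval="last7Days",   # mutually exclusive with startTimeFullStr/endTimeFullStr
    maxNumOfRecords=1000,
    resultFormat="csv")
print(url)
```

urlencode percent-escapes spaces and special characters in the search query, which is safer than concatenating the raw query string into the URL.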
The following is the XML structure of the API execution response:
<APIResult>
    <Status>...</Status>
    <Url>[URL_TO_RESULT_FILE]</Url>
    <Message>...</Message>
    <Data>...</Data>
</APIResult>
Tag Path    Description
APIResult   General document root tag
Status      The state of the API execution. Optional values: OK/Fail
Url         Exists only when the status is OK. The URL to a file containing the API execution result.
Message     Exists only when the status is Fail.
Data        Contains the API execution result.
Note: When the result format is CSV, the content of the Data tag will be wrapped with CDATA.
Note: If the execution result is larger than the system-configured limit, the Data tag will not contain the execution result. Instead,
the content of the Url tag should be used to access the execution result file.
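Assuming a response shaped as described above, the status and result location can be read with Python's standard XML parser. The sample document and its URL value below are illustrative only:

```python
import xml.etree.ElementTree as ET

# Illustrative response; a real one comes back from the widgetAPI call.
sample = """<APIResult>
  <Status>OK</Status>
  <Url>http://localhost:30303/logeye/temp/result.csv</Url>
</APIResult>"""

root = ET.fromstring(sample)
status = root.findtext("Status")
# On success the Url tag points at the file holding the full result;
# on failure a Message tag describes the error instead.
location = root.findtext("Url") if status == "OK" else root.findtext("Message")
print(status, location)
```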
Examples
(You need to change machine/port/logs names, ids / folder names, ids / application names, ids if used to be relevant to your
environment)
- Returns records matching the search query "error or fail*" in logs whose names start with "log4j", in the specified time frame. The result is
limited to a maximum of 1000 log records in CSV format:
http://localhost:30303/logeye/view/api/widgetAPI.jsp?widgetId=searchAPI&searchQuery=error or fail* in log.log4j*&startTimeFullStr=01/01/2014
00:00:00&endTimeFullStr=02/02/2014 00:00:00&maxNumOfRecords=1000&resultFormat=csv
- Returns records matching the search query "error or exception" in all logs in the last 7 days' time frame. The result is limited to a maximum of 1000 log
records in XML format:
http://localhost:30303/logeye/view/api/widgetAPI.jsp?widgetId=searchAPI&searchQuery=error or
exception&fixedInterval=last7Days&maxNumOfRecords=1000&resultFormat=xml
- Returns the complex search query '* in app.Windows Event Logs | count | group by event' result in the last 7 days' time frame. The result is limited
to a maximum of 1000 entries in CSV format. In this example the URL also contains a username and password (admin/admin) that will perform a login
to XpoLog in order to be able to execute the search query:
http://localhost:30303/logeye/view/api/widgetAPI.jsp?widgetId=searchAPI&searchQuery=*%20in%20app.Windows%20Event%20Logs%20|%20c
ount%20|%20group%20by%20event&fixedInterval=last7Days&maxNumOfRecords=1000&resultFormat=csv&autoLogin=true&username=admin
&password=admin
URL that returns a Dashboards latest result in PDF format
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/view/api/dashboardAPI.jsp?action=export&appName=[see item 2]&viewName=[see item 3]
2. appName=a name of an existing App in XpoLog (case insensitive).
3. viewName=a name of an existing Dashboard in XpoLog (case insensitive).
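The export URL above can be built and, against a running server, saved as a PDF. A sketch assuming the localhost:30303 host and the App-1/Dashboard-1 names from the example below; no request is sent here, only the URL is constructed:

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # used only when a live XpoLog server is available

host, port = "localhost", 30303  # adjust to your environment
params = urlencode({"action": "export",
                    "appName": "App-1",          # case-insensitive App name
                    "viewName": "Dashboard-1"})  # case-insensitive Dashboard name
url = f"http://{host}:{port}/logeye/view/api/dashboardAPI.jsp?{params}"

# Against a running XpoLog server, the latest dashboard result can be saved as a PDF:
# with urlopen(url) as resp, open("Dashboard-1.pdf", "wb") as out:
#     out.write(resp.read())
print(url)
```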
Examples
(You need to change machine/port/logs names, ids / folder names, ids / application names, ids if used to be relevant to your
environment)
- Returns the Dashboard “Dashboard-1” in PDF format
http://localhost:30303/logeye/view/api/dashboardAPI.jsp?action=export&appName=App-1&viewName=Dashboard-1
URL to open Search console on an executed search query
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/search/view/loadSearchModel.jsp?searchQuery=[see item 2]&fixedInterval=[see item 3]&startTimeFullStr=[see item 4]&endTimeFullStr=[see item 4]
2. searchQuery=a query as used in XpoSearch console
3. fixedInterval=optional values are: last15Minutes, last30Minutes, last60Minutes, last3Hours, last12Hours, last24Hours, last7Days,
last14Days, last1Months, last6Months, currentDay, previousDay, currentWeek, previousWeek, allData
IMPORTANT: when using fixedInterval only values from the above list can be provided as is. Optional; if fixedInterval is used then
startTimeFullStr and endTimeFullStr should not be used.
4. startTimeFullStr=the start time full string, formatted according to the XpoLog system format; default format is MM/dd/yyyy HH:mm:ss
endTimeFullStr=the end time full string, formatted according to the XpoLog system format; default format is
MM/dd/yyyy HH:mm:ss
Optional; used only if fixedInterval is missing; if used, both values are mandatory.
Examples
(You need to change machine/port/logs names, ids / folder names, ids / application names, ids if used to be relevant to your
environment)
- Presents Search console after execution of the search query “error” in the specified time frame:
http://localhost:30303/logeye/search/view/loadSearchModel.jsp?searchQuery=error&startTimeFullStr=01/01/2014
00:00:00&endTimeFullStr=02/02/2014 00:00:00
- Presents Search console after execution of the search query “error” in the specified time frame (last 7 days):
http://localhost:30303/logeye/search/view/loadSearchModel.jsp?searchQuery=error&fixedInterval=last7Days
URL to open a specific log in the Log Viewer
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/view/api/logViewAPI.jsp?logId=[see item 2]&searchQuery=[see item 3]&logFilterName=[see item 4]&filterName=[see item 5]&filterId=[see item 6]&startTimeFullStr=[see item 7]&endTimeFullStr=[see item 8]&opdirect=[see item 9]&expandLogViewer=[see item 10]
2. logId=one or more log ids, separated by comma (mandatory parameter)
3. searchQuery=a term to filter the log by (similar syntax to XpoSearch)
4. logFilterName=the name to be given to the filter defined in item 3; if missing, the name of the filter will be
‘external filter’
5. filterName=the name of an existing filter that should be activated on the log; used only if the searchQuery
parameter is missing
6. filterId=the id of an existing filter that should be activated on the log; used only if the searchQuery parameter is
missing or if the filterName parameter is missing or does not match an existing filter’s name
7. startTimeFullStr=the start time full string, formatted according to the XpoLog system format; default format is
MM/dd/yyyy HH:mm:ss
8. endTimeFullStr=the end time full string, formatted according to the XpoLog system format; default format is
MM/dd/yyyy HH:mm:ss
9. opdirect=start will display the first records of the result; last will display the last records. Default is start.
10. expandLogViewer=true will present the log in the viewer without the folder and logs and the menu presented. Default is false.
* Contact XpoLog Support to find out how to retrieve applications/folders/logs/filters IDs.
Examples
(Change the machine, port, and any log/folder/application names and IDs used to match your environment)
- Enter the viewer on the log Application_1235558747694 filtered on records that contain ‘error or information’ (the name of the temporary filter is
set to be testQuery):
http://localhost:30303/logeye/view/api/logViewAPI.jsp?logId=Application_1235558747694&searchQuery=error or
information&logFilterName=testQuery
- Enter the viewer on the log Application_1235558747694 with an existing filter named ‘error’ activated (the last matching events will be
presented):
http://localhost:30303/logeye/view/api/logViewAPI.jsp?logId=Application_1235558747694&filterName=error&opdirect=last
- Enter the viewer on a merge of the logs Application_1235558747694 and Security_1235558747851 filtered on
records in the specified time frame (the name of the filter is set to be testTimeFilter):
http://localhost:30303/logeye/view/api/logViewAPI.jsp?logId=Application_1235558747694,Security_1235558747851&logFilterName=testTimeFilt
er&startTimeFullStr=01/01/2014 00:00:00&endTimeFullStr=02/01/2014 00:00:00
- Enter the viewer on the log Application_1235558747694 with the filter ‘error’ activated but also filtered on records in the specified time frame (the
name of the temporary filter is set to be testFilterAndTime and the last matching events will be presented). Log viewer will be expanded
(the folders and logs tree and the top menus will not be presented):
http://localhost:30303/logeye/view/api/logViewAPI.jsp?logId=Application_1235558747694&logFilterName=testFilterAndTime&filterName=error&o
pdirect=last&startTimeFullStr=05/01/2012 00:00:00&endTimeFullStr=05/03/2012 00:00:00&expandLogViewer=true
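The same encoding caveat applies to the viewer URLs; a small POSIX shell sketch (host, port, and log id are placeholders taken from the examples above) that assembles a viewer URL with an encoded search query:

```shell
#!/bin/sh
# Sketch: build a Log Viewer URL programmatically. Spaces in searchQuery must
# be percent-encoded when the URL is not entered through a browser.
viewer_url() {
  # $1=logId  $2=searchQuery  $3=logFilterName  $4=opdirect
  base="http://localhost:30303/logeye/view/api/logViewAPI.jsp"
  q=$(printf '%s' "$2" | sed 's/ /%20/g')
  printf '%s?logId=%s&searchQuery=%s&logFilterName=%s&opdirect=%s\n' \
    "$base" "$1" "$q" "$3" "$4"
}

URL=$(viewer_url Application_1235558747694 "error or information" testQuery last)
echo "$URL"
# The URL can then be fetched headlessly, e.g.: curl -s "$URL" > viewer.html
```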
URL to enter XpoLog under specific Folder(s) context
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/componentAction.jsp?selectedCompId=XpoLog&forward=root.jsp?folderId=[see item
2]&mainPage=view/mainView.jsp
2. folderId=a comma separated list of folder ids (mandatory parameter)
* Contact XpoLog Support to find out how to retrieve applications/folders/logs/filters IDs.
Examples
(Change the machine, port, and any log/folder/application names and IDs used to match your environment)
- Enter into XpoLog directly to view the folder with ID 1271697039748 under the Folders and Logs tree:
http://localhost:30303/logeye/componentAction.jsp?selectedCompId=XpoLog&forward=root.jsp?folderId=1271697039748&mainPage=view/mainView.jsp
- Enter into XpoLog directly to view the folders with IDs 1271697039748 and 1227520005721 under the Folders and
Logs tree:
http://localhost:30303/logeye/componentAction.jsp?selectedCompId=XpoLog&forward=root.jsp?folderId=1271697039748,1227520005721&main
Page=view/mainView.jsp
URL to enter XpoLog under specific AppTag(s) context
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/componentAction.jsp?selectedCompId=XpoLog&forward=root.jsp?applicationId=[see
item 2]&mainPage=view/mainView.jsp
2. applicationId=a comma separated list of AppTags ids (if an empty application id is provided, it will reset any AppTag context).
* Contact XpoLog Support to find out how to retrieve AppTags/Folders/Logs/filters IDs.
Examples
(Change the machine, port, and any log/folder/application names and IDs used to match your environment)
- Enter into XpoLog under the AppTag app1 context:
http://localhost:30303/logeye/componentAction.jsp?selectedCompId=XpoLog&forward=root.jsp?applicationId=app1&mainPage=view/mainView.j
sp
- Enter into XpoLog under the AppTags app1,app2 context:
http://localhost:30303/logeye/componentAction.jsp?selectedCompId=XpoLog&forward=root.jsp?applicationId=app1,app2&mainPage=view/main
View.jsp
URL that returns Collected Data Information
1. http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye/message/messageJsonApi.jsp?api=collectedDataInfo&type=[see item 2]&timeFram
e=[see item 3]&detailsLevel=[see item 4]&maxNumberOfResults=[see item 5]
2. type = fixed string values: AppTags / Folders (default = Folders)
3. timeFrame = fixed string values: last15Minutes, last30Minutes, last60Minutes, last3Hours, last12Hours, last24Hours, last7Days,
last14Days, last1Months, last6Months, currentDay, previousDay, currentWeek, previousWeek, allData (default = last24Hours)
4. detailsLevel = fixed string values: Basic, Detailed
a. Basic – returns a JSON specifying, per AppTag/Folder (based on the specified type): the AppTag/Folder name, the number of
defined logs, and the number of collected logs.
Examples:
http://localhost:30303/logeye/message/messageJsonApi.jsp?api=collectedDataInfo&type=Folders&timeFrame=last24Hours&de
tailsLevel=Basic&autoLogin=true&username=admin&password=admin
Result JSON:
{"data":{"collectionData":[{"totalLogs":12,"Folders":"XpoLog System Logs","collectedLogs":5},{"totalLogs":1,"Folders":"Example
Applications,WebLogic 10.0,xplg","collectedLogs":0},{"totalLogs":7,"Folders":"Example
Logs","collectedLogs":0},{"totalLogs":5,"Folders":"Linux
OS","collectedLogs":4},{"totalLogs":9,"Folders":"Demo,Tomcat,TX_EXAMPLE","collectedLogs":0},{"totalLogs":5,"Folders":"XpoL
og EC2,jet.xpolog.com,Linux
OS","collectedLogs":4},{"totalLogs":4,"Folders":"ID,Tomcat","collectedLogs":3},{"totalLogs":9,"Folders":"WebSphere,Profiles,Ser
ver","collectedLogs":0},{"totalLogs":1,"Folders":"Example Applications,WebSphere
6.1.0.0","collectedLogs":0},{"totalLogs":1,"Folders":"CloudXpoLog","collectedLogs":0}]}}
http://localhost:30303/logeye/message/messageJsonApi.jsp?api=collectedDataInfo&type=AppTags&timeFrame=last24Hours&
detailsLevel=Basic&autoLogin=true&username=admin&password=admin
Result JSON:
{"data":{"collectionData":[{"totalLogs":1,"collectedLogs":0,"AppTags":"Tomcat
5.0.28"},{"totalLogs":22,"collectedLogs":0,"AppTags":"PrudentialA"},{"totalLogs":1,"collectedLo
gs":0,"AppTags":"XplgWiki"},
{"totalLogs":4,"collectedLogs":3,"AppTags":"Tomcat"},{"totalLogs":1,"collectedLogs":0,"AppTags"
:"LogLooud"},{"totalLogs":11,"collectedLogs":0,"AppTags":"DASTLab"},{"totalLogs":10,"collectedL
ogs":0,"AppTags":"Example AppTag"},
{"totalLogs":9,"collectedLogs":0,"AppTags":"XpoLog"},{"totalLogs":2,"collectedLogs":1,"AppTags"
:"Website"},{"totalLogs":1,"collectedLogs":0,"AppTags":"LabA_LoadTest"},
{"totalLogs":1,"collectedLogs":0,"AppTags":"Audit"},{"totalLogs":24,"collectedLogs":0,"AppTags"
:"AppFinTech_LoadLabA"},{"totalLogs":19,"collectedLogs":8,"AppTags":"Linux"},{"totalLogs":5,"co
llectedLogs":4,"AppTags":"Linux OS"},
{"totalLogs":7,"collectedLogs":0,"AppTags":"Weblogic
10.0"},{"totalLogs":10,"collectedLogs":0,"AppTags":"VOLoadTesting"},{"totalLogs":2,"collectedLo
gs":0,"AppTags":"Log4J"},
{"totalLogs":1,"collectedLogs":0,"AppTags":"JET-XPLG"},{"totalLogs":9,"collectedLogs":0,"AppTag
s":"WebSphere 6.1"},{"totalLogs":21,"collectedLogs":11,"AppTags":"ID"}]}}
b. Detailed – returns a JSON specifying, per AppTag/Folder (based on the specified type): the number of defined logs, the number of
collected logs, and a list of uncollected logs with their sizes in bytes (comma-separated full path in the XpoLog Folders and Logs tree).
Examples:
http://localhost:30303/logeye/message/messageJsonApi.jsp?api=collectedDataInfo&type=Folders&timeFrame=last24Hours&de
tailsLevel=Detailed&maxNumberOfResults=10&autoLogin=true&username=admin&password=admin
Result JSON:
{"data":{"collectionData":[{"totalLogs":1,"Folders":"JS.Logloud","collectedLogs":0,"unCollectedLogsData":[{"path":"JS.Logloud,JS
.Logloud","dataSize":250}]},{"totalLogs":1,"Folders":"AWS ELB","collectedLogs":0,"unCollectedLogsData":[{"path":"AWS
ELB,elasticloadbalancing","dataSize":3161841}]},{"totalLogs":9,"Folders":"Demo,Tomcat,TX_EXAMPLE","collectedLogs":0,"unC
ollectedLogsData":[{"path":"Demo,Tomcat,TX_EXAMPLE,IMPACS_BookingInterface-IMPACS_LoanBooking_ReqRep","dataSiz
e":104619},{"path":"Demo,Tomcat,TX_EXAMPLE,ICV_Customer_Interface--ICV_Customer_Search_Response","dataSize":148
2923},{"path":"Demo,Tomcat,TX_EXAMPLE,IMPACS_BookingInterface-IMPACS_LoanBooking_Status","dataSize":70679},{"pat
h":"Demo,Tomcat,TX_EXAMPLE,NAIT_AFS_BookingInterface--AFS_LP_BookingInterface_ResponseMessage","dataSize":450
876},{"path":"Demo,Tomcat,TX_EXAMPLE,NAIT_AFS_BookingInterface--LP_AFS_BookingInterface_RequestMessage","dataSi
ze":374061},{"path":"Demo,Tomcat,TX_EXAMPLE,ICV_Customer_Interface--ICV_Get_Customer_Request","dataSize":900789}
,{"path":"Demo,Tomcat,TX_EXAMPLE,ICV_Customer_Interface--ICV_Get_Customer_Response","dataSize":865204},{"path":"D
emo,Tomcat,TX_EXAMPLE,NESS-LPNameRequesttoNESS","dataSize":4298295},{"path":"Demo,Tomcat,TX_EXAMPLE,NESS
-NESSNameResponsetoLP","dataSize":6681574}]},{"totalLogs":4,"Folders":"Example Applications,WebSphere
6.1.0.0,AppSrv01,SERVERWINNode01,server1","collectedLogs":0,"unCollectedLogsData":[{"path":"Example
Applications,WebSphere 6.1.0.0,AppSrv01,SERVERWINNode01,server1,http_error","dataSize":6201816},{"path":"Example
Applications,WebSphere 6.1.0.0,AppSrv01,SERVERWINNode01,server1,http_access","dataSize":1411665},{"path":"Example
Applications,WebSphere 6.1.0.0,AppSrv01,SERVERWINNode01,server1,SystemErr","dataSize":178297},{"path":"Example
Applications,WebSphere
6.1.0.0,AppSrv01,SERVERWINNode01,server1,SystemOut","dataSize":663867}]},{"totalLogs":2,"Folders":"Demo,MySQL","colle
ctedLogs":0,"unCollectedLogsData":[{"path":"Demo,MySQL,mysqld-instance-1","dataSize":184975},{"path":"Demo,MySQL,mysq
ld-instance-2","dataSize":8235}]},{"totalLogs":2,"Folders":"Example Applications,WebLogic
10.0,wl_server,examplesServer","collectedLogs":0,"unCollectedLogsData":[{"path":"Example Applications,WebLogic
10.0,wl_server,examplesServer,examplesServer","dataSize":1003895},{"path":"Example Applications,WebLogic
10.0,wl_server,examplesServer,access","dataSize":698}]},{"totalLogs":1,"Folders":"WebApp
Logs,LogLooud","collectedLogs":0,"unCollectedLogsData":[{"path":"WebApp
Logs,LogLooud,LogLooud","dataSize":1305}]},{"totalLogs":2,"Folders":"Demo,Linux,instance1","collectedLogs":0,"unCollectedLo
gsData":[{"path":"Demo,Linux,instance1,messages","dataSize":86974743},{"path":"Demo,Linux,instance1,Mail","dataSize":2382
1324}]}]}}
http://localhost:30303/logeye/message/messageJsonApi.jsp?api=collectedDataInfo&type=AppTags&timeFrame=last24Hours&
detailsLevel=Detailed&maxNumberOfResults=10&autoLogin=true&username=admin&password=admin
Result JSON:
{"data":{"collectionData":[{"totalLogs":1,"collectedLogs":0,"unCollectedLogsData":[{"path":"Example Applications,Tomcat
5.0.28,localhost_log","dataSize":103891}],"AppTags":"Tomcat 5.0.28"},
{"totalLogs":1,"collectedLogs":0,"unCollectedLogsData":[{"path":"JS.Logloud,JS.Logloud","dataSize":250}],"AppTags":"JS.Loglo
ud"},
{"totalLogs":9,"collectedLogs":0,"unCollectedLogsData":[{"path":"Demo,Tomcat,TX_EXAMPLE,IMPACS_BookingInterface-IMPA
CS_LoanBooking_ReqRep","dataSize":104619},
{"path":"Demo,Tomcat,TX_EXAMPLE,ICV_Customer_Interface--ICV_Customer_Search_Response","dataSize":1482923},{"path
":"Demo,Tomcat,TX_EXAMPLE,IMPACS_BookingInterface-IMPACS_LoanBooking_Status","dataSize":70679},
{"path":"Demo,Tomcat,TX_EXAMPLE,NAIT_AFS_BookingInterface--AFS_LP_BookingInterface_ResponseMessage","dataSize"
:450876},
{"path":"Demo,Tomcat,TX_EXAMPLE,NAIT_AFS_BookingInterface--LP_AFS_BookingInterface_RequestMessage","dataSize":3
74061},
{"path":"Demo,Tomcat,TX_EXAMPLE,ICV_Customer_Interface--ICV_Get_Customer_Request","dataSize":900789},{"path":"De
mo,Tomcat,TX_EXAMPLE,ICV_Customer_Interface--ICV_Get_Customer_Response","dataSize":865204},
{"path":"Demo,Tomcat,TX_EXAMPLE,NESS-LPNameRequesttoNESS","dataSize":4298295},{"path":"Demo,Tomcat,TX_EXAM
PLE,NESS-NESSNameResponsetoLP","dataSize":6681574}],"AppTags":"AppFinTech_LoadLabA"},
{"totalLogs":14,"collectedLogs":5,"unCollectedLogsData":[{"path":"Demo,Linux,instance1,messages","dataSize":86974743},{"pat
h":"Demo,Linux,instance1,Mail","dataSize":23821324},{"path":"ID,OS,log,boot","dataSize":0},
{"path":"ID,OS,log,dracut","dataSize":0},{"path":"ID,OS,log,yum","dataSize":0},{"path":"ID,OS,log,tomcat,catalina
log","dataSize":0},{"path":"ID,OS,log,tomcat,catalina out","dataSize":0},
{"path":"ID,OS,log,tomcat,localhost","dataSize":0},{"path":"ID,OS,log,tomcat,localhost_access_log","dataSize":0}],"AppTags":"Li
nux"},
{"totalLogs":1,"collectedLogs":0,"unCollectedLogsData":[{"path":"CloudXpoLog,log,apt,history.log","dataSize":0}],"AppTags":"DA
STLab"},
{"totalLogs":3,"collectedLogs":0,"unCollectedLogsData":[{"path":"Example Applications,WebLogic
10.0,wl_server,examplesServer,examplesServer","dataSize":1003895},
{"path":"Example Applications,WebLogic 10.0,wl_server,examplesServer,access","dataSize":698},{"path":"Example
Applications,WebLogic 10.0,xplg,xplg","dataSize":8033}],"AppTags":"Weblogic 10.0"},
{"totalLogs":1,"collectedLogs":0,"unCollectedLogsData":[{"path":"CloudXpoLog,log,apt,history.log","dataSize":0}],"AppTags":"VO
LoadTesting"},
{"totalLogs":9,"collectedLogs":0,"unCollectedLogsData":[{"path":"Example Applications,WebSphere
6.1.0.0,AppSrv01,SERVERWINNode01,server1,http_error","dataSize":6201816},
{"path":"Example Applications,WebSphere
6.1.0.0,AppSrv01,SERVERWINNode01,server1,http_access","dataSize":1411665},{"path":"Example Applications,WebSphere
6.1.0.0,AppSrv01,SERVERWINNode01,server1,SystemErr","dataSize":178297},
{"path":"Example Applications,WebSphere
6.1.0.0,AppSrv01,SERVERWINNode01,server1,SystemOut","dataSize":663867},{"path":"Example Applications,WebSphere
6.1.0.0,WebSphere Merge Log Example","dataSize":0},
{"path":"Example Applications,WebSphere 6.1.0.0,AppSrv02,SERVERWINNode02,server1,http_error","dataSize":3100908},
{"path":"Example Applications,WebSphere 6.1.0.0,AppSrv02,SERVERWINNode02,server1,http_access","dataSize":1411665},
{"path":"Example Applications,WebSphere 6.1.0.0,AppSrv02,SERVERWINNode02,server1,SystemErr","dataSize":89095},
{"path":"Example Applications,WebSphere
6.1.0.0,AppSrv02,SERVERWINNode02,server1,SystemOut","dataSize":67255}],"AppTags":"WebSphere 6.1"},
{"totalLogs":12,"collectedLogs":5,"unCollectedLogsData":[{"path":"ID,OS,log,boot","dataSize":0},{"path":"ID,OS,log,dracut","data
Size":0},{"path":"ID,OS,log,yum","dataSize":0},
{"path":"ID,OS,log,tomcat,catalina log","dataSize":0},{"path":"ID,OS,log,tomcat,catalina
out","dataSize":0},{"path":"ID,OS,log,tomcat,localhost","dataSize":0},
{"path":"ID,OS,log,tomcat,localhost_access_log","dataSize":0}],"AppTags":"ID"}]}}
5. maxNumberOfResults = number, the maximum number of results to return. Relevant only when detailsLevel=Detailed is specified (default = 100)
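The JSON returned by this API is easy to post-process in a script. A POSIX shell sketch (the sample JSON is abbreviated from the Basic-level response above; no external JSON tooling is assumed) that lists folders with no collected logs:

```shell
#!/bin/sh
# Sketch: extract the folders with zero collected logs from a Basic-level
# collectedDataInfo response. The sample below is abbreviated from the example
# output above; in practice the JSON would come from a curl call such as:
#   curl -s "http://localhost:30303/logeye/message/messageJsonApi.jsp?api=collectedDataInfo&type=Folders&timeFrame=last24Hours&detailsLevel=Basic"
SAMPLE='{"data":{"collectionData":[{"totalLogs":12,"Folders":"XpoLog System Logs","collectedLogs":5},{"totalLogs":7,"Folders":"Example Logs","collectedLogs":0},{"totalLogs":5,"Folders":"Linux OS","collectedLogs":4}]}}'

# Split the object stream on '}' and keep entries whose collectedLogs is 0
UNCOLLECTED=$(printf '%s' "$SAMPLE" | tr '}' '\n' \
  | grep '"collectedLogs":0' \
  | sed -n 's/.*"Folders":"\([^"]*\)".*/\1/p')
echo "$UNCOLLECTED"
```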
XpoLog SDK
General
XpoLog SDK provides a set of commands that enables remote configuration of different XpoLog properties without accessing the GUI.
The following commands are available.
Connection
Settings
Ports (HTTP, SSL, SHUTDOWN and AJP)
Enable/Disable Security
Enable/Disable Agent Mode
Security
Add/Edit user
Accounts Management
Add/Edit/Remove/Enable/Disable remote XpoLog account
Add/Edit/Remove/Enable/Disable SSH account
Add/Edit/Remove/Enable/Disable AWS S3 account
Tasks Management
Add/Edit/Execute Add Logs Directory task
Add/Edit/Execute LogSync task
Folders and Logs Management
Add Log
Add Folder
Remove existing folder/log
AppTags
Apply a Time Zone for AppTags
Restart
Apply License
Apply Patch
Requirements
XpoLog Client Jar (download here)
XpoLog keystore file - mandatory when using HTTPS; extract the zip in the same directory as the xpologClient.jar file and ensure a file
named .keystore exists in that location after extraction (download here)
Java on the client machine that executes the commands
Connectivity (HTTP/S) between the client machine that executes the commands and the XpoLog server
Syntax
Connection
To execute remote commands, you must first provide connection parameters for the XpoLog instance:
Connection Parameters
Key Description Values
xpologURL The URL to the XpoLog instance URL Mandatory
user Authentication user name Text Optional (Mandatory if security is enabled)
password Authentication password Text Optional (Mandatory if security is enabled)
Example of base command to connect to a remote XpoLog:
java -cp xpologClient.jar com.xpolog.sdk.client.XpoLogSDKClient -xpologURL http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye -user USER_NAME -password PASSWORD
When using scripts, it is recommended to set the above as variables, since they are required by every command that is executed:
Windows:
set JAVA_RUN=java -cp xpologClient.jar com.xpolog.sdk.client.XpoLogSDKClient
set XPOLOG_CONN=-xpologURL http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye -user USER_NAME -password PASSWORD
Linux:
JAVA_RUN="java -cp xpologClient.jar com.xpolog.sdk.client.XpoLogSDKClient"
XPOLOG_CONN="-xpologURL http://[MACHINE_NAME]:[XPOLOG_PORT]/logeye -user USER_NAME -password PASSWORD"
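With these variables defined, every SDK call reduces to the connection prefix plus an -api command. A sketch of a Linux wrapper function (the jar path, host, port, and credentials below are placeholder assumptions):

```shell
#!/bin/sh
# Sketch: wrap the connection boilerplate once and reuse it for every SDK call.
# The jar path, host, port and credentials below are placeholders.
JAVA_RUN="java -cp xpologClient.jar com.xpolog.sdk.client.XpoLogSDKClient"
XPOLOG_CONN="-xpologURL http://xpolog.example.com:30303/logeye -user USER_NAME -password PASSWORD"

xpolog_sdk() {
  # Word-splitting of the two variables into separate arguments is intentional
  $JAVA_RUN $XPOLOG_CONN "$@"
}

# Example invocations (commented out - they need a reachable XpoLog server):
#   xpolog_sdk -api settings -httpPort 30304
#   xpolog_sdk -api restart
```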
After a connection is established, the following commands may be executed against the connected XpoLog instance:
General Settings and Security Commands
Settings Parameters
Key Description Values
api The API type to use – must be settings “settings” Mandatory
httpPort The HTTP port XpoLog is listening on Number Optional
sslPort The SSL port XpoLog is listening on Number Optional
shutdownPort The server’s shutdown port Number Optional
ajpPort The server’s ajp port Number Optional
agentMode Enable/Disable agent mode true/false Optional
activateSecurity Enable/Disable security true/false Optional
activateSystemTimeZone Set the XpoLog's user Time Zone Mode to System (Default) true/false Optional
activateDynamicTimeZone Set the XpoLog's user Time Zone Mode to Dynamic true/false Optional
activateAppTagsTimeZone Set the XpoLog's user Time Zone Mode to per AppTags true/false Optional
Example of configuring ports:
%JAVA_RUN% %XPOLOG_CONN% -api settings -httpPort 30304 -sslPort 30444 -ajpPort 8010 -shutdownPort 8096 -agentMode true -activ
ateSecurity true
Example of changing the user time zone mode (to per AppTags):
%JAVA_RUN% %XPOLOG_CONN% -api settings -activateAppTagsTimeZone true
Security Users Parameters
Key Description Values
api The API type to use – must be securityUsers “securityUsers” Mandatory
name The user name of the user Text Mandatory
userPassword The user password Text Mandatory for new
displayName The display name of the user Text Mandatory for new
override Override an existing user (Default: false) true/false Optional
userPolicy The policy name to associate to this user Text Optional
selectedGroupsList The names of the selected groups to associate with this user Text List (separate by ;) Optional
Example of adding a new user and setting its properties:
%JAVA_RUN% %XPOLOG_CONN% -api securityUsers -name testUser -userPassword testPassword -displayName "TEST USER" -override
true -userPolicy test -selectedGroupsList testgroup;All
Accounts Management
Remove Account
Key Description Values
api The API type to use – must be removeAccount “removeAccount” Mandatory
name The name of the account to be removed Text Mandatory
Example of removing an account:
%JAVA_RUN% %XPOLOG_CONN% -api removeAccount -name "ACCOUNT_NAME"
Disable/Enable Account
Key Description Values
api The API type to use – must be enableAccount “enableAccount” Mandatory
name The name of the account to be enabled/disabled Text Mandatory
enabled Enable/Disable the account true/false Mandatory
Example of disabling an account:
%JAVA_RUN% %XPOLOG_CONN% -api enableAccount -name "ACCOUNT_NAME" -enabled false
Add SSH Account Parameters
Key Description Values
api The API type to use – must be addSSHAccount “addSSHAccount” Mandatory
name The name of the account Text Mandatory
description The description of the account Text Optional
hostName Host Name Text Mandatory for new
conType The connection type (Default: SFTP) SFTP/SCP Optional
port The port to be used in the account (Default: 22) Number Optional
override Override an existing account (Default: false) true/false Optional
enabled Enable/Disable the account true/false Optional
privateKeyPath Full Path to Key Text Optional
username Authentication user name Text Optional
userPassword Authentication password Text Optional
Example for adding an SSH account:
%JAVA_RUN% %XPOLOG_CONN% -api addSSHAccount -name "ACCOUNT_NAME" -hostName MACHINE_IP -conType SFTP -port 22 -ov
erride true -enabled true -privateKeyPath "" -username USER_NAME -userPassword PASSWORD
Add Remote XpoLog Account Parameters
Key Description Values
api The API type to use – must be addRemoteXpoLogAccount “addRemoteXpoLogAccount” Mandatory
name The name of the account Text Mandatory
description The description of the account Text Optional
hostName Host Name Text Mandatory for new
conType The connection type (Default: HTTP) HTTP/HTTPS Optional
override Override an existing account (Default: false) true/false Optional
enabled Enable/Disable the account true/false Optional
isCollected False – Proxy mode, True – Agent Mode (Default: true) true/false Optional
username Authentication user name Text Optional
userPassword Authentication password Text Optional
Example of adding a remote XpoLog account:
%JAVA_RUN% %XPOLOG_CONN% -api addRemoteXpoLogAccount -name "ACCOUNT_NAME" -hostName MACHINE_IP -conType HTTP -
port 30303 -context logeye -override true -enabled true -isCollected true -username admin -userPassword admin
Add AWS S3 Account Parameters
Key Description Values
api The API type to use – must be addExternalAccount “addExternalAccount” Mandatory
name The name of the account Text Mandatory
description The description of the account Text Optional
externalMediaType The type of the account s3 Mandatory (lowercase only)
ema_custom_accessKey The AWS S3 access key Text Mandatory
ema_custom_secretKey The AWS S3 secret key Text Mandatory
override Override an existing account (Default: false) true/false Optional
enabled Enable/Disable the account true/false Optional
Example of adding an AWS S3 account:
%JAVA_RUN% %XPOLOG_CONN% -api addExternalAccount -externalMediaType "s3" -name "ACCOUNT_NAME" -description "ACCOUNT_
DESCRIPTION" -override true -ema_custom_accessKey "ACCOUNT_ACCESS_KEY" -ema_custom_secretKey "ACCOUNT_SECRET_KEY"
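Because each account command is a single CLI invocation, registering many servers is just a loop. A sketch (the host list and credentials are hypothetical; the actual SDK call is left commented out since it needs a live server):

```shell
#!/bin/sh
# Sketch: register one SSH account per host. JAVA_RUN and XPOLOG_CONN are the
# connection variables defined in the Syntax section; the hosts and credentials
# below are placeholders.
HOSTS="10.0.0.11 10.0.0.12 10.0.0.13"
for HOST in $HOSTS; do
  NAME="ssh-$HOST"
  echo "adding account $NAME"
  # Linux form of the addSSHAccount command shown above:
  # $JAVA_RUN $XPOLOG_CONN -api addSSHAccount -name "$NAME" -hostName "$HOST" \
  #   -conType SFTP -port 22 -override true -enabled true \
  #   -username USER_NAME -userPassword PASSWORD
done
```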
Tasks Management
Execute Task Parameters
Key Description Values
api The API type to use – must be executeTask “executeTask” Mandatory
name The name of the task (case sensitive) Text Mandatory
Example for executing a task:
%JAVA_RUN% %XPOLOG_CONN% -api executeTask -name "TASK_NAME"
Execution of a Logs Directory Task (Scanner)
Key Description Values
api The API type to use – must be executeScanTask “executeScanTask” Mandatory
name The name of the task, presented in the XpoLog logs / Activity console while running (recommended) Text Optional
id The id of the task - used to avoid re-creation of logs which were already created by an SDK command (recommended) Text Optional
parentFolderPath Determines under which folder to create/update logs created by the SDK command (Default: Folders and Logs). The folder is created if it does not exist. ROOT = Top Folder (Folders and Logs); use ‘->’ in the path between folders Optional
accountName The connectivity account to use if the scan is not local Text Mandatory for SSH
scanPath The full path to scan (local or on the remote source after connection is established) Text Mandatory
Scan Parameters The SDK supports all the 'ScanConfiguration' parameters; use -PARAM_NAME PARAM_VALUE in the command Text Optional
Example of executing a scan directory operation:
%JAVA_RUN% %XPOLOG_CONN% -api executeScanTask -name "SCANNER_SDK" -id "SCAN12345" -parentFolderPath "ROOT->NEW_PA
RENT_FOLDER" -accountName ACCOUNT_NAME -scanPath "/var/log/" -scanMethod 0 -timeZone GMT -directoriesToInclude "DIR1,DIR2"
-filesToExclude "*.zip,*.gzip,*.tar*"
Add Logs Directory Task (Scanner)
Key Description Values
api The API type to use – must be addScanTask “addScanTask” Mandatory
name The name of the task, presented in the XpoLog logs / Activity console while running (recommended) Text Optional
id The id of the task - used to avoid re-creation of logs which were already created by an SDK command (recommended) Text Optional
parentFolderPath Determines under which folder to create/update logs created by the SDK command (Default: Folders and Logs). The folder is created if it does not exist. ROOT = Top Folder (Folders and Logs); use ‘->’ in the path between folders Optional
accountName The connectivity account to use if the scan is not local Text Mandatory for SSH
scanPath The full path to scan (local or on the remote source after connection is established) Text Mandatory
Scan Parameters The SDK supports all the 'ScanConfiguration' parameters; use -PARAM_NAME PARAM_VALUE in the command Text Optional
cron Unix cron expression format cron expression format Optional
override Override an existing task (Default: false) true/false Optional
Example of adding a scan directory task:
%JAVA_RUN% %XPOLOG_CONN% -api addScanTask -name "SCANNER_SDK" -id "SCAN12345" -parentFolderPath "ROOT->NEW_PARE
NT_FOLDER" -accountName ACCOUNT_NAME -scanPath "/var/log/" -scanMethod 0 -timeZone GMT -directoriesToInclude "DIR1,DIR2" -fil
esToExclude "*.zip,*.gzip,*.tar*" -cron "0 * * * * ? *" -assignedCollectionPolicy "Default"
Add Log Sync Task Parameters
Key Description Values
api The API type to use – must be addSyncLogTask “addSyncLogTask” Mandatory
name The name of the task Text Mandatory
configFilePath The absolute path to the LogSync configuration file to be used by this task Text Mandatory
createConfiguration Determines whether to create Folders and Logs configuration from the synched logs (Default: false) true/false Optional
parentFolderPath The parent folder path of this task’s result; Folders and Logs is the default path Use ‘->’ in the path between folders Optional
cron Unix cron expression format cron expression format Optional
assignedNode The name of the XpoLog node to be assigned on this task Text Optional
override Override an existing task (Default: false) true/false Optional
Example of adding a Log Sync task:
%JAVA_RUN% %XPOLOG_CONN% -api addSyncLogTask -name "New Log Sync Test" -configFilePath C:\dev\syncLogsTest.xml -override tr
ue -cron "0/10 * * * * ? *"
Folders and Logs Management
Add Log
Key Description Values
api The API type to use – must be addLog “addLog” Mandatory
logName The name of the log to be created Text Mandatory
logId The unique ID of a log in XpoLog that can be modified. If logId is used, it is mandatory Text Optional
to use override true, since this is an edit mode of an existing log in XpoLog - all the
parameters that are passed in this command will override and update the existing log.
logPath The full path to the log under the Folders and Log Tree (excluding the log name) ROOT = Top Folder Optional
(Folders and Logs) Use
‘->’ in the path between
folders
filesPath The full path to the files on the source (name pattern may be used) Text Mandatory
collectionPolicy The exact name of the collection policy to be assigned on the log (if it doesn't exist, the Text Optional
command will be ignored)
accountName The name of the account to be used if needed (SSH or Win Account) Text Direct Access -
Optional
Win Network/SSH -
Mandatory
patterns A list of patterns that will be applied on the log that is added (separated by XPLG_SDK_SEP) Text Mandatory* (optional if a template is used)
appTags A comma separated list of appTags that the added log will be tagged to Text Optional
timezone The timezone definition of the added log Text (a value from JAVA Optional
time zone list)
charset The charset definition of the added log Text (a value from JAVA Optional
charset list)
template The name of the template to be used Text Optional* (specific parameters that are passed override the template's settings)
dataFilterQuery The dataFilterQuery to be applied on the specified - see Advanced Log Settings for more Text Optional
information. (pass an empty filter to clear an existing filter)
override Overwrite an existing log configuration (Default: false) true/false Optional
Example for adding a log over SSH using an existing account (for direct access simply remove the -accountName parameter):
%JAVA_RUN% %XPOLOG_CONN% -api addLog -logName "LOG_NAME" -logPath "ROOT->FOLDER_1->FOLDER_2" -filesPath "c:\LogFiles
\messages{string}" -patterns " {date:Date,dd/MM/yyyy HH:mm:ss.SSSSSS} {text:priority} {string:message}XPLG_SDK_SEP{date:Date,dd/MM/yy
yy HH:mm:ss.SSS} {text:priority} {string:message}" -appTags "APP_TAG_NAME_1,APP_TAG_NAME_2"
%JAVA_RUN% %XPOLOG_CONN% -api addLog -logName "LOG_NAME" -logPath "ROOT->FOLDER_1->FOLDER_2" -filesPath "c:\LogFiles
\messages{string}" -template "LOG_TEMPLATE_NAME"
Add Folder
Key Description Values
api The API type to use – must be addFolder “addFolder” Mandatory
folderPath The full path to the folder to be added ROOT = Top Folder (Folders and Logs) Use ‘->’ in the path between folders Mandatory
Example for adding an empty folder:
%JAVA_RUN% %XPOLOG_CONN% -api addFolder -folderPath "ROOT->FOLDER_1->FOLDER_2->FOLDER_NAME_TO_BE_ADDED"
Remove Folder
Key Description Values
api The API type to use – must be removeMember “removeMember” Mandatory
folderPath The full path to the folder to be removed ROOT = Top Folder (Folders and Logs) Use ‘->’ in the path between folders Mandatory
Example for removing a folder (and all its contents):
%JAVA_RUN% %XPOLOG_CONN% -api removeMember -folderPath "ROOT->FOLDER_1->FOLDER_2->FOLDER_NAME_TO_BE_REMOVE
D"
Remove Log
Key Description Values
api The API type to use – must be removeMember “removeMember” Mandatory
logPath The full path to the log to be removed ROOT = Top Folder (Folders and Logs) Use ‘->’ in the path between folders Mandatory
Example for removing a log:
%JAVA_RUN% %XPOLOG_CONN% -api removeMember -logPath "ROOT->FOLDER_1->FOLDER_2->LOG_NAME_TO_BE_REMOVED"
License Parameters
Key Description Values
api The API type to use – must be license “license” Mandatory
files The path (relative to execution location or absolute path) to the license file which will be updated Text Mandatory
Example of applying a license:
%JAVA_RUN% %XPOLOG_CONN% -api license -files license.lic
AppTags Parameters
Key Description Values
api The API type to use – must be settings “settings” Mandatory
appTags A comma separated list of AppTags names (exactly as defined in XpoLog) Text Mandatory
(The time zone will be applied only if the general User Time Zone Mode in XpoLog is set to 'AppTags')
timeZone A single time zone from the JAVA available time zones Text Mandatory
(The time zone should be exactly as it appears in the time zones list; if XpoLog cannot find the given value, the
default system time zone will be applied automatically.
Use "Default" to apply the system default time zone)
Example of applying a time zone on an AppTag:
%JAVA_RUN% %XPOLOG_CONN% -api settings -timeZone "America/New_York" -appTags "APPTAG1, APPTAG2"
Restart Parameters
Key Description Values
api The API type to use – must be restart “restart” Mandatory
Example of restarting XpoLog:
%JAVA_RUN% %XPOLOG_CONN% -api restart
Publish Patch Task Parameters
Key Description Values
api The API type to use – must be addPatch “addPatch” Mandatory
files The path (relative to execution location or absolute path) to the patch file Text Mandatory
type Patch type – must be “api” “api” Mandatory
Example of applying a patch:
%JAVA_RUN% %XPOLOG_CONN% -api addPatch -type api -files patch.zip
Comments:
1. Any value which contains the space character should be wrapped with “quotes”. For example, if the display name of a user is TEST
USER, then it should be wrapped with quotes as:
... -displayName "TEST USER" ...
2. General Script Example: download here
Common Use Case:
1. Important: in all the examples above we have used %JAVA_RUN% %XPOLOG_CONN%, which is suitable for Windows environments. For
Linux, the SDK script should use $JAVA_RUN $XPOLOG_CONN
2. Automatically Add Servers to XpoLog:
Automates the process of adding new servers to XpoLog, mainly in dynamic environments such as clouds where servers are constantly
added and removed. The SDK provides full support, so when a new machine is added, the new server can be added to XpoLog with just a
couple of commands, and the required logs are collected and made available to the users. See Script Example: download here
3. Automatically Remove / Disable Servers in XpoLog:
a. Disable a server but keep the data that was already collected
If a server is removed from the environment but you wish to keep the data that was already collected by XpoLog from that server,
simply disable the account of that specific server so that XpoLog will not try to connect to it but will keep the data, based on the
retention policy. See Script Example: download here
b. Remove a server and the data that was collected from it
If a server is removed from the environment and you wish to remove it from XpoLog, including all data, then simply remove the
account of that server and the folder which contains all its logs. See Script Example: download here
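The command pattern used throughout the examples above can be wrapped in a small helper on Linux. This is a sketch only: the default values of JAVA_RUN and XPOLOG_CONN below are placeholders (assumptions), not the actual definitions; take the real values from the SDK script shipped with XpoLog. The helper only prints the command line it would run (a dry run):

```shell
#!/bin/sh
# Sketch of a Linux wrapper for the XpoLog SDK calls shown above.
# JAVA_RUN and XPOLOG_CONN defaults are PLACEHOLDER ASSUMPTIONS --
# take the real definitions from the SDK script shipped with XpoLog.
JAVA_RUN="${JAVA_RUN:-java -jar xpolog-sdk.jar}"
XPOLOG_CONN="${XPOLOG_CONN:--url http://localhost:30303}"

# Build (but do not execute) a command line of the documented form:
#   $JAVA_RUN $XPOLOG_CONN -api <name> [arguments...]
build_sdk_command() {
  api="$1"
  shift
  printf '%s %s -api %s %s\n' "$JAVA_RUN" "$XPOLOG_CONN" "$api" "$*"
}

# The documented examples, rendered as dry-run command lines:
build_sdk_command restart
build_sdk_command license -files license.lic
build_sdk_command addPatch -type api -files patch.zip
```

Removing the dry-run indirection (executing the printed line) would run the call against the XpoLog instance.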
Batch Configuration
A batch configuration allows performing a set of changes on the log configuration; the set of changes includes the Log Type and the Field Type.
Usage of Batch Configuration accelerates and automates the configuration process of logs added to or updated in XpoLog.
Administrators can perform the following actions related to batch configuration:
View Log Type/Field Type (see Viewing XpoLog Batch Configuration)
Add a new Log Type/Field Type (see Adding a new batch Configuration)
Apply a Log Type/Field Type on multiple logs (see Applying a Batch Configuration on Multiple Logs)
Adding a New Batch Configuration
You can define a new XpoLog batch configuration.
To add a new XpoLog batch configuration:
In XpoLog Manager, select the Tools > Batch Configuration menu item.
The Batch Configuration console opens, displaying an alphabetically sorted list of batch types. The first letters of the type names in the
list are highlighted in the Filtering Area above the list.
Quickly Adding a Log Type on Multiple Logs
The Filtering Area above the lists of log types enables quick navigation to a log type beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many batch types.
To quickly add a log type:
In Filter List, write the first letters of the source and then the first letters of the log type
Click on Apply
Click on New
In the field box, write the name of the Log Type
Check the required logs in the tree
Click on Save.
Quickly Adding a Field Type on Multiple Logs
The Filtering Area above the lists of field types enables quick navigation to a field type beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many batch types.
To quickly add a field type:
In Filter List, write the first letters of the source and then the first letters of the field type
Click on Apply
Click on New
In the field box, write the name of the Field Type
Check the required fields in the tree
Click on Save.
Applying a Batch Configuration on Multiple Logs
You can apply the batch configuration on multiple logs.
To apply the XpoLog batch configuration on multiple logs:
In XpoLog Manager, select the Tools > Batch Configuration menu item.
The Batch Configuration console opens, displaying an alphabetically sorted list of batch types. The first letters of the type names in the
list are highlighted in the Filtering Area above the list.
Quickly Applying a Log Type on Multiple Logs
The Filtering Area above the lists of log types enables quick navigation to a log type beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many batch types.
To quickly apply a log type:
In Filter List, write the first letters of the source and then the first letters of the log type
Click on Apply
Click on the required Log Type
Check the logs to be applied
Click on Save
Quickly Applying a Field Type on Multiple Logs
The Filtering Area above the lists of field types enables quick navigation to a field type beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many batch types.
To quickly apply a field type:
In Filter List, write the first letters of the source and then the first letters of the field type
Click on Apply
Click on the required Field Type
Check the fields to be applied
Click on Save
Viewing XpoLog Batch Configuration
You can view a listing of all batch configurations defined in XpoLog.
To view the XpoLog batch configuration:
In XpoLog Manager, select the Tools > Batch Configuration menu item.
The Batch Configuration console opens, displaying an alphabetically sorted list of batch types. The first letters of the type names in the
list are highlighted in the Filtering Area above the list.
Quickly Navigating to a Log Type
The Filtering Area above the lists of log types enables quick navigation to a log type beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many batch types.
To quickly navigate to a log type:
In Filter List, write the first letters of the source and then the first letters of the log type
Click on Apply
Quickly Navigating to a Field Type
The Filtering Area above the lists of field types enables quick navigation to a field type beginning with a specific letter. The letters which begin
the template names in the list are highlighted in the Filtering Area. This is a convenient feature in systems that have many batch types.
To quickly navigate to a field type:
In Filter List, write the first letters of the source and then the first letters of the field type
Click on Apply
Geo Redundancy
XpoLog contains a mechanism that replicates configuration between a Primary and a Secondary site in order to keep them synchronized for
disaster recovery purposes.
The Geo-redundancy mechanism is designed to create a replication of selected parts of XpoLog's configuration between the primary
geographical location and the secondary geographical location on which an XpoLog instance/cluster runs. In the event of a complete regional
outage or disaster in your primary location, if users are redirected to the secondary site, the entire set of configuration will be available,
identical to the primary site.
XpoLog does not replicate raw data between sites, nor does it copy the configuration files over on its own. Organizations that wish to activate
and use the Geo-redundancy mechanism should have in place a process that copies the raw data to be processed on both sites, and the
zipped configuration files created by the Geo-redundancy mechanism on each site, to their designated locations on the other site.
The high level logic of the Geo-redundancy mechanism is as follows:
Each site contains 'Inbox' and 'Outbox' directories. Based on the configured synchronization schedule, XpoLog creates a zipped configuration
file which is placed in the site's 'Outbox' directory, and checks if there is a new file in the site's 'Inbox' directory, merging its contents into its local
configuration. It is the organization's responsibility, on both sites, to copy the contents of each site's 'Outbox' directory to the other site's 'Inbox'
directory, preserving the files' last-modified time.
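The organization-side copy step just described can be sketched as a small script. The two directory paths are illustrative assumptions; the important detail is `cp -p`, which preserves each file's last-modified time, as the mechanism requires:

```shell
#!/bin/sh
# Sketch: copy the local site's 'Outbox' contents to the other site's
# 'Inbox', preserving last-modified times. Both paths are assumptions --
# substitute the directories configured in configMerge.user.xml.
LOCAL_OUTBOX="${LOCAL_OUTBOX:-/xpolog/georedundancy/outbox}"
REMOTE_INBOX="${REMOTE_INBOX:-/mnt/other-site/georedundancy/inbox}"

sync_outbox_to_inbox() {
  src="$1"
  dst="$2"
  mkdir -p "$dst"
  for f in "$src"/*; do
    [ -e "$f" ] || continue   # skip when the Outbox is empty
    cp -p "$f" "$dst"/        # -p keeps the last-modified time
  done
}

# Only run when the local Outbox actually exists (e.g. from a cron job).
if [ -d "$LOCAL_OUTBOX" ]; then
  sync_outbox_to_inbox "$LOCAL_OUTBOX" "$REMOTE_INBOX"
fi
```

A symmetric job on the other site covers the reverse direction.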
Geo-redundancy Configuration
On each XpoLog instance/cluster there are 2 files which determine the synchronization schedule and contents:
/.../XPOLOG_CONF/conf/general/configMerge.xml
This file determines the contents of the configuration that will be included in the sync process between sites.
InputsInfo
filesToKeep - determines how many files XpoLog should keep in its 'Inbox' directory.
Includes - the list of local configuration directories that will be included in the zipped configuration file that is created. By
default, all user-oriented configuration (available in the GUI) is included:
/.../XPOLOG_CONF/conf/general/addressbook.xml - connectivity accounts from the address book.
/.../XPOLOG_CONF/conf/logsconf/* and /.../XPOLOG_CONF/conf/modules/* - Folders and Logs.
/.../XPOLOG_CONF/conf/metadata/* - appTags.
/.../XPOLOG_CONF/conf/verifiers/* - monitors.
/.../XPOLOG_CONF/conf/usersenv/* - saved searches and user's specifics.
/.../XPOLOG_CONF/conf/ui/apps/deploy/* - all applications (including their dashboards and gadgets).
/.../XPOLOG_CONF/conf/ext/templates/user/* - logs templates.
/.../XPOLOG_CONF/collection/conf/system/system.xml - collection policies.
Excludes - specify directories to exclude from the above list.
BinaryIncludes - handles binary configuration data. Currently relevant only to user's specifics and user-defined
templates.
OutputInfo
filesToKeep - determines how many files XpoLog should keep in its 'Outbox' directory.
/.../XPOLOG_CONF/conf/general/configMerge.user.xml
This file determines the schedule and the locations of the 'Inbox'/'Outbox' directories.
ScheduleOp - the scheduler settings which determine the frequency of executing synchronization. The only change should be
the cron expression.
Note: commenting out the ScheduleOp object pauses the synchronization process (a node(s) restart is required).
InputInfo
path - the absolute path to the site's 'Inbox' directory.
OutputInfo
path - the absolute path to the site's 'Outbox' directory.
Geo-redundancy Activation
Activation of Geo-redundancy is divided into 2 steps. There is a need to run an initial replication (baseline) so that both sites will be
synchronized; afterwards, the ongoing synchronization process takes place.
It is mandatory to go through the baseline process and validate it prior to activating the ongoing sync, in order to ensure a valid synchronization
process and to avoid configuration duplications.
1. Geo-redundancy baseline creation
a. Prerequisite:
i. The Primary site contains configuration and the Secondary site contains no configuration at all.
ii. Disable Geo-redundancy on both sites.
iii. Stop XpoLog instance/cluster on both sites.
b. Manually copy the directories/files that are part of the sync from the Primary site to the Secondary site (overriding existing content):
i. /.../XPOLOG_CONF/conf/general/addressbook.xml
ii. /.../XPOLOG_CONF/conf/logsconf/* and /.../XPOLOG_CONF/conf/modules/*
iii. /.../XPOLOG_CONF/conf/metadata/*
iv. /.../XPOLOG_CONF/conf/verifiers/*
v. /.../XPOLOG_CONF/conf/usersenv/*
vi. /.../XPOLOG_CONF/conf/ui/apps/deploy/*
vii. /.../XPOLOG_CONF/conf/ext/templates/user/*
viii. /.../XPOLOG_CONF/collection/conf/system/system.xml
c. Start the XpoLog instance/cluster on both sites and validate that all the configuration is available and identical.
2. Ongoing synchronization first activation
a. Prerequisite:
i. Baseline procedure completed and validated.
b. Ensure the files /.../XPOLOG_CONF/conf/general/configMerge.xml and /.../XPOLOG_CONF/conf/general/configMerge.user.xml
are properly configured on each site with the correct local path of each site to its 'Inbox' and 'Outbox' directories, scheduler, sync
contents, etc.
c. Ensure on each site that its 'Inbox' and 'Outbox' directories are available and empty.
d. Enable Geo-redundancy on both sites.
e. Ensure there's a process in place that copies:
i. Raw data from the Primary site to the required location on the Secondary site.
ii. Each site's 'Outbox' contents to the other site's 'Inbox' and vice versa (preserving the files' last-modified time).
3. Once the procedure is completed and active, all changes made on either site will be synced to the other site.
Configuration changes to the same object in XpoLog should take place on one site only, and the Geo-redundancy mechanism will sync
and merge them into the other site (keep in mind: if the same object is modified on both sites, the last change will be applied, based on
the file's last-modified time).
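The manual copy in baseline step 1.b can be sketched as a script over the listed directories and files. The two XPOLOG_CONF root paths are illustrative assumptions; the relative items mirror steps 1.b.i–1.b.viii, and the copy assumes the Secondary side is empty, per the prerequisite:

```shell
#!/bin/sh
# Sketch of baseline step 1.b: copy the synchronized configuration from
# the Primary site's XPOLOG_CONF to the Secondary site's XPOLOG_CONF.
# Both root paths are assumptions -- substitute your real locations.
PRIMARY_CONF="${PRIMARY_CONF:-/primary/XPOLOG_CONF}"
SECONDARY_CONF="${SECONDARY_CONF:-/secondary/XPOLOG_CONF}"

# Relative paths listed in steps 1.b.i - 1.b.viii
BASELINE_ITEMS="conf/general/addressbook.xml
conf/logsconf
conf/modules
conf/metadata
conf/verifiers
conf/usersenv
conf/ui/apps/deploy
conf/ext/templates/user
collection/conf/system/system.xml"

copy_baseline() {
  src_root="$1"
  dst_root="$2"
  printf '%s\n' "$BASELINE_ITEMS" | while IFS= read -r item; do
    [ -e "$src_root/$item" ] || continue
    mkdir -p "$(dirname "$dst_root/$item")"
    cp -rp "$src_root/$item" "$(dirname "$dst_root/$item")/"
  done
}

if [ -d "$PRIMARY_CONF" ]; then
  copy_baseline "$PRIMARY_CONF" "$SECONDARY_CONF"
fi
```

`cp -rp` copies directories recursively while keeping timestamps, matching the validation in step 1.c that both sites end up identical.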
User Guide
Starting XpoLog
Launching XpoLog
XpoLog can be installed on a Windows or Linux/Solaris machine, or deployed as a Web application (see Installation). After installation or
deployment, XpoLog runs automatically.
For subsequent uses of XpoLog, launch XpoLog as follows:
Under Windows or Linux/Solaris, in your browser, navigate to http://MACHINE_NAME:30303 or https://MACHINE_NAME:30443
Consult the system administrator to ensure the default ports presented above were not changed.
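The two default access points can be expressed as a tiny helper; this sketch assumes the default ports (30303 for HTTP, 30443 for HTTPS) have not been changed:

```shell
#!/bin/sh
# Build the default XpoLog URL for a machine name, assuming the default
# ports (30303 HTTP, 30443 HTTPS) were not changed by the administrator.
xpolog_url() {
  scheme="$1"
  host="$2"
  case "$scheme" in
    http)  port=30303 ;;
    https) port=30443 ;;
    *)     echo "unknown scheme: $scheme" >&2; return 1 ;;
  esac
  printf '%s://%s:%d\n' "$scheme" "$host" "$port"
}

xpolog_url http MACHINE_NAME    # -> http://MACHINE_NAME:30303
xpolog_url https MACHINE_NAME   # -> https://MACHINE_NAME:30443
```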
Logging In
If security is not activated in your organization (the default), launching XpoLog Center from your browser automatically starts XpoLog.
If security is activated in your organization, you are required to enter your credentials (username and password).
The default credentials are admin/admin and should be changed immediately after the first login.
To log into XpoLog Center:
1. Navigate to the XpoLog Center address in your browser (http://localhost:30303).
2. If security is activated in your organization, in the login page that appears, type your Username and Password, and then click OK.
The XpoLog Center homepage is presented. See XpoLog Homepage for a description of the homepage elements.
Logging Out
A Logout button appears at the right side of the main menu on any XpoLog page in organizations where security is activated, i.e. where a
username and password were required to log into XpoLog. In this case, clicking the Logout button logs you out of XpoLog Center.
XpoLog Homepage
Upon accessing XpoLog Center, the XpoLog homepage is displayed.
The homepage can also be accessed from any XpoLog console, by clicking the XpoLog logo on the top left side of the screen.
The main elements of the XpoLog homepage are described in the following table:
Element Description
Tab Bar – On the left side, the Apps, Search, and Analytics tabs; on the right side, the Manager tab:
Apps tab: Clicking this tab opens the Apps console (see XpoLog Apps).
Search tab: Clicking this tab opens the Search console (see XpoLog Search).
Analytics tab: Clicking this tab opens the Analytics console (see XpoLog Analytics).
Manager tab (right hand side): Clicking this tab opens the Manager's console (see XpoLog Manager).
XpoLog logo: Clicking the logo returns to the homepage.
Notification Bar – A notification area with a red background, which includes important system notifications, is displayed if necessary;
for example, in cases of insufficient storage, license expiration, slowness, etc.
Main Pane – Displays the dashboard that the system administrator configured to be displayed on the homepage. Each XpoLog user can
define the dashboard that appears on their homepage. See Set a Dashboard as the System Home Page.
Quick Start Div – During an evaluation period, a 'Quick Start' div is displayed with different shortcuts to key operations in XpoLog:
XpoLog Apps
An XpoLog App is a container that contains one or more dashboards. Each dashboard in the App is used to display visual or textual information
from the logs that exist in the XpoLog environment.
Apps and Dashboards simplify and expedite analysis of an Application or Environment. The Dashboards provide live visualization of the data to
quickly expose and understand faults and outages.
XpoLog has an engine that enables customizing multiple dashboards. For example, four dashboards can be defined under an App – one each
for displaying application problems, performance problems, network issues, and security.
Click the Apps tab to open the Apps console.
XpoLog Custom Apps - create any application and dashboard on your data using XpoLog visualization tools.
XpoLog Apps Marketplace - select out-of-the-box applications, predefined sets of dashboards made for you by the XpoLog team, for a quick
and easy deployment of known system applications.
Click the App whose Dashboards you wish to view. You can filter the list of available Apps by typing the name of the App in the search area,
or sort the displayed Apps list by name, recently viewed or most viewed by simply clicking the desired option on the left menu.
XpoLog allows opening multiple Apps/Dashboards in a single browser session. Mousing over the
icon allows quick navigation between the open Apps/Dashboards:
Accessing the XpoLog Apps Console
To access the Apps console:
In the Tab Bar, click the Apps tab.
The Apps console opens, displaying the available Apps. See XpoLog Apps.
XpoLog Dashboards
An XpoLog Center Dashboard is a portal that contains gadgets. Multiple dashboards may be defined under an App context. Each gadget in the
dashboard is used to display visual or textual information from the logs that exist in the XpoLog environment.
Each gadget displays the data that the user requested to view in the gadget's definition. For example, three gadgets can be displayed in a
dashboard for displaying search results, filtered logs and Analytics summary. Gadgets simplify and expedite performing searches and operations
on the log file. For example, instead of going each time to the XpoSearch search engine and running a search, you can define gadgets for
viewing these search results in different visual manners.
XpoLog has an engine that enables customizing multiple dashboards, each for a different purpose. For example, you can define four
dashboards – for application problems, performance problems, network issues, and security.
Each dashboard can contain multiple gadgets, with each gadget displayed in one of the available visualizations: Line Chart, Bar Chart, Column
Chart, Pie Chart, Data Table, Events List, etc. The gadgets can be organized within the dashboard in any of several predefined layouts. Also, any
gadget can be dragged and dropped to a preferred location on the dashboard page.
Each dashboard contains a header bar with the name of the dashboard and different actions on the dashboard (see Dashboard Options). Its
gadgets are arranged in the main pane in the layout selected in the dashboard definition.
To view a dashboard:
1. Click the relevant App under the XpoLog Apps console.
You can filter the list of available dashboards by typing the name of the dashboard in the search area
or sort the displayed dashboards list by name, recently viewed or most viewed by simply clicking the desired option on the left menu.
A summary of the defined dashboards under the selected App is displayed:
2. Click the desired Dashboard to view it.
The selected dashboard is opened. All its gadgets are displayed:
While viewing a dashboard, it is possible to filter the dashboard view or change its time frame on demand.
Dashboard Options
While viewing a dashboard it is possible to filter the view result and/or change the time frame.
Change a Dashboard Time Frame - For changing the default time frame of all gadgets in a dashboard. See Change Dashboard Time
Frame.
Filter Dashboard Result - For adding a constraint to the gadgets' queries on demand to filter the displayed result. See Filter Dashboard
Result.
Use Dashboard Inputs form (if available) - Use a visual form to select values that will reload the dashboard based on the selections. See
Dashboard Inputs.
You can manage a Dashboard in XpoLog by using the options accessible by clicking the
icon on the top right hand side of the Dashboard toolbar.
Available dashboard options are:
Add Gadget – For adding a gadget to the dashboard. See Managing Gadgets in the Administrator's Guide.
Edit Dashboard – For editing the general settings of a dashboard. See Adding a Dashboard in the Administrator's Guide.
Save Layout – If the Dashboard's layout was modified, click this option to save it as the default layout of this dashboard.
Reset Layout - For resetting the Dashboard's layout back to its default in the current display.
Export to PDF – For exporting the dashboard to a PDF file (see Exporting the Dashboard to PDF/CSV).
Export to CSV – For exporting the dashboard to a CSV file (see Exporting the Dashboard to PDF/CSV).
Set as Home Page – For details, see Setting a Dashboard as Homepage in Administrator''s Guide.
Copy Permalink – This option copies to the clipboard a direct link to the dashboard. The link can then be used externally to XpoLog to
present the dashboard's result (for example, in an iFrame in an external portal). Parameters which can be added to the link:
Login credentials - mandatory in case Security is active in XpoLog; a user and password with the credentials to view the
dashboard should be added: &autoLogin=true&username=[USER_NAME]&password=[PASSWORD]
Enable Zoom - optional; dashboards/gadgets contain links to zoom back in to XpoLog to see the result, and by default the
zoom-in links are presented. It is possible to add a parameter which determines this behavior: &allowZoom=false
or &allowZoom=true
Display in Black Theme - optional; by default, permalinks display dashboards in the white theme. It is possible to add a
parameter which sets the black theme: &blackTheme=true
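Putting the permalink parameters together, a hedged sketch follows; the base link and credentials are illustrative assumptions, so substitute the link produced by Copy Permalink and real credentials:

```shell
#!/bin/sh
# Append the documented optional parameters to a copied dashboard
# permalink. The base link passed in is a placeholder ASSUMPTION --
# use the link copied via the "Copy Permalink" option.
build_permalink() {
  url="$1"
  user="$2"
  pass="$3"
  # Mandatory when Security is active: auto-login credentials
  url="${url}&autoLogin=true&username=${user}&password=${pass}"
  # Optional: hide the zoom-in links and display the black theme
  url="${url}&allowZoom=false&blackTheme=true"
  printf '%s\n' "$url"
}

build_permalink "http://MACHINE_NAME:30303/DASHBOARD_PERMALINK" admin admin
```

The resulting URL can be embedded, for example, in an iFrame in an external portal.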
Change Dashboard Time Frame
By default, when displaying a dashboard all gadgets will be loaded on the default time frame that was defined for each gadget.
In order to modify the time frame, click the
icon and select the time frame to be applied on the dashboard. Several options are available:
Default = the default time frame that was configured for the dashboard
Last = last from the actual execution time. For example last 1 day = from the current time until 1 day ago
Previous = previous referring to the actual execution time. For example previous 1 day = yesterday midnight to midnight
Today = Displays shortcuts to different fixed units
Date Range = Customized specific time frame
Live = Near real time execution of the dashboard - all gadgets will be cleared and will display current results, the gadgets will be
continuously updated while the dashboard is open
Custom Time = Custom Last/Previous values
All gadgets will be reloaded according to the selection. Click the
icon again to modify, or 'Default' to restore the default time.
Changing the Dashboard Layout
You can customize the look of your dashboard, by clicking the layout icon and choosing the arrangement of the gadgets on the dashboard page.
If changes were made to the layout, select Save/Reset Layout from the dashboard's options menu:
Save Layout – If the Dashboard's layout was modified, click this option to save it as the default layout of this dashboard.
Reset Layout - For resetting the Dashboard's layout back to its default in the current display.
Dashboard Inputs
User Inputs provide an interface for users to supply, in a visual form, values that affect gadget search terms. Typically, the inputs are displayed
as checkboxes, text areas, dropdown menus, or radio buttons.
The forms allow users to visually make selections which impact the underlying searches and focus only on points of interest while viewing the
dashboard's results. In order to configure inputs, please refer to Dashboard Settings in the Administrator's Guide.
The Inputs form (if available) is displayed at the top of a dashboard; select/configure values of interest in the inputs form and click Apply, and
the dashboard will be reloaded displaying the corresponding results:
Example I: Checking the 'INTERNAL ERROR' checkbox and clicking Apply reloads the dashboard, where gadgets display the results filtered to
INTERNAL ERROR only.
Example II: Specifying specific sources (servers in this example) and 'Day' granularity and clicking Apply reloads the dashboard, where
gadgets display the results from the selected servers only, at daily granularity.
Dashboard Theme
By default, XpoLog presents dashboards with a white background; it is also possible to present them with a black background.
Click the
icon to switch between themes.
Display Multiple Dashboards
When opening multiple dashboards to be displayed, it is possible to turn all open dashboards into a "slideshow".
Click the
icon and XpoLog will play all open dashboards at the selected speed (from every 5 seconds up to every 1 minute). Click the
icon to pause and review, or the
icon to stop the slideshow and return to the dashboard's main screen.
Exporting the Dashboard to PDF/CSV
You can export any dashboard to a PDF/CSV file, and then print it out or share it.
To export a dashboard to a PDF/CSV file:
In the toolbar of the dashboard that you want to export, click the
icon, and from the menu items that appear, click Export to PDF/CSV. A Prepare Export to PDF/CSV notification box appears, informing you
that it is preparing data for export. It is followed by an Export to PDF/CSV notification box. The PDF/CSV file is created and can be saved.
Note: If pop-ups are blocked, you are asked to click Continue to complete the export.
Filter Dashboard Result
When displaying a dashboard, all gadgets are loaded with up-to-date results.
In order to filter the displayed result, click the
icon to open the search query definition and enter the criteria that will be added to all gadgets' queries. Click 'Search' to activate the filter:
The filtered result is loaded:
Click the
icon to remove the filter and return to the default display.
Gadgets
A dashboard arranges its defined gadgets in the layout selected by the user.
Each gadget has a header bar, which displays the name of the gadget or the gadget type (the default; displayed if a name was not given to the
gadget), the time frame the gadget is defined on and the date and time that the gadget was last refreshed, and a menu icon
that can be clicked to perform actions on the gadget (see Gadget Options).
The gadget result is displayed below the header bar, and if relevant, one of the following links appears below in the footer of the gadget:
View in Search – For performing a drilldown from the gadget results to the Search console
View in XpoLog – For performing a drilldown from the gadget results to the Search console
View in Analytics – For performing a drilldown from the gadget results to the Analytics console
Note: For over time display of events, if the resolution of a gadget cannot be displayed as is on screen, it is possible to zoom in by selecting a part
of the graph to reduce resolution.
Gadget Options
You can manipulate and perform actions on any gadget by accessing the available options using the menu
icon in the right hand side of the Gadget header bar.
Gadget options are:
Edit – For editing a gadget definition (see Managing Gadgets in the Administrator's Guide).
Duplicate – For duplicating a gadget definition and creating a new one (see Managing Gadgets in the Administrator's Guide).
Delete – For removing the gadget from the dashboard; please see the Administrator's Guide for details.
Generate Now – For forcing a data generation and updating the gadget result.
Pin to Top / Unpin from Top – For placing a gadget at the top of the dashboard or releasing a gadget which was pinned to top of the
dashboard.
Export to PDF – For exporting a gadget to a PDF file (see Exporting a Gadget to a PDF/CSV File).
Export to CSV– For exporting a gadget to a CSV file (see Exporting a Gadget to a PDF/CSV File).
Copy Permalink – This option copies to the clipboard a direct link to the gadget. The link can then be used externally to XpoLog to
present the gadget's result (for example, in an iFrame in an external portal). Two parameters should be considered for adding to the
link:
Login credentials - mandatory in case Security is active in XpoLog; a user and password with the credentials to view the
dashboard should be added: &autoLogin=true&username=[USER_NAME]&password=[PASSWORD]
Enable Zoom - optional; dashboards/gadgets contain links to zoom back in to XpoLog to see the result, and by default the zoom-in
links are presented. It is possible to add a parameter which determines this behavior: &allowZoom=false or &allowZoom=true
Relocating a gadget - it is possible to drag and drop gadgets to a different location within a dashboard (see Relocating a Gadget).
Exporting a Gadget to a PDF/CSV File
You can export any gadget to a PDF/CSV file, and then print it out.
To export a gadget to a PDF/CSV file:
1. In the header bar of the gadget that you want to export, click the Tools icon, and from the menu items that appear, click Export to
PDF/CSV.
A Prepare Export to PDF/CSV notification box appears, informing that it is preparing data for export. It is
followed by an Export to PDF/CSV notification box.
The PDF/CSV file is created and can be saved.
Note: If pop-ups are blocked, you are asked to click Continue to complete the export.
Relocating a Gadget
You can move a gadget to a different location in the dashboard.
To relocate a gadget:
1. Mouse over the gadget; the relocate icon
is presented.
2. Using the relocate icon, drag and drop the gadget to its new location in the dashboard.
Out of the box Apps
XpoLog out of the box Apps are predefined packages of searches, rules, monitors, visualizations and other knowledge about a specific system.
The packages are packed as Apps and are available either in the XpoLog setup or by updating the XpoLog instance with the latest Apps updates.
In order to start using out-of-the-box Apps you need to follow a few simple steps:
1. Add and prepare data from a system that has an out-of-the-box App. View the list of out-of-the-box Apps here.
2. Deploy the out of the box app on the data set.
3. Start using the App.
Further information about how to add and prepare data for Apps can be found here: Prepare Data for XpoLog Apps
You can view the Out of the box Apps list here.
App - Apache Httpd (2.2)
Name Apache Httpd
Versions 2.2
Web www.apache.org
Type Web Servers
logtypes httpd
logtypes access, w3c, error
In order to deploy Apache Httpd App use the following page to prepare the log data - Preparing Apache Httpd Data.
Deploying the App
1. Deploy the Apache Httpd App available in the XpoLog setup or by getting the App package from XpoLog website.
2. Once the App is successfully deployed, by default all logs tagged with logtype: httpd will be included in the App. To change that, simply
edit the App and specify which httpd logs to include or exclude.
Open and Use the App
1. Click on the deployed App
2. When the App opens, you will see a list of available predefined dashboards. In each dashboard you can find a set of visualization
gadgets, rules and searches that analyze the httpd logs.
Httpd Dashboards and Gadgets
Dashboard Name: Http Errors
Description: A collection of gadgets analyzing different dimensions of the log data, using the HTTP response status code as a pivot for the
analysis: gadgets that summarize the error code distribution, the resources and referrers causing or leading to the errors, etc.
Gadgets and Inputs: Inputs are available in order to focus the gadgets on specific errors, servers, user IPs or URLs, to drive a better
understanding of root causes in the web applications.
Required field types: referer, respstatus, remoteip, requrl, useragent
App - Linux
Name Linux
Versions N/A
Type Operating System
logtypes linux, linux-cron, linux-mail, linux-messages
In order to deploy the Linux App use the following page to prepare the log data - Preparing Linux Event Logs Data.
Deploying the App
1. Deploy the Linux App available in the XpoLog Linux setup or by getting the App package from XpoLog website.
2. Once the App is successfully deployed, by default all logs tagged with logtype: linux, linux-cron, linux-mail, linux-messages will be
included in the App. To change that, simply edit the App and specify which logs to include or exclude.
Open and Use the App
1. Click on the deployed App
2. When the App opens, you will see a list of available predefined dashboards. In each dashboard you can find a set of visualization
gadgets, rules and searches that analyze the Linux event logs.
Linux Dashboards and Gadgets
The Linux application contains a set of dashboards:
Overview - a general overview of the Linux environment including event sources, login status, and security status.
Events Sources - a console that enables viewing events from selected servers/domains/logs
Activity - logging activity of servers and processes over time, last 1 day vs. last 7 days
Login Status - user activity review, such as logons over time, success vs. failure authentication, failed logins, etc.
Problems & Errors - a report of application problems by event/host
Cron - a console for the cron activities.
Mail - a console for the mail activities.
Use the user inputs while viewing a dashboard to filter the view to the desired values such as servers, logs, processes, etc.
App - Log4J (1.2)
Name Apache Log4J
Versions 1.2
Web www.apache.org
Type Logging Technology
logtypes log4j
In order to deploy Apache Log4J App use the following page to prepare the log data - Preparing Apache Log4J Data.
Deploying the App
1. Deploy the Apache Log4J App, available in the XpoLog setup, or get the App package from the XpoLog website.
2. Once the App is successfully deployed, all logs tagged with logtype: log4j are included in the App by default. To change this, simply edit the
App and specify which log4j logs to include or exclude.
Open and Use the App
1. Click the deployed App.
2. When the App opens, you will see a list of available predefined dashboards. In each dashboard you can find a set of visualization
gadgets, rules, and searches that analyze the log4j logs.
App - Log4Net (2.0)
Name Apache Log4net
Versions 2.0.*
Web www.apache.org
Type Logging Technology
logtypes log4net
In order to deploy Apache Log4Net App use the following page to prepare the log data - add log4net data
Deploying the App
1. Deploy the Apache Log4Net App, available in the XpoLog setup, or get the App package from the XpoLog website.
2. Once the App is successfully deployed, all logs tagged with logtype: log4net are included in the App by default. To change this, simply edit
the App and specify which log4net logs to include or exclude.
Open and Use the App
1. Click the deployed App.
2. When the App opens, you will see a list of available predefined dashboards. In each dashboard you can find a set of visualization
gadgets, rules, and searches that analyze the log4net logs.
App - Windows
Name Microsoft Windows
Versions N/A
Type Operating System
logtypes windows, windows-application, windows-security, windows-system
In order to deploy the Windows App use the following page to prepare the log data - Preparing Windows Event Logs Data.
Deploying the App
1. Deploy the Microsoft Windows App, available in the XpoLog Windows setup, or get the App package from the XpoLog website.
2. Once the App is successfully deployed, all logs tagged with logtype: windows, windows-application, windows-security, or
windows-system are included in the App by default. To change this, simply edit the App and specify which logs to include or exclude.
Open and Use the App
1. Click the deployed App.
2. When the App opens, you will see a list of available predefined dashboards. In each dashboard you can find a set of visualization
gadgets, rules, and searches that analyze the Microsoft Windows event logs.
Windows Dashboards and Gadgets
The Windows application contains a set of dashboards:
Overview - a general overview of the Windows environment including required restarts, update errors, policy changes, etc.
Events Viewer - a console that enables viewing events from selected servers/domains/logs
Events Statistics - general statistics of top used sources, categories, types, and event codes
Audit - a high-level analysis of top applications, sources, user operations, events, etc.
Trends - logging activity of servers and logs over time, last 1 day vs. last 7 days
Users Overview - user activity review, such as logons over time, top user operations report, logons vs. logoffs, etc.
Application Installs - a report of total application installations, both failed and successful
Application Crashes - a report of application crashes by event/host
Use the user inputs while viewing a dashboard to filter the view to the desired values such as servers, domains, accounts, etc.
XpoLog Search
Overview
Data is constantly entering your system's IT infrastructure from various sources. This data is of all types – performance data and statistics, traps
and alerts, log files, configurations, scripts and messages – and arrives from various sources – your logs, folders, applications, network devices,
database tables, and servers.
XpoLog indexes in real time all data entering your system's IT infrastructure from various sources, and structures and normalizes this data – both
raw and rich – into a single database of a structured format.
XpoLog provides a search engine – XpoSearch, which enables you to conduct a search through this immense amount of data for anything that
you like. Using the XpoSearch interface, you can search all the logs in XpoLog Center (applications, servers, network devices, database tables,
and more).
Search Types
XpoSearch provides two main types of searches:
Simple search – initial search, using simple search syntax, which results in a list of matching events
Complex search – an advanced search, using complex search syntax, which results in a summary table of matching events, or transactions
Search Stages
A search can be run in three stages:
Initial search
Refined search
Complex search
Initial Search
In the initial search, the user enters a search query of simple criteria, and the search runs on all the event data. In this simple search, the user can
search the event data for a simple term or more than one term, run a Boolean search, a search with wildcards, or a column-based search.
Running the search query returns a list of all matching events from all relevant logs (latest on top). In addition, XpoSearch returns a graphical view
of the distribution of the matching events over time and per data source.
Refined Search
The resulting events of a simple search can be minimized by refining the search results using either or both of the following methods:
Filtered Search – filtering the resulting events according to the source of the event – logs, files, applications, or servers
Analytics-based Search – adding one of the event data fields discovered during the simple search to the search criteria of the simple
search
Complex Search
Complex search queries are used to perform advanced complex operations and reporting on the log events resulting from a simple
search. Running a complex search query results in a summary table, and can also be visualized as gadgets in XpoLog Dashboards.
Accessing the XpoLog Search Console
You can access XpoLog Search from any page in the application.
To access the search console:
In the Tab Bar, click the Search tab.
The Search console opens, displaying the latest search query. See Search User Interface Elements.
Search User Interface Elements
XpoLog Search is equipped with a user-friendly graphical user interface (GUI), which provides a complete set of tools to search for event data that
meets specific criteria.
The Search user interface includes the following main elements:
Element Description
Tab Bar Tabs for accessing the XpoLog, Search, and Analytics applications.
Main Menu Includes menu items and submenus for performing actions in the XpoLog applications. Available menus are:
Dashboards
Administration
Search Query Panel Area for entering the search query and the time interval for running the query.
In addition, this panel includes the following feature:
Open the search options window button – Clicking this button opens a window with links to four windows: Search History,
Saved Searches, Simple Search Syntax, and Complex Search Syntax.
Graph Area Displays two graphs:
Main graph – a graphic distribution of the events resulting from the search query over time.
The x-axis presents indications of the errors detected by the XpoLog Analytics engine, based on severity and number of
occurrences (red=high, orange=medium, yellow=low; the size of the icon represents the relative number of occurrences in
that time period compared to other periods)
Zoom-in graph – shows the zoomed-in area with respect to the original search context.
In addition, this panel includes the following features:
Action Items – you can perform the following actions: Save Search, Save Monitor, Save Gadget, Export Result to
PDF/CSV, and Share Search (generates a shareable link to the exact query and time that was executed)
Visualization Buttons – Clicking these buttons presents the search results in different ways: line (aggregated/split per
source), bar (aggregated/split), and pie charts.
Augmented Search Pane Enables refining your simple search results. Includes the following sections:
Active Filters - active filters are stored and may be removed to return to a previous view.
Isolate Results - select source(s) in order to isolate results based on log/folder/application/server.
Analytics Insight - a list that centralizes all the Analytics results related to the current search results. Click a suggestion to
see where it appears over time and to add it to the search query.
Interesting Fields - a list of log columns on which different complex functions are available. Click a column name to see
which functions may be activated on it.
Search Results Area In the case of a simple search, displays all the events that match the search query.
In the case of a complex search, displays a summary table of the events that match the search query.
Mousing over events in the search results area presents two options on the highlighted phrase:
Search Actions - add/exclude/replace the phrase from the current search.
Data Markers - pick a color to paint the highlighted phrase across all results.
Search Query Panel
The Search Query Panel user interface includes the following elements:
Element Description
Open the Search Options window icon
Clicking this icon displays a window with four links:
Search History – Clicking this link opens a window that displays your
recent and popular searches.
Saved Searches – Clicking this link opens a window with a listing of
the names of the searches that you saved.
Simple Search Syntax – Clicking this link opens a window, which lists
the syntax that you can use to formulate a simple search.
Complex Search Syntax – Clicking this link opens a window, which
lists the syntax that you can use to formulate a complex search.
It also includes a Close icon for closing the Search Options window.
Auto-complete section
When typing a query, the auto-complete section opens. The left side
provides suggestions on relevant syntax and sources to run the search on.
The right side presents Search History and Saved Searches available for selection.
Search Status icon
One state indicates that the search is in progress; the other indicates that the search is complete.
Search Query Area for typing a simple or complex search query, or for activating a saved search query. This section dynamically expands when
long queries are typed.
Actions
Action items are presented below the search query panel after a search
has completed. You can click an action that can be performed on the
search query:
Save Search – Selecting this item saves the search query in the
system.
Save Gadget – Selecting this item saves the search query as a
gadget.
Save Monitor – Selecting this item saves the search query as a
monitor.
Export to PDF/CSV – Selecting this item saves the search query and
results in a PDF/CSV file.
Close Augmented Search / Open Augmented Search buttons.
By default, the Augmented Search Pane is open. Clicking the
/
button closes the pane; clicking the
button opens the pane.
Time Period Defines the time period during which the search is to be run.
Selectable time periods include:
The entire time that the log exists: All Time
Predefined time periods: Today, Yesterday, This Week, Last Week, Last 15/30/60 Minutes, Last 3/12/24 Hours, Last 7/14
Days, Last 1/3 Months
Live: Real-time search mode will be activated, and new records that match the search criteria will be loaded to the screen
(see below)
Customized time periods: Custom.
Live Search
XpoLog Search provides a Live mode search (near real time). The Live mode may be activated by selecting it from the list
of time period options. Once selected, the graph area is cleared and a red button is presented to indicate that Live is
active. New records that match the query criteria are loaded to the screen every few seconds.
Clicking this button after typing a search into the Search Query
commences the search.
While a search is being executed, the pause button is presented; click it to pause the search, and then click resume to
continue the search.
Note: The Search button does not have to be clicked after entering a
saved search or a search from history into the Search Query, changing the
time period, or performing an augmented search from the Augmented
Search Pane. In these cases, the search runs automatically.
Simple Search Syntax
The following table summarizes the simple search syntax:
Type Description
Boolean AND – A and B matches events that contain A and B.
OR – A or B matches events that contain A or B.
NOT – A and NOT (B or C) matches events that contain A but not B or C.
Quotation Marks Used to get an exact match of a term. Recommended when there is a key word (such as ( ), =, and, or, not, in, *, ?) within a
searched term.
Example: "connection(1234) failure" -> returns events with an exact match to connection(1234) failure.
Parentheses Used to unify a term result or to create precedence within search queries.
Examples:
a or (b in folder.my_folder) -> searches for events that contain a, or events that contain b in sub folders and
logs under the folder my_folder.
a or b in folder.my_folder -> searches for events that contain a or b in sub folders and logs under the folder
my_folder.
a and b or c -> precedence to the key word and, this term is equivalent to (a and b) or c.
a and (b or c) -> precedence to b or c; the result is then combined with a using and.
Wildcards May be placed anywhere in a search term:
* – *foo, foo*, f*oo, *foo*, *f*o*o* (* represents any characters, 0 or more times)
? – ?oo, fo?, f?o (? represents any character, exactly one time)
Search in a specific log/folder/application/server
Searches for a term in a specified log, folder, application, or server.
Examples:
error in log.my_log -> searches for error only in logs whose name is my_log.
error in log.my* -> searches for error only in logs whose name starts with my.
error in folder.my_folder -> searches for error only in logs under folders whose name is my_folder.
error in folder.my* -> searches for error only in logs under folders whose name starts with my.
error in host.my_host -> searches for error only in logs whose source name is my_host.
error in host.my* -> searches for error only in logs whose source name starts with my.
host.my_host is equivalent to server.my_host.
error in app.my_app -> searches for error only in logs associated to applications whose name is my_app.
error in app.my* -> searches for error only in logs associated to applications whose name starts with my.
app.my_app is equivalent to application.my_app.
Column-based Search Searches for events that have a specific value in a specific column of the log.
Examples:
column_name=search_value -> searches for events that have a column named column_name whose value
is equal to search_value (relevant only for logs that have a column with that name).
column_name=search_value in log.my_log -> searches for events in the log my_log that have a column
column_name whose value is equal to search_value (relevant only if the log has a column with that name).
column_name contains search_value -> searches for events that have a column named column_name
whose value contains search_value (relevant only to logs that have a column with that name).
column_name contains search_value in log.my_log -> searches for events in the log my_log, which have
a column column_name whose value contains search_value (relevant only if the log has a column with that
name).
Regular expression search Searches in events for values represented by regular expressions.
Example:
regexp:\d+ in log.access -> searches for numbers in events.
Activate saved search Activates a search that you previously saved.
search.search_name -> runs the saved search called search_name.
Graph Area
The Graph Area user interface includes the following elements:
Element Description
Graph Includes icons for defining the graph visualization and content.
toolbar
Main Graph A graph that displays the distribution of events over time. The search query timeline is the x-axis, and the number of matching
events is the y-axis.
The main graph shows the distribution of events matching the search query over the selected timeline. Any time period in the graph
can be zoomed into, by selecting a time on the time axis, and dragging left/right the vertical line that appears on the graph, to the
beginning/end of the time period to be zoomed into. A Zoom Out button appears on top of the zoomed-in graph, enabling return to
the original search results.
Zoom-In Graph Below the main graph. Displays the zoomed-in area with respect to the original search context. For example, zooming in on one
day of the last 7 days shows in this graph all seven days, with the zoomed-in day shaded in blue. You can zoom into a time period from this
graph (see explanation in Main Graph).
Graph Toolbar
The right side of the Graph toolbar includes the following buttons for defining the type of graph to generate:
Element Description
Display Line Chart button.
Clicking this icon displays a line graph of the event distribution. Mousing over
this icon presents different line-chart-based visualization options below:
Display Summary View button. Clicking this button displays
a summary view of the event distribution from all
the logs/applications/servers.
Display Split View button. Clicking this button displays
the event distribution originating from each log (default),
application, or server (according to what you selected in the
Distribute by selection box described in this table).
Display Column Chart button.
Clicking this button displays a column (bar) graph of the event distribution.
Mousing over this icon presents different column-chart-based
visualization options below:
Display Summary View button. Clicking this button displays a column chart
in summary view.
Display Split View button. Clicking this button displays a column chart in
split view, i.e. a vertical line for each log (default), application, or server
(according to what you selected in the Distribute by selection box described
in this table).
Display Stack View button. Clicking this button displays a column chart in
stack view, i.e. a horizontal bar for each log (default), application, or server
(according to what you selected in the Distribute by selection box described
in this table).
Display Pie Chart button.
Clicking this button displays a pie chart of the event distribution.
Display Geo Location Map button.
This button is present only when Geo Location related queries are
executed. Clicking this button displays a Geo Location Map of the event
distribution.
The left side of the Graph toolbar includes the following buttons for defining additional views on the graph. The available options depend on the
selected type of graph:
Element Description
Augmented layers selection box.
Enables selecting the types of problems to be represented by the
dots on the time axis of the main graph:
Predefined – problems predefined by the user (using the
saved search mechanism)
Autodetected – problems automatically detected by
XpoLog Analytics engine
Both predefined and auto-detected problems
Pie, Line, and Column Charts
Distribute by selection box.
Enables selecting according to which entity to distribute the
information resulting from the search: Total, Logs, Applications,
or Servers.
Selecting Logs/Applications/Servers changes the graph
accordingly, and displays above the graph a color-coded legend
of the different log/application/server names in the system.
Search Results Area
The contents of the Search Results Area depend on the type of search that you run:
For a simple search, each result event meeting the search criteria is displayed.
For a complex search, a summary table of the result events or transactions is displayed.
Simple Search Results Area
The following table describes the user interface of the Simple Search Results Area:
Element Description
Search Results Summary Panel A panel that summarizes the results of the search, and provides navigation to the result event pages.
Events Toolbar Includes icons for expanding/collapsing events and for disabling/enabling Analytics.
Events Area A list of the events that match the search query.
Search Results Summary Panel
The Search Results Summary Panel includes the following details:
Element Description
Search Summary In the case of a simple search, displays the number of matching log events,
the number of source logs of these events, and the period of time
searched.
In the case of a complex search, displays the number of results in the
table, the number of events that the results are based on, the number of
source logs of these events, and the period of time searched.
Previous matching events icon.
Clicking this icon displays in the Result Page Navigation Area the numbers
of the previous ten pages, and displays the first of these pages in the
Search Results Area.
Next matching events icon.
Clicking this icon displays in the Result Page Navigation Area the numbers
of the next ten pages, and displays the first of these pages in the Search
Results Area.
Results Page Navigation Area Displays the page numbers of up to ten pages of results. You can display
the previous/next ten page numbers by clicking these icons.
Clicking a page number displays that page of results in the Search
Results Area. The current page number is highlighted in white.
Events Toolbar
The Events Toolbar includes the following elements:
Element Description
Expand all Events / Collapse all Events icons.
Clicking the Expand all Events icon expands all events to display all their column names and respective
values; clicking the Collapse all Events icon collapses all events to show only some of the column names and
respective values.
Disable Analytics / Enable Analytics icons.
Clicking the Disable Analytics icon disables Analytics for all events; clicking the Enable Analytics icon
enables Analytics for all events.
Events Area
The Events Area includes a list of events resulting from the search, where each event contains the following elements:
Element Description
Event timestamp The date and time that the event occurred, in the format MM/DD/YYYY
HH:MM:SS
Analytics layer If Analytics is active, colors the fonts of the column values that Analytics
detects as problematic, according to the following color-coding:
Red – high severity problem
Orange – medium severity problem
Yellow – low severity problem
Under the timestamp, displays the severity of the most severe column
value detected by Analytics in the event: high, medium, low, or none.
Event structure The structure of the event, including its column names and respective
column text, in the format ([COLUMN_NAME] COLUMN_TEXT).
Event source fields Shows the source of the event – the log, server, and/or applications which
generated the event. Mousing over the log source indicator [Log] presents
the full path of the source log that this message originates from.
Expand Event icon.
Appears at the end of an event that can be expanded to show all its column
names and respective values.
Clicking the icon expands the event to display all its column names and
respective values, and changes the icon to the Collapse Event icon, so that
the event can be collapsed at a later time.
Mouse Over Options Mousing over search results (and column names) presents two optional
actions:
Search Actions - Clicking this icon presents a list of possible search actions
on the highlighted phrase: append to current search using AND, append to
current search using OR, excluding from current search, replacing the
current search.
Data Markers - Clicking this icon presents colors to be selected in order to
mark the highlighted phrase.
Complex Search Results Area
The following table describes the user interface of the Complex Search Results Area:
Element Description
Search Results A panel that summarizes the results of the search, and provides navigation to the results pages; same as panel in Simple
Summary Panel Search (see detailed description in Search Results Summary Panel section in Simple Search Results Area).
Summary Table Displays a table that summarizes the results of the complex search.
The results are clickable - clicking a result will present the log records related to that result.
Transactions Displays a list of transactions matching the transaction search criteria.
List
Augmented Search Pane
The following table describes the user interface of the Augmented Search Pane:
Element Description
Active Filters Lists all the suggestions that have been added to the original query, from the suggestions in the resulting events and on the
graph. Any suggestion can be removed from the query by removing it from the Active Filters list.
Isolated Results Lists the sources of the events: Logs, Folders, Servers, and Applications, and displays in parentheses near each source
type the number of events in that source type.
Selecting any of these source types opens a table with a listing of its sources, enabling you to select the checkboxes of the
sources by which to filter the events.
Analytics Insight Lists, in order of importance, the field values that have been detected by Analytics as being problematic.
Clicking Load more... at the bottom of the list displays the ten next most important field values.
Selecting one of these field values enables you to refine your simple search results by selecting one of the following:
Append to query with And – searches for events that meet the criteria of your original search query, and also contain the
field value selected from the Analytics Insight list.
Append to query with Or – searches for events that either meet the criteria of your original search query, or contain the
field value selected from the Analytics Insight list.
Replace query – replaces your original search query with a search for events with the field value selected from the
Analytics Insight list.
Interesting Fields Lists, in order of importance, the fields that appear most frequently in the events.
Clicking Load more... at the bottom of the list displays the next most interesting fields.
Selecting one of these fields opens a list of functions that can be performed on the field. Selecting a function from this list initiates
a complex search, composed of the selected field and function, on the results of the simple search.
Performing a Simple Search
XpoSearch enables you to retrieve specific events from indexed event logs, by creating a search query using the XpoSearch search syntax, and
then running the search. This is an extremely useful tool for investigating the cause of problems in your system. Also, you can limit any search to
events that occurred during a specific time period.
Selecting the Search Time Period
Time plays a very important role in the examination of the cause of a system problem.
Although you can run a search on events that occurred at any time, this wastes system resources, and usually results in an
overwhelming number of events that are difficult for you to manage and analyze.
Therefore, XpoSearch enables you to run a search on a specific time period, so that you can narrow your results, and facilitate
determining the root cause of the problem. You can select a predefined time period, or customize the time period by selecting
the start and end dates and times of the time period.
To select the time period of the search:
1. In the Search Query Panel, in the Search Time Range textbox, click the down arrow.
A list of selectable time periods opens.
2. From the list of time periods, select a predefined time period (All Time (all times in the log), Last 15 Minutes, Last 30
Minutes, Last 60 Minutes, Last 3 Hours, Last 12 Hours, Last 24 Hours, Last 7 Days, Last 14 Days, Last 1 Month,
Last 3 Months, Last Week, This Week, Yesterday, or Today), or select Custom to specify your own time period (see
Customizing the Search Time Period for a detailed explanation on customizing the time period).
The selected time period is displayed in the textbox, and the search runs on this time period.
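Each predefined period resolves to a concrete start/end pair relative to the current time. The following sketch illustrates that mapping for a few of the period names listed above; the computation itself is an assumption about how such periods are interpreted, not XpoLog's actual implementation:

```python
from datetime import datetime, timedelta

# Illustrative subset of the predefined period names listed above.
PERIODS = {
    "Last 15 Minutes": timedelta(minutes=15),
    "Last 60 Minutes": timedelta(minutes=60),
    "Last 24 Hours": timedelta(hours=24),
    "Last 7 Days": timedelta(days=7),
}

def period_range(name, now=None):
    """Return an assumed (start, end) datetime pair for a period name."""
    now = now or datetime.now()
    if name == "Today":
        # Midnight of the current day up to now.
        return now.replace(hour=0, minute=0, second=0, microsecond=0), now
    return now - PERIODS[name], now
```

For example, `period_range("Last 15 Minutes")` yields a window ending at the current time and starting fifteen minutes earlier.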
Creating a Search Query
You can create a search query using the search syntax supplied by XpoLog for simple searches:
Simple terms search
Boolean search
Search with wildcards
Comparison search
Search in a specific log, folder, application, or server
Activate a saved search by its name
Searching for Simple Terms
The simplest type of search is one that searches for terms in your log events. This includes the following:
Searching for a single word that appears anywhere in the event.
Example: Typing error searches for all events containing the word error.
Searching for two or more words that appear in an event, exactly in the order that you typed them.
Example: Typing error log only searches for events having the words error and log adjacent to each other in the event.
Searching for keywords in an event – by enclosing the words in quotes. These keywords can be Boolean operators or saved words.
Example: If you want to search for the word NOT in an event, and do not want it to be misinterpreted as the Boolean operator NOT, you
should enclose it in quotes: "NOT".
XpoSearch also provides the autocomplete feature. As you type the search query, a dropdown list of other search queries that you have created
in the past and that begin with these characters is displayed, as relevant. If one of these search queries is the one that you want to run, you can
simply select it instead of retyping the entire search query.
Boolean Search
XpoLog provides three Boolean operators for your use: NOT, AND, and OR, evaluated in a search query in that order of precedence. These
operators must be capitalized. It is also possible to change the default order of precedence by enclosing in parentheses the part of the search
term that you want to be evaluated first.
Example: Searching for end process OR start process returns all events containing either the phrase end process or the phrase start
process.
Note: If you want to search in an event for any words that are the same as Boolean operators, you should enclose them in quotes, so that they
are not misunderstood for the Boolean operator.
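The grouping behavior shown in the syntax table (AND binds tighter than OR, so `a and b or c` means `(a and b) or c`) mirrors the precedence of Python's own Boolean operators, which makes them a convenient way to sanity-check how a query groups. This is an illustration only, not XpoLog code:

```python
def term(event, word):
    # A simple term matches when the word appears in the event text.
    return word in event

event = "end process with exit code 0"

# Explicit grouping: (a and b) or c
grouped = (term(event, "end") and term(event, "process")) or term(event, "missing")

# No parentheses: Python, like XpoSearch, gives `and` precedence over `or`,
# so this parses the same way as `grouped` above.
flat = term(event, "end") and term(event, "process") or term(event, "missing")
```

Both expressions evaluate identically, confirming that the unparenthesized form is equivalent to `(a and b) or c`.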
Searching with Wildcards
XpoSearch provides two wildcards:
? – used in a search term to represent a single alphanumeric character.
Example: Typing http ?00 returns http 100, http 200, ..., and http 900. It does not return http 2000, as the ? only replaces a single
character.
* – used in a search term to represent zero to any number of alphanumeric characters. A search term which only includes an * returns
all events, up to the maximum allowed by the system.
Example: Typing http *00 returns all events beginning with http and ending with 00, such as http 300, http 3000, and http 500.
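The two wildcards can be reproduced with a regular expression, which is a convenient way to test what a wildcard term will and will not match. This is a sketch of the documented semantics; XpoLog's own matcher is not exposed:

```python
import re

def wildcard_to_regex(term):
    # Escape any regex metacharacters in the term, then map the two
    # XpoSearch wildcards: '*' -> any characters (0 or more),
    # '?' -> exactly one character.
    escaped = re.escape(term)
    return escaped.replace(r"\*", ".*").replace(r"\?", ".")

def wildcard_match(term, text):
    # fullmatch: the wildcard term must cover the whole candidate string.
    return re.fullmatch(wildcard_to_regex(term), text) is not None
```

For example, `wildcard_match("http ?00", "http 200")` holds while `"http 2000"` does not, matching the behavior described above.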
Searching in a Specific Log/Folder/AppTag/Server
XpoSearch enables searching events in all event logs of the system, regardless of their source, or only in event logs that come from a specific
source, as follows:
Log – a specific log
Folder – logs in a specific folder
AppTag – logs of a specific application
Server – logs from a specific server
Examples:
1. Running a search for error in log.my_log returns events only from the log named my_log that include the word error, regardless of
where this log resides.
2. Running a search for error in log.X in folder.Y returns events only from event log X that resides in folder Y.
3. Running a search for error in log.X, log.Y returns events from event log X and event log Y, regardless of where they reside.
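The source-scoping behavior in the examples above can be sketched as a filter over event metadata. The field names here (`log`, `text`) are hypothetical and chosen only for illustration, not XpoLog's internal event model:

```python
def scoped_search(events, term, sources=None):
    """Return events containing `term`, optionally limited to named logs.

    `events` is a list of dicts with assumed 'log' and 'text' keys;
    `sources` mimics a clause like `in log.X, log.Y`.
    """
    hits = []
    for event in events:
        if sources is not None and event["log"] not in sources:
            continue  # event comes from a log outside the scope clause
        if term in event["text"]:
            hits.append(event)
    return hits

events = [
    {"log": "X", "text": "error opening file"},
    {"log": "Y", "text": "error timeout"},
    {"log": "Z", "text": "error disk full"},
]

# `error in log.X, log.Y` -> only events from logs X and Y
subset = scoped_search(events, "error", sources={"X", "Y"})
```

Without a `sources` argument the search behaves like example 3's unscoped case, returning matches from every log.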
Column-based Search
You can run a column-based search on event data, to extract only those events which have a specific column that meets the comparison criteria.
This is done by creating a search that compares a specific column to a specific value, using the comparison operators defined in the following
table.
Operator Definition
= Equals
column_name = x returns all events with column_name value exactly equal to x.
!= Not equals
column_name != x returns all events with column_name value not equal to x.
> Greater than; for numerical fields only
column_name > x returns all events with column_name value greater than x.
< Less than; for numerical fields only
column_name < x returns all events with column_name value less than x.
contains Used for checking if a column contains a specific value
column_name contains x returns all events that contain in column_name the value x.
NULL Used to find empty or populated columns
column_name = NULL returns all events that have no value in column_name.
column_name != NULL returns all events that have a value in column_name.
NOT Used to exclude events that have a specific value in a specific column
NOT (column_name contains error) returns all events that do not have error in column_name.
Example: Typing Priority != Error returns all events that do not have the value Error in the Priority column.
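The comparison operators above can be sketched as filters over events whose columns are key/value pairs. The sketch below is illustrative; how XpoLog treats empty columns in a != comparison is an assumption here:

```python
# Hypothetical events as dicts of column values.
events = [
    {"Priority": "Error", "lineNumber": 120},
    {"Priority": "Info",  "lineNumber": 2500},
    {"Priority": None,    "lineNumber": 40},   # empty Priority column
]

def ne(col, val):
    # != : value differs from val (empty columns also differ, in this sketch)
    return [e for e in events if e.get(col) != val]

def lt(col, val):
    # < : numeric comparison; events without a value are skipped
    return [e for e in events if e.get(col) is not None and e[col] < val]

# Analogue of: Priority != Error
not_error = ne("Priority", "Error")
# Analogue of: lineNumber < 1000
small = lt("lineNumber", 1000)
```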
Regular Expression Search
XpoLog enables you to search in events for values represented by a regular expression that you specify.
Example: Typing regexp:\d+ in log.access searches for numbers in events.
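A minimal analogue of the regexp: prefix is a substring regular-expression match. This sketch assumes Python's re syntax, which may differ from XpoLog's regex dialect:

```python
import re

# Invented sample events for illustration.
events = ["GET /index 200", "startup complete", "worker 17 exited"]

# Analogue of: regexp:\d+  -- keep events containing one or more digits.
hits = [e for e in events if re.search(r"\d+", e)]
```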
Activating a Saved Search
XpoLog enables you to save any search query so that you can easily run it at a later time. You can either activate the saved search by selecting
its name from a list of saved searches (see Running a Saved Search) or you can type search.search_name in the search query to run the saved
search called search_name.
Example: Typing search.error_search activates the saved search named error_search.
Customizing the Search Time Period
You can customize the time period of a search by selecting from calendars the beginning and end dates and times of the time period.
To customize the time period:
1. In the Search Query Panel, in the Time Period selection box, select Custom.
Two calendars – one for the start date and one for the end date of the previous search query's time period – are displayed.
2. In the left calendar, repeatedly click the arrows at the left and right of the month name, to scroll to previous/following months, until you
reach the desired month of the start date. Then, in the calendar, click the desired start date.
The day is highlighted in the calendar, and is displayed below the calendar in Start Date.
3. In Start Time, type the time of day that the time period begins.
4. In the right calendar, repeatedly click the arrows at the left and right of the month name, to scroll to previous/following months, until you
reach the desired month of the end date. Then, in the calendar, click the desired end date.
The day is highlighted in the calendar, and is displayed below the calendar in End Date.
5. In End Time, type the time of day that the time period ends.
6. Click Go.
The search runs on the selected time period, returning in the Search Results Area, the results of the search for the customized time
period.
Note: The Time Period box displays Custom.
Alternatively, select the part of the graph that you want to zoom into, and the console updates to the selected time frame.
Simple Search Examples
The following table contains examples of simple search queries:
Query Explanation
* Searches in all logs for all log events.
Information Searches in all logs for log events that contain the term Information.
Service Control Manager Searches in all logs for log events that contain the phrase Service Control Manager.
“error is not caused by database” Searches in all logs for log events that contain the exact phrase error is not caused by
database.
Note: Quotes are usually used when the search term/phrase contains a saved word or one
of the following keywords used by the search syntax: ( ) = and or not in * ?
If this search query were not enclosed in quotes, it would be misinterpreted as (error
is) not (caused by database).
error or exception Searches in all logs for log events that contain the term error or exception.
error or exception or fail* Searches in all logs for log events that contain the term error or exception or any
word beginning with fail (such as fail, fails, failed, failure).
Service Control Manager OR Microsoft-Windows-Security-Auditing Searches in all logs for log events that contain either of the following phrases: Service
Control Manager or Microsoft-Windows-Security-Auditing.
Service Control Manager AND WinHTTP Searches in all logs for log events that contain the phrase Service Control Manager and
the term WinHTTP.
Service Control Manager AND NOT WinHTTP Searches in all logs for log events that contain the phrase Service Control Manager but do
not contain the term WinHTTP.
Service Control Manager and NOT (WinHTTP OR Multimedia) Searches in all logs for log events that contain the phrase Service Control Manager but do
not contain the term WinHTTP nor the term Multimedia.
703? Searches in all logs for log events that contain the term 703, followed by a single character.
Note: The ? symbol stands for any single character that appears in its location in the term;
for example: 7030, 7031, and 703A. The ? symbol can be placed anywhere in the search
term (i.e. ?703, 70?3, 703?).
Ser* Wildcard usage; searches in all logs for log events that contain the term Ser, followed by
zero or more characters.
Note: The * symbol stands for zero or more characters that appear in its location; for
example: Ser, Server, and Service. The * symbol can be placed anywhere in the search
term (i.e. *Ser, Se*r, and Ser*).
Type = Information Searches in all logs for log events in which the value in column Type is the term Information.
Type != Information Searches in all logs for log events in which the value in column Type is not the term Information.
Type contains Information Searches in all logs for log events in which the value in column Type contains the term Information.
Type contains Informatio? Searches in all logs for log events in which the value in column Type contains the term Informatio, followed by a single character.
Type contains Inform* Searches in all logs for log events in which the value in column Type contains the term Inform, followed by zero or more characters.
URL contains (/website/moe/html and *_304_*) Searches in all logs for log events in which the value in column URL contains the term /website/moe/html and a word which contains the text _304_.
error and method contains *java.lang* Searches for events containing error and, in the log field method, a word which contains the
text java.lang.
Note: A log field named method is required.
priority = FATAL Searches the log field priority for the value FATAL.
Note: A log field named priority is required.
message = NULL Searches the log field message for an empty value.
Note: A log field named message is required.
message != NULL Searches the log field message for a nonempty value.
Note: A log field named message is required.
error and message contains connection Searches for log events that contain error and the word connection in the log field message.
Note: A log field named message is required.
error and not (message contains NullPointerException) Searches for log events that contain error and do not contain NullPointerException in the
log field message.
Note: A log field named message is required.
lineNumber < 1000 Searches in all logs for log events in which the numeric value in column lineNumber is less
than 1000.
Note: A numeric log field named lineNumber is required. Additional numeric operators: >, =, !=.
lineNumber > 1000 AND lineNumber < 2000 Searches in all logs for log events in which the numeric value in column lineNumber is
greater than 1000 and less than 2000.
* in log.Application Searches in all logs that are named Application, for all log events.
Note: The * can be replaced with any valid search query.
* in log.NAME Searches in all logs that are named NAME, for all log events.
Note: The * can be replaced with any valid search query.
error or exception or fail* in log.LOG_NAME_1, log.LOG_NAME_2, …, log.LOG_NAME_N Searches for log events containing error or exception or a term beginning with fail, in all
logs named LOG_NAME_1, LOG_NAME_2, ..., LOG_NAME_N.
* in folder.NAME Searches in all folders that are named NAME, for all log events.
Note: The * can be replaced with any valid search query.
error or exception or fail* in folder.FOLDER_NAME_1, folder.FOLDER_NAME_2, …, folder.FOLDER_NAME_N Searches for log events containing error or exception or a term beginning with fail, in all
logs that are under folders named FOLDER_NAME_1, FOLDER_NAME_2, ..., FOLDER_NAME_N.
* in app.NAME Searches in all applications that are named NAME (provided the application is tagged), for
all log events.
Note: The * can be replaced with any valid search query.
error or exception or fail* in app.APP_NAME_1, app.APP_NAME_2, …, app.APP_NAME_N Searches for log events containing error or exception or a term beginning with fail, in all
logs that are under applications named APP_NAME_1, APP_NAME_2, ..., APP_NAME_N (provided the applications are tagged).
* in server.NAME Searches in all servers that are named NAME, for all log events.
Note: The * can be replaced with any valid search query.
error or exception or fail* in server.SERVER_NAME_1, server.SERVER_NAME_2, …, server.SERVER_NAME_N Searches for log events containing error or exception or a term beginning with fail, in all
logs that are under servers named SERVER_NAME_1, SERVER_NAME_2, ..., SERVER_NAME_N.
* in log.Application, log.System Searches in all logs that are named either Application or System, for all log events
* in log.Application in folder.Windows Event Searches in all logs that are named Application and are located under folders that are
Logs named Windows Event Logs, for all log events.
Note: All types of selectors can be combined, i.e. in log.NAME in server.NAME, in
folder.NAME in application.NAME, and more.
error or exception in folder.cloudappserver1 Searches in all folders named cloudappserver1 for all log events containing the term error
or the term exception.
ThreadId=00000027 in folder.cloudappserver1 Searches in all folders named cloudappserver1 for all log events with the exact value
00000027 in the field ThreadId.
regexp: \d\d\d Regular expression usage – searches for a 3-digit number.
not (url contains (.gif or .jpg or .png or .css or .js)) in log.access Searches the log access for events whose url field does not contain images, CSS files, or
JavaScript files.
Augmenting Simple Search Results
XpoSearch enables you to refine your simple search results, so that you can dig deeper into the cause of events.
You can augment your simple search results in any or all of the following ways:
Filtering the resulting events, so that only events from specific logs, folders, applications, or servers are displayed
Refining the search, using a field value discovered by Analytics
Running a complex search on the results of the simple search, using the interesting fields that were detected by Analytics
Disabling Augmentation
By default, augmentation is enabled. However, you can choose to disable this option.
To disable augmentation:
In the Search Query Panel, click the Close Augmented Search icon.
The Augmented Search Pane closes.
The icon changes to the Open Augmented Search icon, enabling you to open the Augmented Search Pane at a later time.
Filtering Simple Search Results by Event Source
Events are organized in logs. Some of these logs are arranged in the system under folders, applications, and servers. XpoSearch enables you
to display search events from specific logs, or search events in logs that are under specific folders, applications, or servers. The Augmented
Search Pane displays under Isolate Results, the sources of the logs that have result events, and also displays in parentheses near each of these
sources, the number of logs originating from them. You can select to minimize the search results by filtering results to display only events coming
from specific logs, folders, applications, or servers.
Filtering by Log(s)
System events are arranged in the system in logs of various formats; each log is assigned an XpoSearch log name. XpoSearch enables filtering events to display only those that come from specific logs. Filtering by logs is the most basic kind of search.
To filter simple search results by log(s):
1. In the Augmented Search Pane, under Isolate Results, select logs.
A table opens listing the names of the logs, the number of events coming from each log, and the percentage of events coming from each
log with respect to the number of events from all the logs.
2. Select the checkboxes of the logs whose events you want to display.
The original search query is updated: the original search text is enclosed in parentheses, and to that is appended in log.[log name 1],
log.[log name 2], ....
The graph, search summary panel, and events list are updated.
The number of logs selected out of the number available appears in parentheses near Logs in the Augmented Search Pane. Also, the
numbers of folders, applications, and servers are updated, to display the numbers of folders, applications, and servers that are sources
of the logs containing the filtered events.
You can also manually filter search events to display only those from specific logs, by typing the XpoSearch log names directly into the search
query following the search text, using Simple Search syntax, and appending to it in log.[log name 1], log.[log name 2], ... for each log from which
you want to display events.
Filtering by Folder(s)
Events coming into the system can appear in logs under folders, which are arranged according to user or function, depending on how you built
your environment. XpoSearch enables filtering events to display only those that come from logs that reside under specific folders.
To filter simple search results by folder(s):
1. In the Augmented Search Pane, under Isolate Results, select folders.
A table opens listing the names of the folders, the number of events coming from each folder, and the percentage of events coming from
each folder with respect to the number of events from all the folders.
2. Select the checkboxes of the folders whose events you want to display.
The original search query is updated: the original search text is enclosed in parentheses, and to that is appended in folder.[folder name
1], folder.[folder name 2], ....
The graph, search summary panel, and events list are updated.
The number of folders selected out of the number available appears in parentheses near Folders in the Augmented Search Pane. Also,
the numbers of logs, applications, and servers are updated, to display the numbers of logs, applications, and servers that are sources
of the logs containing the filtered events.
You can also manually filter search events to display only those in logs from specific folders, by typing the XpoSearch folder names directly into
the search query following the search text, using Simple Search syntax, and appending to it in folder.[folder name 1], folder.[folder name 2], ....
for each folder from which you want to display events.
Filtering by Application(s)
Events coming into the system can appear in logs of applications. XpoSearch enables filtering events to display only those that come from logs
that belong to specific applications.
To filter simple search results by application(s):
1. In the Augmented Search Pane, under Isolate Results, select applications.
A table opens listing the names of the applications, the number of events coming from each application, and the percentage of events
coming from each application with respect to the number of events from all the applications.
2. Select the checkboxes of the applications whose events you want to display.
The original search query is updated: the original search text is enclosed in parentheses, and to that is appended in app.[app name 1],
app.[app name 2], ....
The graph, search summary panel, and events list are updated.
The number of applications selected out of the number available appears in parentheses near Applications in the Augmented Search
Pane. Also, the numbers of logs, folders, and servers are updated, to display the numbers of logs, folders, and servers that are sources
of the logs containing the filtered events.
You can also manually filter search events to display only those in logs from specific applications, by typing the XpoSearch application names
directly into the search query following the search text, using Simple Search syntax, and appending to it in app.[app name 1], app.[app name 2],
.....
Filtering by Server(s)
Events coming into the system can appear in logs of servers. XpoSearch enables filtering events to display only those that come from logs of
specific servers.
To filter simple search results by server(s):
1. In the Augmented Search Pane, under Isolate Results, select servers.
A table opens listing the names of the servers, the number of events coming from each server, and the percentage of events coming from
each server with respect to the number of events from all the servers.
2. Select the checkboxes of the servers whose events you want to display.
The original search query is updated: the original search text is enclosed in parentheses, and to that is appended in server.[server
name 1], server.[server name 2], ....
The graph, search summary panel, and events list are updated.
The number of servers selected out of the number available appears in parentheses near Servers in the Augmented Search Pane. Also,
the numbers of logs, applications, and folders are updated, to display the numbers of logs, applications, and folders that are sources
of the logs containing the filtered events.
You can also manually filter search events to display only those in logs from specific servers, by typing the XpoSearch server names directly into
the search query following the search text, using Simple Search syntax, and appending to it in server.[server name 1], server.[server name 2],
....
Refining Simple Search Based On Analytics
While a Simple Search runs, Analytics discovers problematic field values in the events, color-codes them in each event, and also displays a list of
these problematic field values in the Augmented Search Pane under Analytics Insight. The color-coded severity of each field value in the list
appears to the left of each field value in the list.
You can run a refined search on the resulting events, by selecting a discovered problematic value, and then doing any of the following:
Searching for events that include both the original search text and the discovered problematic field value (Append to query with AND)
Searching for events that include either the original search text or the discovered problematic field value (Append to query with OR)
Replacing the original search text with the discovered problematic field value (Replace query)
To refine simple search results based on Analytics:
1. In the Augmented Search Pane, under Analytics Insight, choose a field value to include in your refined search. On the bottom of the list,
you can click Load more to see more discovered field values.
A graph showing the distribution of events having the discovered field value is displayed, and below it, a menu with the following options:
Append to query with AND, Append to query with OR, or Replace query.
2. From the menu, select one of the options for refining the search.
The search query is automatically updated, and the search runs, displaying the resulting events.
Analyzing Search Results
Search queries run by XpoSearch return the following:
A graphical presentation of matching events over time, with the ability to see the distribution of events over the multiple log sources
A summary panel of the search results, with the ability to set the number of results per page and navigate to any page
For a simple search – the matching log events from all relevant logs
For a complex search – a table that summarizes the results of the complex search
Generating the Graphical Distribution of the Search Results
Running a search query returns a graph that shows the distribution of events over time. You can determine the display mode and contents of the
graph. The graph has drill-down functionality, enabling you to zoom into any time period, and run the same search on that time period (see Zoomi
ng In/Out of a Time Period). It also enables you to hover over a bar or line graph to see the source of events and drill down to see the exact
events in any log.
XpoLog Search enables you to generate a graph of the distribution of events in a bar chart (the default), line chart, or pie chart. These charts can
be displayed in different visualizations, using the toolbar icons and features.
Detected problems are displayed on the time axis of the graph, enabling you to augment the search with a problem (see Augmenting a Search
with Detected Problems).
Generating a Bar Chart
In a bar chart (also called a column chart), a bar appears at each point in time where events were found to match your search query. The height
of each bar is according to the number of events that occurred at the specific time. A bar does not appear at times when no events matching your
search query occurred.
A bar chart can be displayed in different visualizations:
Split View – At any point in time where events were found, a vertical bar appears for each log (default) in the system that is the source of
events. You can instead show the distribution of events for each application or server in the system, by selecting in the adjacent
Distribute By selection box, Applications or Servers. The number of bars at a certain time is equivalent to the number of logs (or
applications or servers) that were the source of events at that time.
Stack View – At any point in time where events were found, a horizontal bar appears for each log (default) in the system that is the
source of events. You can instead show the distribution of events for each application or server in the system, by selecting in the adjacent
Distribute By selection box, Applications or Servers. The number of stacked bars at a certain time is equivalent to the number of logs
(or applications or servers) that were the source of events at that time.
Summary View – the default; At any point in time where events were found, a single vertical bar appears for events from all log,
application, or server sources in the system.
To generate a Summary View bar chart:
In the Graph Toolbar, on the right, click the Bar Chart button.
To generate a Split View bar chart:
In the Graph Toolbar, on the right, click the Bar Chart button, and on the left, click the Split View button.
Vertical color-coded bars appear parallel to each other for each log (default) in the system. You can instead show the distribution of
events for each application or server in the system, by selecting in the adjacent Distribute By selection box, Applications or Servers.
A legend appears on top of the graph, showing the color that represents each entity (log, application, or server).
To generate a Stack View bar chart:
In the Graph Toolbar, on the right, click the Bar Chart button, and on the left, click the Stack View button.
Horizontal color-coded bars appear parallel to each other for each log (default) in the system. You can instead show the distribution of
events for each application or server in the system, by selecting in the adjacent Distribute By selection box, Applications or Servers.
The legend appears on top of the graph, showing the color that represents each entity (log, application, or server).
Generating a Line Chart
A line chart shows how the number of events matching the search query changed from one point in time to the next.
A line chart can be displayed in different visualizations:
Split View – the default: A line appears for each log (the default) defined in the system. You can instead show a single line for each
application or server in the system, by selecting in the adjacent Distribute By selection box, Applications or Servers.
Summary View – A single line represents all entities in the system (logs, applications, and servers) that have events.
To generate a split view line chart:
In the Graph Toolbar, on the right, click the Line Chart button.
An individual line is drawn to show the distribution of events in each log (default) in the system. You can instead show the distribution of
events for each application or server in the system, by selecting in the adjacent Distribute By selection box, Applications or Servers.
A legend appears on top of the graph, showing the color that represents each entity (log, application, or server).
To generate a summary view line chart:
In the Graph Toolbar, on the right, click the Line Chart button, and on the left, click the Summary View button.
A single line is drawn to show the distribution of events in the entire system.
Generating a Pie Chart
A pie chart shows the distribution of events over the applications, logs, or servers of the system.
To generate a pie chart:
In the Graph Toolbar, on the right, click the Pie Chart button.
The distribution of events in each log (default) in the system is illustrated in a pie, with each portion of pie shaded in the color
representing the log, and in the size relative to the percentage of events. You can instead show the distribution of events for each
application or server in the system, by selecting in the adjacent Distribute By selection box, Applications or Servers.
Viewing the Distribution of Results in Logs
You can hover over any bar or line in your graph to see the number of matching events that were produced by each log. You can then click any
log in the chart to view the log's events in the log viewer under the XpoLog tab. There, you can see the same information that is displayed as free
text in the search result events, in column format.
Zooming In/Out of a Time Period
From the main graph or zoom-in graph (below the main graph), you can zoom into any time period, so that you can see a more detailed
breakdown of events over a smaller period of time. For example, a search that runs for a time period of seven days shows the distribution of
events that match the search criteria, per day. You can then zoom into any time period (day) to see the distribution of events during that day, and
you can zoom in further to see the distribution of events in a specific hour on that day. The zoom-in graph highlights the zoomed-in time-period
relative to the search context in the original time period of the search query.
At any point, you can zoom out to re-display the graphs resulting from the original search query.
To zoom into a time period:
1. On the time axis of the graph (either the main graph or zoom-in graph), hover on a time until a vertical line appears through the time.
2. Drag the vertical line left/right to the beginning/end of the time period.
The selected time period is highlighted in blue.
3. Release the mouse button.
The search on the zoomed-in time period runs and the results appear in the main graph.
The time period of the zoomed in area is highlighted in the Zoom-In graph below the main graph.
The Zoom Out button appears, enabling you to zoom out to the original display. The time period of the search is automatically changed
to Custom.
4. You can repeatedly zoom in (steps 1 to 3) to see a more detailed distribution of events.
To zoom out:
In the graph, click the Zoom Out button.
The original graph is displayed for the time period that you selected for the search query. At this point, the Zoom Out button is no longer
displayed.
Using the Search Results Summary Panel
The Search Results Summary Panel is displayed above and below the search results area, enabling you to conveniently view a summary of the
results of the search, set the number of results per page, and directly navigate to any page in the search results.
Setting the Number of Results Per Page
By default, a search displays 25 results per page - in the case of a simple search, 25 events per page; in the case of a complex search, 25 results
per summary table. You can set the system to display a different number of results per page - either 10, 50, or 100.
To set the number of results per page:
In the Search Results Summary Panel, in the Results Number per Page textbox, select from the dropdown list the number of results to
display per page.
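The resulting number of pages follows directly from the total and the page size; a trivial sketch:

```python
import math

def page_count(total_results, per_page=25):
    """Number of result pages for a given page size (10, 25, 50, or 100)."""
    return math.ceil(total_results / per_page)
```

For example, 101 results at the default 25 per page occupy 5 pages.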
Navigating to a Page of Results
You can navigate to any page of search results directly from the Search Results Summary Panel.
To navigate to a page of search results:
1. In the Search Results Summary Panel, in the Page Selection Area, click the Previous matching events and Next matching events
icons to display the previous/next page numbers, until the page number that you want comes into view.
2. Click the desired page number.
The results on that page are displayed in the Search Results Area.
Analyzing Simple Search Result Events
Running a Simple Search displays all events that match the search query.
The following is displayed for each event:
The timestamp of the event, i.e. the date and time that the event occurred
The overall severity of the event, which is the severity of the highest problem found in the event (color-coded; high, medium, low, or none)
(provided that Analytics is enabled)
The fields and field values of the event
The log, server, and applications of the event.
In each event, the text that you searched for is highlighted in yellow. In addition, provided that Analytics is enabled (the system default), the font of
the searched text is colored according to its severity. Also, all field values that Analytics identifies as problematic are color-coded in the
event according to their severity.
Severities are color-coded as follows:
Red – high severity problem
Orange – medium severity problem
Yellow – low severity problem
Hovering over any event displays a menu which enables you to open the Analytics of the event, or view the event in the log viewer.
You also have the option to expand/collapse all events, or alternately, to hide/show the Analytics of all events.
Expanding/Collapsing Events
Events that span more than a single line, or that are very long, are followed by an Expand Events icon.
XpoSearch enables you to view the detailed information of a single event, or to open, at the click of a button, all events that span more than a
single line or have very long definitions. This feature makes it possible for you to expand an event to trace its cause.
To expand all events:
In the Events toolbar, click the Expand Events icon.
The entire event information opens. The toolbar icon changes to the Collapse Events icon, enabling you to later close the event
information.
To expand a single event:
At the end of an event that has an Expand Event icon, click the icon.
The detailed information of the event opens. The icon changes to the Collapse Event icon, enabling you to later close the event
information.
To collapse all events:
In the Events toolbar, click the Collapse Events icon.
The detailed information of all events closes. The toolbar icon changes to the Expand Events icon, enabling you to later reopen the
detailed information of all the events.
To collapse a single event:
At the end of the event that has a Collapse Event icon, click the icon.
The detailed information of the event closes. The icon changes to the Expand Event icon, enabling you to later reopen the detailed
information of the event.
Showing/Hiding Analytics of All Events
By default, while XpoSearch searches for all events that match your search query, it also performs Analytics on all events: color-coding the
searched field values according to their severity, displaying the severity of the events, and suggesting additional potentially problematic field
values, which it also color-codes in the events. At the click of a button, you can hide the Analytics results.
To hide Analytics of all events:
In the Events toolbar, click the Hide Analytics icon.
The severities of the events are not displayed, and problematic field values are not color-coded. The toolbar icon changes to the Show
Analytics icon, allowing you to show Analytics at a later time.
To show Analytics of all events:
In the Events toolbar, click the Show Analytics icon.
The severities of the events are displayed, and problematic field values are color-coded. The toolbar icon changes to the Hide Analytics
icon, allowing you to hide Analytics at a later time.
Viewing the Analytics of an Event
XpoSearch performs Analytics on all events resulting from a search, and enables you to navigate to the detailed Analytics of any event.
To view the Analytics of an event:
1. Navigate to the page which contains the event whose Analytics you would like to view, and hover over the event.
A menu appears to the right of the event.
2. In the menu, select Analytics.
The Analytics page for this event opens under the Analytics tab. See XpoLog Analytics for a detailed explanation of this screen.
Viewing an Event in the Log Viewer
You can view the details of any event in the log viewer. The log viewer displays the same fields as those displayed in the result event, but in an
organized tabular format. Also, in the log viewer you can see the events that preceded and followed the selected event, helping you understand
what led to the event.
To view an event in the log viewer:
1. Navigate to the page which contains the event that you want to view in the log viewer, and hover over the event.
A menu appears to the right of the event.
2. In the menu, select Log Viewer.
A notification box opens, informing you that the system is loading the log. If popups are blocked, the Popup Blocked notification box
appears; in this case, click Continue.
The Log Viewer opens under the XpoLog tab. See Log Viewer for a detailed explanation of this screen.
Analyzing Complex Search Results
A Complex Search generates a summary table which shows the number of events per field in the query.
Corresponding to the complex search parameters, the results summary table presents data such as aggregation values, time, max, min, or other
values, according to the function executed in the complex search.
The values are clickable; click an aggregated event to drill down to the relevant log records.
To understand the results data table, it is important to understand the complex search: how the data was filtered, what time interval the
system computed, and which functions and group by fields were used.
Performing a Complex Search
Complex search queries can be run on search results for advanced computation and reporting on matching log events.
During a simple search, XpoLog extracts all the fields from the events, and displays in the Augmented Search Pane, under Interesting Fields,
what it finds to be the most interesting fields. You can run a complex search on the results of the simple search by clicking any of these interesting
fields and selecting one of the available functions that can be performed on the field.
The results of a complex search are presented in tabular format, as opposed to the simple search, which displays each and every event that
meets the search criteria.
The default complex search that is triggered by selecting a field is based on the search query that was executed, grouping the search results by the selected interesting field.
To perform a complex search:
1. In the Augmented Search Pane, under Interesting Fields, choose a field to include in your complex search. At the bottom of the list,
you can click Load more to choose from other interesting fields.
A menu of functions that can be performed on the field is displayed.
2. From the menu, select the function to perform on the interesting field.
The search query is automatically updated, transforming the simple search to a complex search, and the search runs, displaying a result
summary table.
Alternatively, you can type a complex search query into the Search Query Panel (see Complex Search Syntax Reference).
Complex Search Examples
The following table contains examples of complex search queries:
error | first 10
Searches system log events for error, and shows the first 10 results only.

error | count | group by ext.log
Searches the system log events for error, and shows the error count per log.

error | count | group by ext.log | order by count asc
Searches the system log events for error, and shows the error count per log in ascending order of count.

error | count | group by ext.log | order by count desc
Searches the system log events for error, and shows the error count per log in descending order of count.

* in log.log4J | count | group by priority
Runs on all events in the log log4J and aggregates unique values in the log field priority.
Note: A log field named priority is required.

* in log.log4j log | count | group by priority | display count as Unique Count
Same as the previous query example, with the exception that it “renames” the count column to Unique Count.

* in log.access log | count | group by status
Runs on all events in the log access log and aggregates unique values in the log field status.
Note: A log field named status is required.

* in log.access log | count | group by url
Runs on all events in the log access log and aggregates unique values in the log field url.
Note: A log field named url is required.

* in log.access log | avg bytes sent | group by remote host
Calculates the average of the log field bytes sent for each unique remote host in the log access log.
Note: Log fields with the names bytes sent and remote host are required. Also, bytes sent should be numeric so that the average of its values can be calculated.

* in log.access log | avg bytes sent | group by remote host | display avg in volume format
Calculates the average of the log field bytes sent for each unique remote host in the log access log, and formats the value of the bytes sent average in volume format instead of a regular numeric value.
Note: Log fields with the names bytes sent and remote host are required. Also, bytes sent should be numeric so that the average of its values can be calculated.

* in log.access log | avg bytes sent | group by remote host | display avg in volume format (bytes, MB)
Same as the previous query example, with the exception that in this example, volume format receives the parameters (INPUT_VOLUME_UNIT, DISPLAY_VOLUME_UNIT). XpoLog treats the value in the log field bytes sent as bytes, and presents the result in Megabytes.
Available volume units: B, KB, MB, GB

* in log. IIS Log Test | avg time-taken | group by c-ip | display avg in time format
Calculates the average of the log field time-taken for each unique c-ip in the log IIS Log Test, and formats the value of the time-taken average in time format instead of displaying a regular numeric value.
Note: Log fields with the names time-taken and c-ip are required. Also, time-taken should be numeric so that the average of its values can be calculated.

* in log. IIS Log Test | avg time-taken | group by c-ip | display avg in time format (ms,minutes)
Same as the previous query example, with the exception that in this example, time format receives the parameters (INPUT_TIME_UNIT, DISPLAY_TIME_UNIT). XpoLog treats the value in the log field time-taken as milliseconds, and presents the result in minutes.
Available time units: microsec, ms, sec, min, hour, day

* in log.access log | count, max bytes sent, min bytes sent, avg bytes sent | group by remote host
Calculates the number of occurrences, as well as the maximum, minimum, and average values of the log field bytes sent, for each unique remote host in the log access log.
Note: Log fields with the names bytes sent and remote host are required. Also, bytes sent should be numeric so that the average of its values can be calculated.

error or exception | count | interval 1 day
Counts the number of errors and exceptions in a log on a daily basis (i.e., the number of errors/exceptions per day).
Running a Recent or Popular Search
XpoLog saves the n most recent searches, as well as the n most popular searches. This enables you to run a recent or popular search without
having to retype the search query.
To run a recent/popular search:
1. In the Search Query Panel, click the Search Options icon.
A window with tabs for various search options opens.
2. Click the Search History tab.
The searches that you have conducted recently, as well as popular searches, are listed.
3. Click the search that you want to run.
The search runs and results are displayed.
Saving a Search
XpoSearch enables users with sufficient permission to save a search so that it can be quickly run in the future. This is a very handy feature for
useful or interesting searches that you expect to run again, as it saves you the time of reformulating and retyping the search string. For a detailed
explanation on how to run a saved search, see Running a Saved Search.
Saving a search is very quick and simple – the minimum that is required is for you to define a name for the search; the search string, which is also
required, is automatically input by the system. You also have the option of defining the following:
The time range for which the search is to be run (relative to the time that the search is initiated), such as Last 3 days. If the time range is
not defined, the saved search runs on the time range selected in the Search Query Panel at the time that you ran the saved search.
A description of the search.
An indication of whether or not this search is to be included in the analysis that the Analytics engine performs on logs. A severity of None
excludes this search from the analysis that the Analytics engine performs on logs. A severity of Low, Medium, or High indicates that the
Analytics engine is to include this search in the analysis it performs on logs.
To save a search:
1. In the Search Query Panel, click Save Search.
2. The Save Search dialog box opens. The search query is automatically input into Search Term.
3. In Name, type a meaningful name for the search (mandatory).
4. In Description, type a description of the search (optional).
5. In Time Range, select from the dropdown list the period of time relative to the current date that the search is to be conducted (optional).
Leave blank for XpoSearch to run the saved search on the time range that is displayed in the Search Query Panel at the time that the
saved search runs.
6. In Severity, select from the dropdown list a severity of Low, Medium, or High if you want the Analytics engine to include this search in
the analysis it performs on logs. Otherwise, leave the severity at None, to exclude this search from the analysis that the Analytics engine
performs on logs.
7. Click Save.
The search is saved.
Deleting a Saved Search
You can delete a previously saved search at any time.
To delete a saved search:
1. In the Main Menu, select Administration, and in the sub-menu that appears, select Saved Searches.
A list of saved searches is displayed.
2. Select the search to delete, and click the Delete button.
A Delete Confirmation box is displayed.
3. Click Yes to confirm deletion.
The search is deleted, and no longer appears in the list of saved searches.
Editing a Saved Search
You can edit the name, search query, time period, or description of a previously saved search at any time.
To edit a saved search:
1. In the Main Menu, select Administration, and in the sub-menu that appears, select Saved Searches.
A list of saved searches is displayed.
2. Select the search to edit, and click the Edit button.
The Save Search dialog box is displayed.
3. Modify the definition of the search, as required (see Saving a Search), and then click Save.
The modifications to the saved search are saved.
Running a Saved Search
You can run searches that have previously been saved by simply opening up a list of saved searches and clicking the one that you want to run.
Alternatively, if you know the name of the saved search, you can type in the search query search. followed by the search name.
Method 1
To run a saved search:
1. In the Search Query Panel, click the Search Options icon.
A window with tabs for various search options opens.
2. Click the Saved Searches tab.
The searches that you have saved in the past are listed.
3. In the Saved Searches list, click the search to run.
The search is run and results are displayed.
Method 2
To run a saved search:
In the Search Query Panel, type search.search_name.
The search is run and results are displayed.
Note: As you begin typing "search", a dropdown list with the names of all saved searches opens. Typing more characters of the saved search
name narrows down the list. You can either complete typing the name, or at any point, select the desired saved search name.
Saving a Search as a Gadget
XpoLog Search enables users with sufficient permission to save any search as a gadget.
Saving a search as a gadget requires you to define:
A name for the gadget
The view of the gadget – chart or table
The dashboard under which the gadget is to be placed – an existing or new dashboard
The app under which the dashboard is to be placed - an existing or new app
The search string and time period of the search are automatically input by the system from the search query of the Search being saved. You can
change the time period to have the gadget run the search on events from a different range of time.
To save a search as a gadget:
1. In the Search Query Panel, click Save Gadgets.
2. The Save Gadget dialog box opens. The search query is automatically input into Search Query.
3. In Gadget Title, type a meaningful name for the gadget (mandatory).
4. In Time Range, leave the time range of the search that ran (default), or select a different time range.
5. In Gadget View, select Chart or Table.
6. In App, select one of the following:
Existing – to place the gadget in an existing app. In the adjacent drop-down list, select the name of the app.
New – to place the gadget in a new app. In the adjacent box, type the name of the new app.
7. In Dashboard, select one of the following:
Existing – to place the gadget in an existing dashboard. In the adjacent drop-down list, select the name of the dashboard.
New – to place the gadget in a new dashboard. In the adjacent box, type the name of the new dashboard.
8. Do one of the following:
Click Save.
The Gadget is saved. The Search is displayed.
Click Save and View Dashboard.
The Gadget is saved, and the dashboard under which it has been saved, is opened.
Saving a Search as a Monitor
XpoLog Search enables users with sufficient permission to save any search as a monitor so that it is run automatically by the system at scheduled
intervals. Also, the results of the monitor are automatically sent to the emails of recipients that you specify. This saves users the time that it takes
to set up the search and send the results to recipients.
Saving a search as a monitor is very quick and simple – the minimum that is required is for you to define a name for the monitor; the search
string, which is also required, is automatically input by the system. You also have the option of defining the following:
The frequency at which the monitor is to be run. Do not specify a frequency if you want to run the monitor manually.
The emails of recipients to whom the system should automatically send alerts resulting from the monitor.
To save a search as a monitor:
1. In the Search Query Panel, click Save Monitor.
2. The Save Monitor dialog box opens. The search query is automatically input into Search Query.
3. In Name, type a meaningful name for the search (mandatory).
4. In Schedule, do one of the following:
Leave blank if you intend to run the monitor manually.
Select the frequency at which the system should automatically run the monitor: select from the dropdown list the unit of time
(Seconds, Minutes, or Hours) and type the number of units.
5. In Email, type the email addresses to which to send alerts. Separate email addresses with a comma.
6. Click Save.
The monitor is saved.
Exporting a Search to a PDF
You can save search results in a PDF file for later reference.
To export a search to a PDF:
In the Search Query Panel, click Export to PDF. A notification appears, informing you that the PDF is being generated. Then, the PDF of
the search results opens.
Note: If pop-ups are blocked, the system sends you a notification. In this case, click Continue in the notification message to open the PDF.
Exporting a Search to a CSV
You can save search results in a CSV file for later reference.
To export a search to a CSV:
In the Search Query Panel, click Export to CSV. A notification appears, informing you that the CSV is being generated. Then, the CSV of
the search results opens.
Note: If pop-ups are blocked, the system sends you a notification. In this case, click Continue in the notification message to open the CSV.
Complex Search Syntax Reference
A complex search is used to perform one or more complex operations on simple search results, so that search results can be summarized in a
table for convenient analysis, according to criteria that you choose. The basis of the complex search structure is the pipe character (|), which
indicates to XpoSearch to input the results of the search preceding the pipe to the complex search following the pipe.
The general syntax of a complex search is as follows:
search query | [function | [group] | [view]] ([function | [group] | [view]])...
where,
search query – a simple search
function – an operation that is applied on the results of the search preceding the pipe. Available functions: count, min, max, avg, sum, time,
start time, end time, country, country code, city, region, execute
group – grouping of results by a specific group type, such as columns, logs, servers, files, or applications. Available Group operations: group by,
interval
view – specifies how to display the results. Available View operations: first, last, order by, display, where, display only, geoip, asc, desc,
display first 10
Grouping can only be according to a single group type. However, the group type can have a single or multiple variables.
A function must precede grouping, although it does not necessarily have to immediately precede it – a view can come between the
function and group command.
There can be multiple View types.
The Complex Search Syntax is iterative.
In the following example, there is one function (count), one grouping (group by) by two variables (event, user), and three views (order by ... desc, first, display as):
in app.windows event logs | count | group by event, user | order by count desc | first 10 | display count
as Our Example
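The pipe structure above can be sketched in plain Python (an illustration only, not XpoLog code; the sample events below are invented stand-ins for windows event log records):

```python
from collections import Counter

# Invented sample events standing in for "app.windows event logs" records.
events = [
    {"event": "login failed", "user": "alice"},
    {"event": "login failed", "user": "bob"},
    {"event": "disk full",    "user": "alice"},
    {"event": "login failed", "user": "alice"},
]

# function + group: "count | group by event, user"
counts = Counter((e["event"], e["user"]) for e in events)

# views: "order by count desc | first 10"
rows = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:10]

# The final view, "display count as Our Example", would only rename the
# count column in the summary table; it does not change the values.
for (event, user), count in rows:
    print(event, user, count)
```

Each pipe stage consumes the output of the previous stage, which is why a function must come before its grouping, and why views can be chained freely at the end.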
This chapter provides you with a reference to all the search commands available for your use in a complex search, including their syntax,
description, and examples of use. You can also build complex search queries using a combination of these search commands. Complex search
queries that run in the XpoSearch console, can be visualized as gadgets in XpoLog Dashboards.
Use case examples of such commands are provided in Complex Search Examples.
avg
Synopsis
Calculates the average of the values in a specified column of the search query results.
Syntax
avg [column_name]
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
Optional Arguments
None
Description
For each event in the search query results that has the specified column_name with a numeric value, adds the value to the cumulative sum,
and when it has reached the last event, divides the cumulative sum by the number of events to get the average.
Examples
Example 1:
* in log.access | avg Bytes Sent
From the events in access log, returns the average of the values in column Bytes Sent.
Example 2:
http in log.iis log | avg time-taken | group by sc-status
From the events in log.iis log that contain http in their column values, returns the average of the values in column time-taken, grouped
according to the value of the sc-status column.
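The avg-with-grouping computation can be sketched as follows (a Python illustration under invented data, not XpoLog internals; the field names mirror Example 2):

```python
from collections import defaultdict

# Invented events with an sc-status field and a numeric time-taken field.
events = [
    {"sc-status": "200", "time-taken": 10.0},
    {"sc-status": "200", "time-taken": 30.0},
    {"sc-status": "404", "time-taken": 5.0},
]

sums, counts = defaultdict(float), defaultdict(int)
for e in events:
    try:
        value = float(e["time-taken"])  # only numeric values participate
    except (KeyError, ValueError):
        continue
    sums[e["sc-status"]] += value
    counts[e["sc-status"]] += 1

# One average per group, exactly as the summary table presents it.
averages = {k: sums[k] / counts[k] for k in sums}
print(averages)
```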
avgif
Synopsis
Calculates the average of the values in a specified column of the search query results based on a query to be executed on the record.
Syntax
avgif [column_name] “[search_query]”
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
search_query
Syntax:
Description: The search query to be executed on the record
Optional Arguments
None
Description
For each event in the search query results that has the specified column_name with a numeric value and that matches search_query, adds the
value to the cumulative sum, and when it has reached the last event, divides the cumulative sum by the number of matching events to get the average.
Examples
Example 1:
* in log.access | avgif Bytes Sent "status=200"
From the events in access log, returns the average of the values in column Bytes Sent only if the value of column status is 200.
Example 2:
* in log.iis log | avgif time-taken "cs-host contains http" | group by sc-status
From the events in log.iis log that contain http in their cs-host column, returns the average of the values in column time-taken, grouped
according to the value of the sc-status column.
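The difference between avg and avgif is the extra condition; a minimal sketch (invented data, not XpoLog code) for Example 1 above:

```python
# avgif Bytes Sent "status=200": average the column only over matching events.
events = [
    {"status": "200", "bytes_sent": 100},
    {"status": "200", "bytes_sent": 300},
    {"status": "500", "bytes_sent": 999},  # excluded by the condition
]

matching = [e["bytes_sent"] for e in events if e["status"] == "200"]
avg_if = sum(matching) / len(matching) if matching else None
print(avg_if)
```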
city
Synopsis
Displays the city names extracted from the IP address column in the search result events.
Syntax
city [IP_address_column_name]
Required Arguments
IP_address_column_name
Syntax:
Description: The name of the column header that has IP address values.
Optional Arguments
None
Description
For each event that has the specified IP_address_column_name with an IP address value, extracts the city name from the IP address, using an
internal database.
Examples
Example 1:
* in log.access | city IPaddress1
For each event in log access, extracts the city name from the IP address in column IPaddress1.
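XpoLog resolves city names through its internal GeoIP database; the lookup can be sketched like this, where GEO_DB is a tiny invented stand-in for that database (the IP-to-city pairs are illustrative, not authoritative):

```python
# Invented stand-in for XpoLog's internal IP-to-city database.
GEO_DB = {"8.8.8.8": "Mountain View", "1.1.1.1": "Sydney"}

events = [{"IPaddress1": "8.8.8.8"}, {"IPaddress1": "1.1.1.1"}]

# city IPaddress1: map each event's IP column to a city name.
cities = [GEO_DB.get(e["IPaddress1"], "unknown") for e in events]
print(cities)
```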
count
Synopsis
A function that counts the number of search result events.
Syntax
count
Required Arguments
None
Optional Arguments
None
Description
When used following the initial simple search query, returns the number of events resulting from the search. When used iteratively, counts the
number of results returned from the complex search preceding the pipe.
Examples
Example 1:
* in log.access | count
Returns the number of events in log access.
Example 2:
* in log.application | count | group by event | order by count desc
Returns the count of each event in the log application, in descending order.
Example 3:
* in log.access | count | group by remote host | interval 15 minutes | count | interval 15 minutes |
display count as number of users
Returns the distinct number of users, in 15-minute intervals, in the log access.
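Example 3's two-stage count can be sketched in Python (invented timestamps and hosts; not XpoLog code): the first count | group by remote host | interval produces one row per (interval, host), and the second count then counts those rows, which amounts to distinct hosts per interval.

```python
from collections import defaultdict

INTERVAL = 15 * 60  # 15 minutes, in seconds

# Invented events: epoch-second timestamps plus a remote host field.
events = [
    {"time": 0,    "remote_host": "10.0.0.1"},
    {"time": 60,   "remote_host": "10.0.0.1"},
    {"time": 120,  "remote_host": "10.0.0.2"},
    {"time": 1000, "remote_host": "10.0.0.3"},
]

# Stage 1: one entry per (interval bucket, host).
hosts_per_interval = defaultdict(set)
for e in events:
    bucket = e["time"] // INTERVAL
    hosts_per_interval[bucket].add(e["remote_host"])

# Stage 2: counting those entries gives distinct hosts per interval.
distinct_users = {bucket: len(hosts) for bucket, hosts in hosts_per_interval.items()}
print(distinct_users)
```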
countif
Synopsis
A function that counts the number of search result events based on a query to be executed on the record.
Syntax
countif “[search_query]”
Required Arguments
search_query
Syntax:
Description: The search query to be executed on the record
Optional Arguments
None
Description
When used following the initial simple search query, returns the number of events resulting from the search that match search_query. When
used iteratively, counts the number of results returned from the complex search preceding the pipe that match search_query.
Examples
Example 1:
* in log.access | countif status=200
Returns the number of events containing status 200 in log access.
Example 2:
* in log.application | countif message contains error | group by event | order by countif desc
Returns the count of each event containing error in the log application, in descending order.
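As with avgif, countif simply adds a condition to the count; a minimal sketch under invented data:

```python
# countif status=200: count only the events matching the condition.
events = [{"status": 200}, {"status": 200}, {"status": 404}]

count_if = sum(1 for e in events if e["status"] == 200)
print(count_if)
```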
country
Synopsis
Displays the country names extracted from the IP address column in the search result events.
Syntax
country [IP_address_column_name]
Required Arguments
IP_address_column_name
Syntax:
Description: The name of the column header that has IP address values.
Optional Arguments
None
Description
For each event that has the specified IP_address_column_name with an IP address value, extracts the country name from the IP address, using
an internal database.
Examples
Example 1:
* in log.access | country IPaddress1
For each event in log access, extracts the country name from the IP address in column IPaddress1.
country code
Synopsis
Displays the country codes extracted from the IP address column in the search result events.
Syntax
country code [IP_address_column_name]
Required Arguments
IP_address_column_name
Syntax:
Description: The name of the column header that has IP address values.
Optional Arguments
None
Description
For each event that has the specified IP_address_column_name with an IP address value, extracts the country code from the IP address, using
an internal database.
Examples
Example 1:
* in log.access | country code IPaddress1
For each event in log access, extracts the country code from the IP address in column IPaddress1.
display
Synopsis
Changes the display names, formats, and/or time units of column(s) in the summary table resulting from the complex search(es) preceding the
pipe character.
Syntax
display [Result_Column_Name] (as [New_Column_Name]) (in [Format_Type]
format) (["Input_Unit"], ["Output_Unit"]) (, [Result_Column_Name] (as [New_Column_Name]) ...)
Required Arguments
Result_Column_Name
Syntax:
Description: The name of the column header in the summary table resulting from the complex search, whose name, format, or output unit you
want to change.
Optional Arguments
New_Column_Name
Syntax:
Description: The new display name of the column header in the summary table.
Format_Type
Syntax: number, simple, time, date, volume, regexp, or expression
Description: The display format of the column header values in the summary table. See format.
Description
This function is used to change the display mode of any of the column names and/or values in the summary table resulting from the Complex
Search, by:
Changing the column name to a new column name.
Displaying the column values in a specified format.
Displaying the column values in a specified output unit.
Assuming that the input unit of the column values is the specified unit, and converting it to the specified output unit.
The display of several columns in the summary table of a complex search can be changed by placing them in a comma-separated list.
Note: If the same function is applied to different fields, you can set the display name in the function definition itself, by specifying
FUNCTION COLUMN_NAME as DISPLAY_NAME. See Example 3.
Examples
Example 1:
* in log.access | count , avg Bytes Sent | group by url | display avg as Average Bytes in volume format
For each URL in the access log events, show the number of log events and the average of the Bytes Sent column. In the table, replaces the avg
header with Average Bytes, and shows the values in volume format in Bytes (default).
Example 2:
* in log.access | avg time taken | display avg in time format(“SEC”,”MIN”)
In the access log events, calculates the average of the time taken column values, assumes that the input value is in seconds, and converts and
displays it in minutes.
Example 3:
* in log.access | avg time taken as Average Time Taken, avg Bytes Sent as Average Bytes Sent
In the access log events, calculates the average of the time taken and bytes sent column values, setting a result column name for each one
at the function definition level.
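The unit conversion that display ... in volume format (bytes, MB) performs can be sketched as follows. This is an illustration only: it assumes a 1024-based unit table for the documented units B, KB, MB, GB, which may or may not match XpoLog's actual conversion.

```python
# Assumed 1024-based unit table (illustrative; XpoLog's base is not documented here).
UNIT_BYTES = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

def convert_volume(value, input_unit, output_unit):
    """Interpret value in input_unit and express it in output_unit."""
    return value * UNIT_BYTES[input_unit] / UNIT_BYTES[output_unit]

avg_bytes_sent = 5_242_880  # invented average, interpreted as bytes
print(convert_volume(avg_bytes_sent, "B", "MB"))
```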
display only
Synopsis
Specifies the names of the column names resulting from the complex search that are to be displayed in the summary table, and optionally defines
new display names, formats, and/or time units for these column name(s) in the summary table.
Syntax
display only [Result_Column_Name] (as [New_Column_Name]) (in [Format_Type]
format) (["Input_Unit"], ["Output_Unit"]) (, [Result_Column_Name] (as [New_Column_Name]) ...)
Required Arguments
Result_Column_Name
Syntax:
Description: The column_names resulting from the complex search that you want to include in the displayed summary table.
Optional Arguments
New_Column_Name
Syntax:
Description: The new display name of the column header in the summary table.
Format_Type
Syntax: number, simple, time, date, volume, regexp, or expression
Description: The display format of the column header values in the summary table. See format.
Description
You may not want to display all the columns in a summary table that results from a complex search. Using display only, you can
specify the column names that are to appear in the summary table, placing them in a comma-separated list.
This function can also be used to change the display mode of any of these column names and/or values, by:
Changing the column name to a new column name.
Displaying the column values in a specified format.
Displaying the column values in a specified output unit.
Assuming that the input unit of the column values is the specified unit, and converting it to the specified output unit.
Examples
Example 1:
* in log.access | count , avg Bytes Sent | group by url | display only avg as Average Bytes in volume
format
For each URL in the access log events, calculates the number of log events and the average of the Bytes Sent column. In the resulting table,
only shows the avg column, replacing the avg header with Average Bytes, and shows the values in volume format in Bytes (default).
Example 2:
* in log.access | avg time taken | display only avg in time format(“SEC”,”MIN”)
In the access log events, calculates the average of the time taken column values. In the resulting table, only shows the avg column,
and assumes that the input value is in seconds, and converts and displays it in minutes.
dist
Synopsis
Displays the distribution over time of all values under the specified column(s), as they appear in the log(s).
Syntax
dist [column_name]
Required Arguments
column_name
Syntax:
Description: The name of a column header whose values should be listed
Optional Arguments
None
Description
Displays the distribution over time of all values under the specified column(s), as they appear in the log(s).
Examples
Example 1:
* in log.application | dist event
Returns a distribution over time of all values under the column Event in the log Application.
Example 2:
* in log.application | dist event, type
Returns a distribution over time of all values under the columns Event and Type in the log Application.
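A distribution over time amounts to a per-time-bucket tally of each value of the column; a sketch under invented events and an assumed one-hour bucket (not XpoLog code):

```python
from collections import defaultdict

BUCKET = 3600  # assumed bucket size: one hour, in seconds

# Invented events with epoch-second timestamps and an "event" column.
events = [
    {"time": 0,    "event": "error"},
    {"time": 100,  "event": "error"},
    {"time": 200,  "event": "warn"},
    {"time": 4000, "event": "error"},
]

# dist event: for each time bucket, count occurrences of each value.
dist = defaultdict(lambda: defaultdict(int))
for e in events:
    dist[e["time"] // BUCKET][e["event"]] += 1

print({bucket: dict(tally) for bucket, tally in dist.items()})
```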
end time
Synopsis
Displays the time of the last event in the group.
Syntax
end time
Required Arguments
None
Optional Arguments
None
Description
Shows the unformatted time of the last event in the group resulting from the search query. Should be formatted and displayed in date format.
Examples
Example 1:
* in log.access | end time | display end time in date format
Finds the time of the last event in log access, and in the summary table, displays this time in date format under the end time column.
execute
Synopsis
Executes a custom complex computation on search query results.
Syntax
execute [expression] (as result1, result2)
Required Arguments
expression
Syntax: mathematical expression
Description: Performs on the search results, a mathematical expression that the user formulates using the execute search syntax.
Optional Arguments
result1, result2
Syntax:
Description: If the results that the executed expression returns are expected to go into more than one column, the names of the columns
preceded by as must be placed in parentheses following the expression.
Description
Executes on each event in the search query, an expression. If the returned results go into more than one column, they are entered under the
columns whose names appear in parentheses after the expression.
Examples
Example 1:
* in log.access | execute if (total == NULL) THEN (total = 0); if (column.bytes\ sent != NULL &&
column.bytes\ sent != "-") THEN (total = total + column.bytes\ sent);total | group by status | order by
value desc
Computes the total of the bytes sent column of the events in log access per status, and displays the total of each status in descending order of
the total value.
Example 2:
* in log.application_log | count, sum col_name| interval 1 hour | execute result = column.count * 100 /
column.sum ; result | interval 1 hour
Computes the sum of a value in the field col_name on an hourly basis, and computes the percentage of that value out of the total number of
events during that time.
Example 3:
* in log.iis log | avg time-taken | group by cs-uristem | execute if (count1 == NULL) THEN (count1 = 0);if
(count2 == NULL) THEN (count2 = 0);if (count3 == NULL) THEN (count3 = 0);timetaken = column.avg; if
(timetaken > 100 && timetaken < 300) THEN (count1 = count1 + 1);if (timetaken >= 300 && timetaken < 400)
THEN (count2 = count2 + 1);if (timetaken >= 400 && timetaken < 500) THEN (count3 = count3 + 1);map =
mapput(map,"100",count1);map = mapput(map,"300",count2);map = mapput(map,"400",count3);map as type,value |
order by type
Computes, based on the time-taken log field, the count of URLs whose average time-taken fell between 100-300, 300-400, and 400-500
milliseconds.
Example 4:
* in log.process | avg memory | interval 10 minutes | execute MB = column.Avg; if (result == NULL) then
(result=""); diff=0; if (previous != NULL && (MB - previous) > 100) then (diff = MB - previous); key=""; if
(diff > 0) then (key = previousTime + ";" + column.time + ";" + format(previous) + ";" + format(MB)); if
(diff > 0) then (result = mapput(result, key, format(diff))); previous = MB; previousTime= column.time;
result as Start of Time Slot, End of Time Slot, Min Memory, Max Memory, Memory Difference
Computes the difference of an average value of more than 100 units in a 10-minute time slot - for example, an increase of more than 100 MB in
memory in less than 10 minutes, based on a performance log.
Example 5:
* in log.LOG_NAME | execute if (total == NULL) then (total = 0); if (count == NULL) then (count = 0);if
(column.COLUMN_NAME == COLUMN_VALUE) then (count = count + 1); total = total + 1;(count/total)*100
Computes the percentage of the value COLUMN_VALUE in the log column COLUMN_NAME out of all events in the log LOG_NAME.
The same query with a 10% (for example) threshold for monitoring, i.e. if the percentage of the value COLUMN_VALUE in the log column
COLUMN_NAME out of all events in the log LOG_NAME is greater than 10%, it returns a result:
* in log.LOG_NAME | execute if (total == NULL) then (total = 0); if (count == NULL) then (count = 0);if
(column.COLUMN_NAME == COLUMN_VALUE) then (count = count + 1); total = total + 1;(count/total)*100 | where
value > 10
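The percentage computation in the execute examples above can be sketched in Python (the event data and function name below are hypothetical; XpoLog evaluates the expression once per event, carrying variables such as total and count across events):

```python
# Sketch of the execute percentage logic, assuming hypothetical events
# where each event is a dict of column name -> value.
def percent_of_value(events, column, value):
    total = 0
    count = 0
    for event in events:              # the expression runs once per event
        if event.get(column) == value:
            count += 1                # matching events
        total += 1                    # all events
    return (count / total) * 100      # percentage of `value` out of all events

events = [{"status": "200"}, {"status": "404"},
          {"status": "200"}, {"status": "500"}]
print(percent_of_value(events, "status", "200"))  # 50.0
```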
execute search syntax
Custom Operators
Numerical operators
+ - * / : basic operators
% : modulo operator
^ : power operator
Example:
( -1 + 50*2 ) / ( 2^4 )
Boolean operators
~, xor : Xor operators
&&, and : And operators
||, or : Or operators
!, not : Not operators
< : less than operator
> : greater than operator
<= : less than or equal operator
>= : greater than or equal operator
==, equals : equal operators
!=, <> : not equal operators
Examples:
!(A && (B < 10))
NOT ( A XOR ( B equals C ) )
A != 2 || B > 2
"string1" == "string2"
A or B
A or ( B <> C )
String operators
== : the two strings are equal
!= : the two strings are not equal
<> : the two strings are not equal
< : the first string is lexically less than the second one
> : the first string is lexically greater than the second one
<= : the first string is lexically less than or equal to the second one
>= : the first string is lexically greater than or equal to the second one
+ : concatenates strings
Examples:
"string1" == "string2" : false
"string1" + "a" : "string1a"
"abc" > "aaa" : true
"zyx" < "bcd" : false
List operators
+ : concatenates two lists, or a list and an element
- : subtracts a list or an element from another list
in : tests if an element is inside a list
Examples:
(1,2)+(3,4) = (1,2,3,4)
(1,2) + 3 = (1,2,3)
3+(1,2) = (1,2,3)
(1,2,3,4)-(3,4) = (1,2)
(1,2,3,4)-3 = (1,2,4)
2 in (1,2,3) = true
4 in (1,2,3) = false
Other operators
= : sets a variable
[] : absolute value
² : power of 2 operator
% : percent operator
Examples:
A = [ 2 - A ] * 2
2²
10% = 0.1
Conditional operators
if then
if then else
Examples:
if ( A > 2 ) then ("Ok")
if ( A <= 2 ) THEN (B=3) else (B=4)
Custom functions
Described below
Custom Functions
Name Description Function Signature
format - Formats a decimal number. Default format is #.##.
format (n) - returns the decimal number n in the format "#.##".
number - Parses a string to a number (double).
number (s) - parses the string s as a double.
timeformat - Formats a number as a date/time string. Default time format is "MM/dd/yyyy HH:mm:ss.SSS".
timeformat (n,f) - formats the double value n to the date/time string using format f.
mapput - Puts a key/value pair in a map. If the given map was not declared prior to this function, then this function creates it. Note: the function
returns the result map.
mapput (m,k,v) - returns the new map m after putting the value v using the key k.
mapput (m,m2) - returns the new map m after putting the map m2 in it.
mapget - Returns a value from a map using a key.
mapget (m,k) - returns the value in m to which the key k is mapped.
mapkeys - Returns all the map's keys.
mapkeys (m) - returns all the keys of map m.
mapvalues - Returns all the map's values.
mapvalues (m) - returns all the values of map m.
mapremove - Removes a mapping from a map using a key.
mapremove (m,k) - removes from the map m the mapping for key k, if present.
listget - Returns the element in a list at a specified position.
listget (l,i) - returns the element at position i in list l.
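As a rough illustration, the map helper functions above behave like the following Python sketch (the dict-based implementation is an assumption; only the documented signatures are mirrored):

```python
# Python analogues of the XpoLog map helpers; implementation is illustrative.
def mapput(m, k, v=None):
    m = dict(m) if m else {}           # creates the map if not declared yet
    if v is None and isinstance(k, dict):
        m.update(k)                    # mapput(m, m2) variant: merge map m2
    else:
        m[k] = v                       # mapput(m, k, v) variant
    return m                           # the function returns the result map

def mapget(m, k):
    return m.get(k)                    # value to which key k is mapped

def mapkeys(m):
    return list(m.keys())              # all keys of map m

def mapremove(m, k):
    m = dict(m)
    m.pop(k, None)                     # removes the mapping for k, if present
    return m

m = mapput(None, "100", 7)
m = mapput(m, {"300": 2})
print(mapget(m, "100"), mapkeys(m))
```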
Custom Aggregation Functions
Name Description Function Signature
aggAvg - Returns the aggregated average value of a parameter.
aggAvg (p) - returns the aggregated average value of given parameter p.
aggSum - Returns the aggregated sum value of a parameter.
aggSum (p) - returns the aggregated sum value of given parameter p.
aggMax - Returns the aggregated max value of a parameter.
aggMax (p) - returns the aggregated max value of given parameter p.
aggMin - Returns the aggregated min value of a parameter.
aggMin (p) - returns the aggregated min value of given parameter p.
mapAggAvg - Returns an aggregated avg map.
mapAggAvg (k,v) - returns a map in which each value v is the aggregated avg result for given key k.
mapAggSum - Returns an aggregated sum map.
mapAggSum (k,v) - returns a map in which each value v is the aggregated sum result for given key k.
Custom Basic Functions
random - A random value from 0 to 1.
strlen - Computes the length of a string.
sqrt - Square root.
avg - The average of the arguments.
cos - Cosine with a radian argument.
acos - Arc cosine with a radian argument.
asin - Arc sine with a radian argument.
atan - Arc tangent with a radian argument.
exp - Computes Euler's number e raised to the power of the argument.
floor - The largest (closest to positive infinity) double value that is less than or equal to the argument and is equal to a mathematical
integer.
int - Converts the double argument to an integer.
logn - Logarithm in base n: logn( BASE, VAL ).
log10 - Logarithm in base 10.
log - Natural logarithm (base e).
pow - The first argument raised to the power of the second one.
prod - The product of the arguments.
round - The closest long to the argument.
sin - Sine with a radian argument.
tan - Tangent with a radian argument.
degTorad - Converts an angle from degrees to radians.
radTodeg - Converts an angle from radians to degrees.
Complete syntax can be found here: http://www.japisoft.com/formula/doc/index.html
first
Synopsis
Used to display the first specified number of events resulting from a Simple Search, or the first specified number of summary table entries
resulting from a Complex Search.
Syntax
first [number_of_results] for each [group]
Required Arguments
number_of_results
Syntax:
Description: The number of first search results to display
Optional Arguments
for each group
Syntax: for each
Description: The column name for which the first specified number of results is displayed per value.
Description
When used immediately following a Simple Search query, returns the specified number of first events resulting from the search. When used
immediately following a Complex Search query, returns the specified number of first entries from the summary table resulting from the search.
Examples
Example 1:
* in log.access | first 32
Returns the first 32 events from access log.
Example 2:
http in log.iis log| max time-taken | group by c-ip | first 21
Returns the max time-taken value from events in log.iis log having http in their column values, for the first 21 c-ip values only.
Example 3:
error in log.xpologlog | count | group by class, method | first 2 for each class
Returns the 2 methods that appeared most in each class in log.xpologlog log having error in their column values.
Example 4:
error in log.xpologlog | count | group by message | interval 1 hour | first 2 for each interval
Returns the 2 messages that appeared most in each 1-hour interval in log.xpologlog log having error in their column values.
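The "for each" variants in Examples 3 and 4 keep only the first N results per group value. A minimal Python sketch of that behavior (rows and column names are hypothetical):

```python
# Sketch of "first 2 for each class": keep only the first n summary rows
# per value of the grouping column, preserving the query's row order.
def first_per_group(rows, n, group_col):
    seen = {}
    out = []
    for row in rows:                      # rows arrive in query order
        key = row[group_col]
        seen[key] = seen.get(key, 0) + 1
        if seen[key] <= n:                # keep only the first n per group
            out.append(row)
    return out

rows = [
    {"class": "A", "method": "m1"}, {"class": "A", "method": "m2"},
    {"class": "A", "method": "m3"}, {"class": "B", "method": "m4"},
]
print(first_per_group(rows, 2, "class"))
```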
format
Synopsis
Displays a specified column in the complex search summary table in a specified format. Can be used only with Display, Display only, and
Group by commands.
Syntax
in [format_type] format (["Input_Unit"],)(["Output_Unit"])
Required Arguments
format_type
Syntax: number, simple, time, date, volume, regexp, expression or query
Description: The format in which to display the values of a specific column in the complex search summary table. For a time format_type, if no
unit appears after time format, XpoLog assumes that the column value is in milliseconds and displays it in the maximal possible unit (for example,
if the value is 2000, the output is 2 seconds; if the value is 120000, the output is 2 minutes, etc.).
Optional Arguments
"Input_Unit"
Syntax: Volume Units - B, KB, MB, GB; Time Units: microsec, ms, sec, min, hour, day
Description: The input unit of the format type.
"Output_Unit"
Syntax: Volume Units - B, KB, MB, GB; Time Units: microsec, ms, sec, min, hour, day
Description: The unit in which to convert the format type.
Note: If only one unit appears in the syntax, XpoLog assumes that it is the output unit, and that the input value is in milliseconds (for time) or
bytes (for volume). If no unit appears in the syntax, XpoLog outputs the log value in milliseconds (for time) or bytes (for volume).
Description
Displays the column values in the specified format, assuming the default input and output units, if they are not specified, and converting to a
specific output unit from a specific input unit, if specified.
Text can be formatted into the following format types:
number – formats the text in the column to number format: ("#.##") – the decimal format of the number
simple – displays columns in difference format: ("column.name1 – column.name2") – replaces the columns with the values from the result
time – displays the value in a time format of the default unit or of the indicated unit: ("[OUTPUT_UNIT]") or ("[INPUT_UNIT]","[OUTPUT_UNIT]")
– displays the column in the output unit and uses the input unit in case it is different from milliseconds.
Time units: [microsec,ms,sec,min,hour,day]
date – displays the value in date format: ("[SIMPLE DATE FORMAT]") – changes the date format
volume – displays the value in volume format: ("[OUTPUT_UNIT]") or ("[INPUT_UNIT]","[OUTPUT_UNIT]") – displays the column in the output
unit and uses the input unit in case it is different from bytes.
Volume units: [B,KB,MB,GB]
regexp – uses a regexp to extract values from the data: ("[REGEXP]") – displays the first group that is found by the regular expression
Display Column_Name in regexp format("REGEXP"), where REGEXP is the regular expression to be executed on the value in Column_Name
expression – displays the column result after performing an expression on the original contents: ("[EXPRESSION]") – uses an expression to
calculate a different result value
query – displays an aggregated result broken into groups based on search query constraints.
...group by FIELD_NAME in query format
("SEARCH_QUERY_1","RESULT_NAME_1","SEARCH_QUERY_2","RESULT_NAME_2",...,"SEARCH_QUERY_N","RESULT_NAME_N")
It is possible to use '*' at the end as a query to group the undefined results of the other queries:
status != NULL in log.access | count | group by status in query format ("status=200","VALID","*","ALL_THE_REST")
exception – displays an aggregated result broken into groups based on the number of lines in the stack trace.
...group by FIELD_NAME in exception format ("NUMBER_OF_LINES","SHOW_MESSAGE")
error in log.log4j log | count | group by message in exception format ("1","true")
replace – uses replace to substitute a value from the data with a custom value.
...group by FIELD_NAME in replace format
("REGEXP_1","REPLACE_TEXT_1","REGEXP_2","REPLACE_TEXT_2",...,"REGEXP_N","REPLACE_TEXT_N")
status != NULL in log.access | count | group by status in replace format ("200","OK","302","Resource temporarily moved to a new
location","304","Not Modified")
UserAgentDetect – displays an aggregated result broken into groups based on types to view (browser, version, platform, os).
...group by FIELD_NAME in UserAgentDetect format ("TYPE_1+...+TYPE_N")
status != NULL in log.Access Log | count | group by user agent in useragentdetect format ("browser+version")
Examples – Volume Format: bytes sent column contains numeric values representing volume.
Example 1:
* in log.access | avg bytes sent | display avg in volume format
XpoLog formats avg of bytes sent in volume format, automatically assuming that the log value is in bytes.
Example 2:
* in log.access | avg bytes sent | display avg in volume format("MB")
XpoLog formats avg of bytes sent in volume format, automatically assuming that the log value is in bytes, and converts and outputs the value in
MB.
Example 3:
* in log.access | avg bytes sent | display avg in volume format("KB","MB")
XpoLog formats avg of bytes sent in volume format, assuming that the log value is in KB, and converts and outputs the value in MB.
Examples – Time Format: time taken column contains numeric value representing time.
Example 1:
* in log.access | avg time taken | display avg in time format
XpoLog formats avg of time taken in time format, automatically assuming that the log value is in milliseconds.
Example 2:
* in log.access | avg time taken | display avg in time format("SEC") – formats to seconds
XpoLog formats avg of time taken in time format, automatically assuming that the log value is in milliseconds, and converts and outputs the value
in seconds.
Example 3:
* in log.access | avg time taken | display avg in time format("SEC","MIN") – formats from seconds to minutes
XpoLog formats avg of time taken in time format, assuming that the log value is in seconds, and converts and outputs the value to minutes.
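The unit handling in the time-format examples above - one unit means the output unit with milliseconds assumed as input, two units mean input and output - can be sketched as follows (the conversion factors are standard; the function name is illustrative):

```python
# Sketch of time-format unit handling: no unit -> value stays in ms;
# one unit -> it is the output unit (input assumed ms); two units -> in, out.
TIME_MS = {"microsec": 0.001, "ms": 1, "sec": 1000, "min": 60000,
           "hour": 3600000, "day": 86400000}

def time_format(value, *units):
    if len(units) == 0:
        return value                       # default: milliseconds
    if len(units) == 1:
        in_unit, out_unit = "ms", units[0] # single unit is the output unit
    else:
        in_unit, out_unit = units          # input unit, then output unit
    return value * TIME_MS[in_unit] / TIME_MS[out_unit]

print(time_format(120000, "min"))          # 2.0 (120000 ms -> minutes)
print(time_format(120, "sec", "min"))      # 2.0 (120 sec -> minutes)
```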
Regular Expressions:
1. XpoLog groups by the URL field, which has multiple parts divided by slashes (/), and then uses a regular expression to format the result to
present only part of the URL based on the regular expression criteria, i.e. present only the last part after the last slash (/) in the URL:
URL Example:
[URL] /home/web-main/css/texts.css
XpoLog Query:
* in log.access log | count | group by url as formatted-url | order by count desc | display formatted-url in regexp format (".*/([^/]+)")
Result:
2. XpoLog uses a regular expression to format the Description field, which contains multiple lines with different values, based on the regular
expression criteria, i.e. extract from the entire Description field only the value which comes after 'Account Name:' and group by it only (as
if it were a pre-configured field in the log):
Description Example:
...[Description] An account was logged off.
Subject:
Security ID: S-1-5-21-3480273402-748593870-3636473903-1144
Account Name: xplg
Account Domain: XPOLOG
Logon ID: 0xa078ea24
Logon Type: 3
This event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value.
Logon IDs are only unique between reboots on the same computer.
XpoLog Query:
(*) in log.application | count | group by Description as UserName in regexp format ("Account Name:\s+(\w+)")
Result:
Query Format:
1. XpoLog groups by STATUS, which has multiple values, and then based on query criteria it breaks the result into different pieces:
Status values may vary (200, 302, 404, 500, etc.), but in order to break them into two groups - 200 defined as valid, and any other status as
not valid - the query format handles it:
XpoLog Query:
* in log.access | count | group by status in query format ("status=200","VALID","status != 200","NOT VALID")
Result:
geoip
Synopsis
A display function that groups result events according to elements (country, country code, city, region) extracted from the IP address in one or more of its columns.
Syntax
geoip ([IP_Column_Name]) group by [country,country code,city,region]
Required Arguments
IP_column_name
Syntax:
Description: The name of the column header that has IP address values
country, country code, city, and/or region
Description: The extracted part of the IP address according to which to group the results.
Optional Arguments
None
Description
For each event that has the specified IP_address_column_name with an IP address value, extracts the country name, country code, city, and/or
region from the IP address, using an internal database, and then shows the result of performing a specific function on the search result events,
according to the country name, country code, city, and/or region, as required.
Examples
Example 1:
* in log.access | count | geoip client ip group by country,city | order by count desc
Creates a summary table of the count of all events in log access, grouped according to the country and the city within the country, both extracted
from the IP address in the client ip column. This table is ordered in descending order of the number of events in each city group.
group by
Synopsis
Groups events according to column values.
Syntax
group by [column_name] (in [Format_Type] format)(["Input_Unit"],)(["Output_Unit"]) (, [column_name] …)
Required Arguments
column_name
Syntax:
Description: The name of the column header according to which events are to be grouped.
Optional Arguments
column_name
Syntax:
Description: The name of additional column headers according to which events are to be sub-grouped. Column names should be comma
separated.
Format_Type
Syntax: number, simple, time, date, volume, regexp, expression, query, exception, replace, or useragentdetect
Description: The display format of the column header values in the summary table. See format.
Description
Creates a summary table that categorizes events according to their grouping. Must be preceded by a function.
Examples
Example 1:
* in log.access | count | group by url
Returns the number of events from each URL.
interval
Synopsis
Classifies the search query result events into time buckets of the specified time period.
Syntax
interval N [milliseconds, seconds, minutes, hours, days, weeks, months] starting TIME
Required Arguments
N
Syntax:
Description: The number of units of time into which to classify the search query result events
Unit of time
Syntax: milliseconds, seconds, minutes, hours, days, weeks, or months
Description: The unit of time into which to classify the search query result events
Optional Arguments
starting TIME
Syntax:
Description: The start time of the interval
Description
Classifies the search query results according to time period. Must be preceded by a function.
Examples
Example 1:
* in log.access | count | interval 1 day
From the events in access log, shows the number of events per day starting at 00:00:00.
Example 2:
* in log.access | count | interval 1 day starting 08:00:00
From the events in access log, shows the number of events per day starting at 08:00:00.
Example 3:
* in log.memoryUsage | avg usage | interval 50 milliseconds
From the events in memoryUsage log, shows the average of used memory (usage) in 50-millisecond intervals.
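The bucketing that interval performs can be sketched in Python (epoch-millisecond timestamps and the function name are assumptions for illustration):

```python
# Sketch of interval: classify event timestamps into fixed-size time buckets,
# counting events per bucket, with an optional bucket start offset.
def interval_counts(timestamps, bucket_ms, start_ms=0):
    counts = {}
    for t in timestamps:
        # floor each timestamp to the start of its bucket
        bucket = start_ms + ((t - start_ms) // bucket_ms) * bucket_ms
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

hour = 3600 * 1000
print(interval_counts([0, 1000, hour + 5, 2 * hour], hour))
# {0: 2, 3600000: 1, 7200000: 1}
```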
is
Synopsis
Filters (i.e. narrows) the search results based on a time range.
Syntax
TIME_UNIT is START_TIME-END_TIME
Required Arguments
TIME_UNIT
Syntax: time, hour, minute, second, day of week, or day of month
Description: The time unit of the time range
START_TIME
Syntax:
Description: The start time of the time range
END_TIME
Syntax:
Description: The end time of the time range
Optional Arguments
None
Description
Filters the search results based on a specific time range. The "time is" search query can be built only by using the simple search syntax (see
Performing a Simple Search)
Examples
Example 1:
time is 8-16 in log.access
From the events in access log, shows the events starting at 8 and ending at 16.
Example 2:
hour is 10-14 in log.access
From the events in access log, shows the events starting at 10 and ending at 14.
Example 3:
day of week is 1-3 in log.access
From the events in access log, shows the events starting at Sunday and ending at Tuesday.
Example 4:
day of month is 10-15 in log.access
From the events in access log, shows the events starting on the 10th and ending on the 15th.
last
Synopsis
Used to display the last specified number of events resulting from a Simple Search, or the last specified number of summary table entries
resulting from a Complex Search.
Syntax
last [number_of_results] for each [group]
Required Arguments
number_of_results
Syntax:
Description: The number of last search results to display
Optional Arguments
for each group
Syntax: for each
Description: The column name for which the last specified number of results is displayed per value.
Description
When used immediately following a Simple Search query, returns the specified number of last events resulting from the search. When used
immediately following a Complex Search query, returns the specified number of last entries from the summary table resulting from the search.
Examples
Example 1:
* in log.access | last 91
Returns the last 91 events from access log.
Example 2:
http in log.iis log| max time-taken | group by c-ip | last 3
Returns the max time-taken value from events in log.iis log having http in their column values, for the last three c-ip values only.
Example 3:
error in log.xpologlog | count | group by class, method | last 2 for each class
Returns the 2 methods that appeared least in each class in log.xpologlog log having error in their column values.
Example 4:
error in log.xpologlog | count | group by message | interval 1 hour | last 2 for each interval
Returns the 2 messages that appeared least in each 1-hour interval in log.xpologlog log having error in their column values.
list
Synopsis
Displays a list of all values under the specified column(s) as they appear in the log(s).
Syntax
list [column_name]
Required Arguments
column_name
Syntax:
Description: The name of a column header whose values should be listed
Optional Arguments
None
Description
Displays a list of all values under the specified column(s) as they appear in the log(s).
Examples
Example 1:
* in log.application | list event
Returns a list of all values under the column Event in the log Application
Example 2:
* in log.application | list event, type
Returns a list of all values under the columns Event and Type in the log Application
max
Synopsis
Calculates the maximum of the values in a specified column in the search query results.
Syntax
max [column_name]
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
Optional Arguments
None
Description
From all the search query results, returns the maximum value in the specified column_name.
Examples
Example 1:
* in log.access | max Bytes Sent
Returns the maximum value of the column Bytes Sent in the events from access log.
Example 2:
http in log.iis log| max time-taken | group by c-ip
From the events from log.iis log that have the text http in their column values, finds and returns the maximum value in the time-taken column
per each c-ip column value.
maxif
Synopsis
Calculates the maximum of the values in a specified column in the search query results based on a query to be executed on the record.
Syntax
maxif [column_name] "[search_query]"
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
search_query
Syntax:
Description: The search query to be executed on the record
Optional Arguments
None
Description
From all the search query results matching the specified search_query, returns the maximum value in the specified column_name.
Examples
Example 1:
* in log.access | maxif Bytes Sent "status=200"
Returns the maximum value of the column Bytes Sent in the events from access log only if the value of column status is 200.
Example 2:
* in log.iis log | maxif time-taken "cs-host contains http" | group by c-ip
From the events from log.iis log that have the text http in their cs-host column, finds and returns the maximum value in the time-taken column
per each c-ip column value.
min
Synopsis
Calculates the minimum of the values in a specified column in the search query results.
Syntax
min [column_name]
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
Optional Arguments
None
Description
From all the search query results, returns the minimum value in the specified column_name.
Examples
Example 1:
* in log.access | min Bytes Sent
Returns the minimum value of the column Bytes Sent in the events from access log.
Example 2:
http in log.iis log| min time-taken | group by c-ip
From the events from log.iis log that have the text http in their column values, finds and returns the minimum value in the time-taken column per
each c-ip column value.
minif
Synopsis
Calculates the minimum of the values in a specified column in the search query results based on a query to be executed on the record.
Syntax
minif [column_name] "[search_query]"
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
search_query
Syntax:
Description: The search query to be executed on the record
Optional Arguments
None
Description
From all the search query results matching the specified search_query, returns the minimum value in the specified column_name.
Examples
Example 1:
* in log.access | minif Bytes Sent "status=200"
Returns the minimum value of the column Bytes Sent in the events from access log only if the value of column status is 200.
Example 2:
* in log.iis log | minif time-taken "cs-host contains http" | group by c-ip
From the events from log.iis log that have the text http in their cs-host column, finds and returns the minimum value in the time-taken column
per each c-ip column value.
order by
Synopsis
Orders the complex search results according to the specified column and in the specified direction.
Syntax
order by [Result_Column_Name] [asc,desc]
Required Arguments
Result_Column_Name
Syntax: character string
Description: The column name according to which the complex search results are to be ordered.
Optional Arguments
asc,desc
Description: Indicates the direction of the ordering of the complex search results - in ascending or descending order of the column name value.
Description
Orders the complex search results according to the specified column in the specified direction - ascending or descending order. If no direction is
specified, orders in ascending order.
Examples
Example 1:
* in log.access | count,start time, end time, time | group by client ip | order by time desc
Calculates the count, start time, end time, and time of the events in log access, groups the events by client ip, and displays them in
descending order of the time.
percent
Synopsis
A function that returns the percentage of search result events.
Syntax
percent
Required Arguments
None
Optional Arguments
None
Description
When used following the initial simple search query, returns the percentage of events resulting from the search. When used iteratively, calculates
the percentage of results returned from the complex search preceding the pipe.
Examples
Example 1:
* in log.access | percent
Always returns 100%
Example 2:
* in log.application | percent | group by event | order by percent
Returns the percentage of each event in the log application
percentile
Synopsis
Calculates the percentile of the values in a specified column of the search query results.
Syntax
percentile [percentage_value]
Required Arguments
percentage_value
Syntax:
Description: The value of a percentage
Optional Arguments
None
Description
A percentile is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations falls.
Examples
Example 1:
* in log.system audit | order by process time (ms) | percentile 95
From the events in system audit log, returns the 95th percentile of the values in the process time (ms) column.
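As an illustration, a nearest-rank percentile can be computed as follows (XpoLog's exact percentile method is not documented here, so this sketch is an approximation):

```python
import math

# Nearest-rank percentile: the smallest value such that at least p percent
# of the observations are at or below it.
def percentile(values, p):
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

print(percentile(range(1, 101), 95))  # 95
```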
region
Synopsis
Displays the region names extracted from the IP address column in the search result events.
Syntax
region [IP_address_column_name]
Required Arguments
IP_address_column_name
Syntax:
Description: The name of the column header that has IP address values.
Optional Arguments
None
Description
For each event that has the specified IP_address_column_name with an IP address value, extracts the region name from the IP address, using
an internal database.
Examples
Example 1:
* in log.access | region IPaddress1
For each event in log access, extracts the region name from the IP address in column IPaddress1.
standard deviation
Synopsis
Calculates the standard deviation of the values in a specified column of the search query results.
Syntax
stdev [COLUMN_NAME]
Required Arguments
COLUMN_NAME
Syntax:
Description: The name of a column header that has numeric values
Optional Arguments
None
Description
Shows how much variation or dispersion from the average exists, calculated on the values from a specific log column. A low standard deviation
indicates that the data points tend to be very close to the mean (also called expected value); a high standard deviation indicates that the data
points are spread out over a large range of values.
Examples
Example 1:
* in log.access | stdev Bytes-Sent
Calculates the standard deviation of the values under the column Bytes-Sent in the access log.
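For illustration, the population standard deviation can be computed as follows (whether XpoLog uses the population or sample formula is not stated here):

```python
import math

# Population standard deviation: square root of the mean squared
# deviation from the mean.
def stdev(values):
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(variance)

print(stdev([2, 4, 4, 4, 5, 5, 7, 9]))  # 2.0
```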
start time
Synopsis
Displays the time of the first event in the group.
Syntax
start time
Required Arguments
None
Optional Arguments
None
Description
Shows the unformatted time of the first event in the group resulting from the search query. Should be formatted and displayed in date format.
Examples
Example 1:
* in log.access | start time | display start time in date format
Display the time of the first event in date format.
sum
Synopsis
Displays the sum of the values in a specified column in the search query results.
Syntax
sum [column_name]
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
Optional Arguments
None
Description
For each event in the search query results that has the specified column_name with a numeric value, adds the value to the cumulative sum,
and when it reaches the last event, displays the sum.
Examples
Example 1:
* in log.access | sum Bytes Sent
Returns the sum of the values in column Bytes Sent in the events from access log.
Example 2:
http in log.iis log| sum time-taken | group by c-ip
From the events from log.iis log that have the text http in their column values, calculates the sum of the values in the time-taken column per
each c-ip column value.
sumif
Synopsis
Displays the sum of the values in a specified column in the search query results based on a query to be executed on the record.
Syntax
sumif [column_name] "[search_query]"
Required Arguments
column_name
Syntax:
Description: The name of a column header that has numeric values
search_query
Syntax:
Description: The search query to be executed on the record
Optional Arguments
None
Description
For each event in the search query results that has the specified column_name with a numeric value, adds the value to the cumulative sum,
and when it reaches the last event, displays the sum.
Examples
Example 1:
* in log.access | sumif Bytes Sent "status=200"
Returns the sum of the values in column Bytes Sent in the events from access log only if the value of column status is 200.
Example 2:
* in log.iis log| sumif time-taken "cs-host contains http" | group by c-ip
From the events from log.iis log that have the text http in their cs-host column, calculates the sum of the values in the time-taken column per
each c-ip column value.
time
Synopsis
Displays the time between the first and last event in a group.
Syntax
time
Required Arguments
None
Optional Arguments
Date_Column_Name
Syntax:
Description: The name of a specific log column which contains a date/timestamp.
Date_Column_Format
Syntax:
Description: The date format of the column Date_Column_Name.
Description
Shows the unformatted amount of time between the first and last event in a group - calculated by default based on the log event's main date field.
Should be formatted and displayed in time format.
Note: The default time is counted in milliseconds.
Examples
Example 1:
* in log.access | time | display time in time format
Displays the time between the first and last event in log access in time format.
Example 2:
10.10.10.10 in log.access | time | display time in time format
Displays the time between the first and last event with client IP 10.10.10.10 in log access in time format.
Example 3:
* in log.access | time originalTimeStamp ("MM/dd/yyyy HH:mm:ss.SSS") | display time in time format
Displays the time between the first and last event in log access based on the values of the specified log column originalTimeStamp (not the
event's main date field), which has the specified date format "MM/dd/yyyy HH:mm:ss.SSS", in time format.
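The time value can also be combined with grouping and filtering in a pipeline. The following sketch is illustrative only (the 60000-millisecond threshold and the use of time inside a where clause are assumptions, made by analogy with the where examples in this reference):
Example 4:
* in log.access | time | group by c-ip | where time > 60000
Displays, per client IP, the time between its first and last event, keeping only those IPs whose time span exceeds 60000 milliseconds (one minute).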
transaction
Synopsis
Displays a flow of correlated events from a single or multiple log sources.
Syntax
transaction ("STEP_I_QUERY", "CORRELATION_I_ID", "STEP_I_NAME"->"STEP_II_QUERY", "CORRELATION_II_ID",
"STEP_II_NAME"->...->"STEP_N_QUERY", "CORRELATION_N_ID", "STEP_N_NAME")
Required Arguments
Each step is represented by the following 3 arguments:
STEP_I_QUERY = a search query that isolates all relevant events of a transaction step
CORRELATION_I_ID = the log field which is used to correlate an event of a step with the next step's event(s)
STEP_I_NAME = the name of the transaction step that will be presented in the results
Optional Arguments
Transaction Time / STEP_NAME->STEP_NAME Time (calculates the time of a transaction or between 2 steps of the transaction)
transaction eventscount / STEP_NAME eventscount (calculates the events count of a transaction / step of the transaction)
limit time to X hours (limits the maximal time allowed from first event to last event in a given transaction - only events within the time limitation will
be correlated)
use unique key (events used to open a transaction with the same key will be joined to the same transaction)
transaction fullstate = OPEN/CLOSE/PARTIAL CLOSE (OPEN = transactions that don't contain the closing events, CLOSE = transactions that
contain closing events, PARTIAL CLOSE = transactions which were closed because of the time limitation specified by "limit time to X hours" or
by closing events, but are missing some events internally)
Description
Correlates events from one or more log sources into a transaction flow, based on the step queries and correlation IDs, and displays the mapped transactions.
Note: By default, the Search colors matching events; steps which could not be correlated are grayed out. The time of each specific transaction
(marked in green = faster than or equal to the average time, marked in red = slower than the average time) and the time taken between each two
transaction steps is presented on the mapped transactions.
To see a transaction's events, click the 'Show All Events' link.
Examples
Example 1:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed")
Displays the correlated transaction from the log ORDER_FLOW based on the correlation ID (log field) - TXID.
Example 2:
* in log.LOG_1, LOG_2, LOG_3 | transaction ("start transaction in log.LOG_1","TXID","Start"->"processing
transaction in log.LOG_2","TXID","Processing"->"transaction completed in log.LOG_3","TXID","End")
Displays the correlated transaction from the logs LOG_1, LOG_2, LOG_3 based on the correlation IDs (log fields) - TXID.
Example 3:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed") | avg transaction Time, max transaction Time, min
transaction Time | display avg as Average Tx Time in time format, min as Fastest Tx Time in time format,
max as Slowest Tx Time in time format
Displays the average, minimum and maximum transaction time.
Example 4:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed") | count | interval 5 minute | show count as
Transactions Over Time
Displays the number of transactions that were correlated, in 5-minute time buckets.
Example 5:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed") | where transaction time > 500 | order by
transaction time desc
Displays all transactions whose total completion time exceeded 500 milliseconds (results are sorted in descending order).
Example 6:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed") | avg request->authorization time, max
request->authorization time, min request->authorization time | display avg as Average Request>Authorization
in time format, max as Slowest Request>Authorization in time format, min as Fastest Request>Authorization
in time format
Displays the average, minimum, and maximum time taken between the transaction's steps Request and Authorization.
Example 7:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed") | where transaction contains exception
Displays only transactions that contain exception in one or more of their log events.
Example 8:
* in log.ORDER_FLOW | transaction
("requesting","TXID","Request"->"authorized","TXID","Authorization"->"dispensing","TXID","Dispense"->"ready
transaction","TXID","Ready"->"end of","TXID","Completed") | avg authorization eventscount
Displays the average number of events in the 'Authorization' transaction step.
Example 9:
* in log.ops | transaction ("start","TXID_1+TXID_2","Start"->"end","TXID_1+TXID_2","End")
Displays transactions which were correlated by using a combination of 2 log fields TXID_1 and TXID_2.
Example 10:
* in log.ORDER_FLOW | transaction ("requesting","TXID","Request"->"end of","TXID","Completed") use unique
key
Displays the correlated transaction from the log ORDER_FLOW based on the correlation ID (log field) - TXID (events used to open a transaction
with the same key will be joined to the same transaction).
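The optional arguments can also be combined in a single pipeline. The following sketch is illustrative only (applying limit time together with a where filter on transaction fullstate is an assumption based on the optional-arguments list above):
Example 11:
* in log.ORDER_FLOW | transaction ("requesting","TXID","Request"->"end of","TXID","Completed") limit time
to 2 hours | where transaction fullstate = CLOSE
Displays only transactions whose events were all correlated within a 2-hour window and that contain their closing events.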
where
Synopsis
Filters (i.e. narrows) the complex search results to display only those column values that meet specific criteria.
Syntax
where [SEARCH QUERY on column results]
Required Arguments
SEARCH QUERY on column results
Syntax: A search query (simple or complex)
Description: Runs a search query on the complex search summary table, to extract and display values that meet specific criteria.
Optional Arguments
None
Description
Filters the summary table resulting from a complex search, to extract and display only those values that meet specific criteria defined in the
"where" search query. The "where" search query can be built using the simple search syntax (see Performing a Simple Search) or complex
search syntax (see Complex Search Syntax Reference).
Examples
Example 1:
in log.access log | count | group by status | order by count desc | where count < 500
Shows only those statuses in the summary table which have less than 500 events.
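The "where" filter is not limited to count; it can follow other aggregation columns as well. The following sketch is illustrative only (referencing the sum column in a where clause is an assumption, made by analogy with Example 1):
Example 2:
* in log.access | sum Bytes Sent | group by status | where sum > 1000000
Shows only those statuses in the summary table whose total Bytes Sent exceeds 1,000,000.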
Refining a Search
You can refine (augment) a search using the following two methods:
Adding a column value of an event resulting from the original search to the search query (see Refining a Search Based on Events).
Adding a problem detected in the original search, to the search query (see Augmenting a Search with Detected Problems).
Refining a Search Based on Events
In the Search Results Area resulting from a Simple Search, you can refine the search with column values from the resulting events.
To refine your search based on an event:
1. In the Search Results Area, hover over a column value of an event, and after it is highlighted, click it.
A list of operations opens.
2. Select one of the following operations:
Add to Search (AND) – To search for events matching the current search query AND the highlighted value in the event.
Add to Search (OR) – To search for events matching the current search query OR the highlighted value in the event.
Exclude from Search – To search for events matching the current search query, with the exception of those events that have the
highlighted value in the event (AND NOT).
Replace Search – To replace the search query with the highlighted value in the event.
In the first three operations, the highlighted value is added to the current search query with AND, OR, or AND NOT, respectively. In the
last case, the search query is replaced with the highlighted value.
The new search runs.
The added search condition appears under Active Filters in the Augmented Search Pane.
3. Repeat steps 1 and 2 for all event values that you want to include in your search query.
At any point, you can restore the original query, or remove from the query a filter based on an event, by removing it from the Active Filters
list (see Managing Active Filters).
Augmenting a Search with Detected Problems
As there are many logs with myriads of information, it may be difficult for a user to decide what to search for.
XpoLog Search assists the user in deciding what to search for by displaying for each time period, problems that occurred at that time, along with
the severity of these problems. Although these problems are not errors, they can in fact be the root cause of an error, so that adding them to a
search can be very beneficial.
The user can decide whether to show only predefined problems, autodetected problems, or all problems (see Selecting Augmented Layers for full
details).
Note: The detected problems are not related to the search query.
Dots of varying colors and sizes are displayed on the time axis of the search, representing the problems detected at this time.
The size of a dot is relative to the number of problems found on the time axis. A larger dot represents more detected problems; a smaller dot,
fewer.
The color of the dot indicates the severity of the most serious problem found at that time, as follows:
Yellow – low severity
Orange – medium severity
Red – high severity
Hovering on a dot opens a list of suggestions, from which you can drill down to see the events associated with it. It is recommended to hover on
the dot with the highest-severity problems.
The user can augment a search with a problem from the list by clicking it. The selected problem is then added to the search query with the logical
OR operator.
To augment your search with a detected problem:
1. In the Augmented Layers selection box, select the problems that the system should show on the time axis: Predefined, Autodetected,
or all problems.
2. Hover on a dot on the time axis, and select a problem from the list of problems.
The problem is appended to the search query with the logical OR operator.
Note: If there is no search query, it is added to the search query as is.
The added detected problem appears under Active Filters in the Augmented Search Pane.
3. Repeat step 2 to add an additional problem to your search.
At any point, you can restore the original query, or remove from the query a filter based on a problem, by removing it from the Active
Filters list (see Managing Active Filters).
Selecting Augmented Layers
The time axis of the main graph of a search query displays problems detected by XpoLog at any specific time.
You can define the types of problems that should be presented on the time axis of the XpoLog Search graph:
Predefined – user-defined problems
Autodetected – problems automatically detected by XpoLog
Both Predefined and Autodetected problems (the default)
To select the augmented layers of the search:
In the Graph toolbar, in the Augmented Layers selection box, click Predefined, Autodetected, or both.
The dots on the time axis of the graph refresh according to your selection.
Managing Active Filters
In the Augmented Search Pane, under Active Filters, are listed all the filters that were added to your original search query, based on an event or a
detected problem.
You can remove a filter from the search query, by removing it from the Active Filters list, and you can also restore the original search query.
Removing a Filter From the Search Query
Removing the filter from the Active Filters list removes it from the search query, and automatically runs the search with the resulting query.
To remove a filter from the search query:
In the Augmented Search pane, under Active Filters, click the Remove Filter icon adjacent to the filter that you want to remove.
The filter is removed from the Active Filters list, and the resulting search query runs.
Resetting the Search Query
You can restore a search query to its original state, regardless of the number of filters that have been added to it.
To reset a search query:
In the Augmented Search pane, under Active Filters, click Reset.
The Active Filters list closes, the original search query is restored, and automatically runs.
XpoLog Analytics
XpoLog provides users the ability to initiate their own investigation of problems in the system by using the Search console, Monitors, and
Dashboards. However, users don't always know what to search for, and may spend valuable time investigating huge amounts of data from
multiple sources while troubleshooting a problem. XpoLog Analytics is a proactive console that helps users see all problems from the log sources
in a unified console over time.
XpoLog Analytics is an automatic Log Analysis and Monitoring console, which automatically scans all logs that enter the system for errors, risks,
statistical problems, and predefined rules. Its Problem Analysis dashboard generates dynamic reports on the detected errors, maps problems
over time, and tags them according to their severity. From the Problems Analysis dashboard, users have immediate access to the analysis
reports, with easy navigation and zoom-in capabilities to the relevant log data to accelerate problem isolation.
XpoLog's Analytics console analyzes log data for the following two types of problems:
Predefined Errors – Detects problems that have been predefined as a saved search. Severity can be assigned to saved searches in
XpoLog Search. Once a severity is assigned to a saved search, it is presented in the Analytics console as a predefined problem.
Auto-Detected Errors – Uses Semantic Content Analysis. Based on semantic analysis of the logs' contents and a predefined
knowledgebase, XpoLog Analytics detects in the logs thousands of errors and events that contain information related to a fault (for
example, events containing the word failure or error). Analytics immediately generates a very high percentage of the problems in the logs
of any application, without any configuration.
If activated, Servers Metrics Analysis displays the CPU, memory, and disk problems on the source servers from which the logs originated. The
problems definition for metrics can be easily customized to meet the environmental constraints.
In addition, the Analytics console runs statistical analysis on multiple use cases to identify unusual behavior in the Application logs.
Problems such as high/low logging activity, applications/servers that stop logging normally, or an IP that extensively calls the same URL are
captured and presented automatically.
Accessing the Analytics Console
You can access Analytics from the XpoLog Center homepage or from any page in the application.
To access the Analytics console:
In the homepage, in the navigation pane under Quick Actions, click the Analytics icon
OR
From any XpoLog page, in the Tab Bar, click the Analytics tab.
The Analytics console opens, displaying the analysis of the logs in the system. See Analytics User Interface.
Analytics User Interface
Analytics is equipped with a user-friendly graphical user interface (GUI), which provides a complete set of tools to analyze detected problems.
The Analytics user interface includes the following main elements:
Element Description
Tab Bar – On the left side, XpoLog, XpoSearch, and Analytics tabs. On the right side, an XPLG icon. For details, see Tab Bar.
Main Menu – On the left side, includes the following menu items and submenus for performing actions in Analytics: Dashboards,
Administration, Indexing. On the right side, has a Home button for navigating to the XpoLog homepage, and in organizations where security is
activated, also displays the Username and a Logout button.
View Type Panel – Provides three buttons for selecting the graph view type: Folders and Logs view, Applications view, or Servers view. See
View Type Panel.
Graph Display and Time Control Panel – The graph display area enables filtering the graph to show analysis for logs under specific Folders,
Applications, or Servers. It also provides buttons for zooming into or out of the graph and viewing the default view, and has a toolbar with
buttons for hiding metrics, summary view, split view, risk view, and total view. The time control area enables users to select the time frame for
which the Analytics analysis is presented; this time frame can be changed easily at any point.
Problems Graph – Displays a graphic distribution of the analysis.
Problems Summary Table – Presents detailed information of the data displayed in the Problems Graph. For each member in the Folders and
Logs, Applications, or Servers view (according to what you selected), a summary of the analysis is displayed, each relevant to the specified
time frame.
Most Severe Problems Table – For each selected view type and time frame, Analytics presents the top 10 problems (with the highest severity)
that were found in the analysis.
View Type Panel
The View Type panel user interface includes the following buttons for selecting the graph view:
Button Description
Folders and Logs view – Clicking this button displays Analytics
on the logs in the Folders and Logs structure, as defined in
XpoLog Manager.
Applications view – Clicking this button displays Analytics on
logs under the context of their associated applications.
Servers view – Clicking this button displays Analytics on logs
under the context of the servers on which they are located.
Under this view, server metrics analysis is also available.
Graph Display and Time Control Panel
The Graph Display and Time Control Panel includes:
Time control area – enables users to select the time frame for which Analytics is presented; this time frame can be easily changed at any
point.
Graph toolbar – Buttons for selecting the type of analysis displayed by the Problems Graph.
The Graph Display and Time Control Panel includes the following elements:
Element Description
Default view button
Clicking this button displays the graph in default view, i.e. in
Folders and Logs Total Summary view for the last seven days.
Title Displays the contents of the graph, according to the selected
view type, filter entities, and display view.
Zoom-Out button Zoom-In button
Time Period Defines the time period during which Analytics is to be run.
Selectable time periods include:
Predefined time periods: Last hour, Last 3 hours, Last 12
hours, Last 24 hours, Last 3 days, Last 7 days, Last 2
weeks, Last 3 weeks, Last 4 weeks, Last 3 months, Last 6
months, Last 12 months
Customized time periods: Custom.
Start and End Dates and Times Displays the start and end dates and times of Analytics,
according to the default or selected time period. Clicking this box
opens up calendars for selecting a customized time period (see
Customizing the Analytics Time Period).
Open Calendar button Clicking this button opens up calendars
for selecting a customized time period (see Customizing the
Analytics Time Period).
Filter Entities / Clear Filter Clicking the Filter Entities link opens a filter for filtering Analytics
according to specific folders and logs, applications, or
servers. Clicking the Clear Filter link clears the checkboxes in
the filter so that you can select new selection criteria.
Graph Toolbar This toolbar of buttons above the Problems Graph includes
buttons for adjusting the display of the Problems Graph (see
below).
Graph Toolbar
The Graph Toolbar includes the following buttons:
Button Description
Total View button.
Clicking this button displays a total view of the problem events.
The information displayed in the graph can be either of the
following, depending on whether the Summary View or Split View
button is selected:
Total Summary: When the Summary View button is also
selected; in this case, the graph presents the total number of
events on all the selected logs
Total Split: When the Split View button is also selected; in
this case, the graph presents each of the members (based
on the view: logs/applications/servers) individually with its
own problems mapped over time. In the background, the
total summary graph is presented.
On top of either graph, you can see measurement points in
different colors that indicate problems of varying severities:
Green: no problems were found at that time.
Yellow: problems of low severity (at most) were found at
that time.
Orange: problems of medium severity (at most) were found
at that time.
Red: problems of high severity were found at that time.
Note: A severity level is assigned to all problems automatically
by Analytics. On the other hand, users determine the severity
level of predefined problems and server metrics.
Hovering the mouse over a measurement point presents a
summary of all the problems that were detected at that time.
Risk View button.
Clicking this button changes the graph to bars format. In the
background, the total summary graph is presented. The
information displayed in the bar graph can be either of the
following, depending on whether the Summary View or Split View
button is selected:
Risk Summary: When the Summary View button is also
selected. For each time slot, the bar level represents the
maximal severity that was found at that time.
Risk Split: When the Split View button is also selected. For
each time slot, each bar level represents the maximal
severity that was found at that time for a specific member
(based on the view – logs/applications/servers).
Hovering the mouse over a bar presents a summary of all the
problems that were detected at that time.
Bar Graph icon.
Clicking this icon displays a bar graph of the event distribution.
Toggle button for Show Metrics / Hide Metrics.
Clicking this button shows the server metrics below the Problems
Graph. Clicking this button once again hides the server metrics.
Split View button.
Clicking this button displays the event distribution originating
from each log, separately.
Summary View button.
Clicking this button displays a summary view of the event
distribution from all the logs.
Problems Graph
The Problems Graph Area user interface includes the following elements:
Element Description
Zoom-in button, found above the graph in each timeslot, for
zooming into the selected timeslot. After zooming in, provides a
zoom-out button for returning to the previous zoom level.
Graph A graph with the Analytics timeline in the x-axis, and the number
of events in the y-axis.
The graph shows the distribution of events over the selected
timeline.
On the graph there are measurement points that indicate
problems. The severity of these problems is according to the
following color-coding:
Green – no problems were found at that time.
Yellow – problems of low severity (at most) were found at
that time.
Orange – problems of medium severity (at most) were found
at that time.
Red – problems of high severity were found at that time.
Note: The severity level is assigned to all problems automatically
by Analytics, except for predefined problems and server metrics,
which are determined by users.
Hovering the mouse over a measurement point displays a
summary of all the problems that were detected at that time.
Metrics Displays Metrics information (CPU/Memory), provided that Show
Metrics is activated. This feature is deprecated as of version
6.4808.
Previous/Next timeslot buttons.
Clicking these buttons, located at the left and right edges below
the graph, displays the distribution of events in the previous/next
timeslot.
Problems Summary Area
The Problems Summary Area includes the following elements:
Element Description
Problems Summary Table Toolbar – Enables viewing the table in flat or hierarchic view, as well as the parent of an item in the table, if it
exists.
Problems Summary Table – Presents detailed information of the data displayed in the Problems Graph. For each member (Folders and
Logs/Application/Server), a summary of the analysis is displayed (each relevant to the specified time frame).
Search and Navigation Bar – Enables searching for a specific term, and in cases where there is more than one page of problems, navigating
to a different page of problems.
Problems Summary Table Toolbar
The Problems Summary Table Toolbar includes the following button:
Button Description
View Parent button; this button appears in the toolbar after
performing a drilldown on a folder/log/application/server. Enables
viewing the analytics before the drilldown.
Flat View button; presents the list of logs without their
hierarchical context.
Hierarchic View button (default); presents the
folders/logs/applications/servers under their hierarchical context,
i.e. parent folder/application/server on the top view with drilldown
options.
Problems Summary Table
The Problems Summary Table presents detailed information of the data displayed in the Problems Graph.
For each member (Folders and Logs/Application/Server), a summary of the analysis is displayed (each relevant to the specified time frame), as
described in the following table:
Column Description
Name – The name of the folder/log/application/server that contains the problem. Clicking the name performs the same function as the
Drill-down button (see below).
Logs Status – The maximal severity problem found in the logs under the folder/log/application/server.
Server Metrics – Shows a C, M, and/or D icon to indicate that there are CPU, Memory, and/or Disk Space metrics available for the server.
Hovering over any of these icons shows a table of the count for each of these metrics.
Logs Events – The total number of events found in the logs under the folder/log/application/server.
Logs Problems – The total number of problems found in the logs under the folder/log/application/server.
Predefined – The number of predefined problems in the logs under the folder/log/application/server.
Autodetected – The number of automatically detected problems in the logs under the folder/log/application/server.
% of Problems – The percentage of the total number of problems currently presented in the console.
Select/Unselect toggle button; clicking this button displays this member's analysis individually on top of the Problems Graph;
clicking this button again hides it.
Drill-down / Search in XpoSearch button; clicking this button drills down and displays the analysis only for a specific member
and its submembers, down to the problem level, at which point clicking this button searches for the problem in XpoSearch.
Search and Navigation Bar
The Search and Navigation Bar includes the following elements:
Element Description
Search – Search button; clicking this button opens a textbox for typing a
specific term to find in Analytics under your current time frame
(error code, exception, user, etc.). Analytics refreshes itself with
the search results. After the search has completed, you can press
the Clear link to return to the previous Analytics results.
Paging – The table presents the analysis of
10 folders/logs/applications/servers per page. If there are more
than 10 folders/logs/applications/servers, you can use the
paging to navigate through the entire list.
Most Severe Problems Table
For each selected view type and time frame, Analytics presents the top 10 problems (with the highest severity) that were found in the analysis:
For each detected problem there is an option to modify its severity or exclude it from the analysis directly in the console.
The Most Severe Problems Table includes the following columns:
Column Description
Source The source folder/log/application/server that contains the
problem
Problem A short description of the problem; clicking the problem has the
same function as clicking the Search in XpoSearch button (see
below).
Type The problem type; can be Predefined, Autodetected, Statistical,
or Metrics
# of Occurrences The number of occurrences of this problem
Severity The severity assigned to this problem
Search in XpoSearch button; clicking this button enables
searching for the event in the log.
Customize the Problem Severity button; clicking this button
enables the user to change the problem severity, or exclude the
problem from the analysis. Changes take effect only in future
analyses.
Selecting the Graph View Type
You can view the Analytics graph distribution according to any of the following three views:
Folders and Logs – Logs are presented under their Folders and Logs structure, as defined in XpoLog Manager.
Applications – Logs are presented under the context of their associated applications.
Servers – Logs are presented under the context of the servers on which they reside. Under this view, server metrics analysis is also
available.
Hovering on any point in the graph displays the distribution of the problems in the logs that reside under the selected view type.
To select the view type:
In the View Type panel (see its user interface in View Type Panel), select one of the following three buttons: Folders and Logs,
Applications, or Servers.
The graph is refreshed according to the View Type selection.
Selecting the Analytics Time Period
Time plays a very important role in the examination of the cause of a system problem.
Although Analytics is automatically performed on the log events from the last seven days, you can set Analytics to run on events that occurred
at any time.
You can select a predefined time period, or customize the time period by selecting the start and end dates and times of the time period.
To select the time period of Analytics:
1. In the Graph Display and Time Control Panel (see its user interface in Graph Display and Time Control Panel), in the Analytics Time
Range textbox, click the down arrow.
A list of selectable time periods opens.
2. From the list of time periods, select a predefined time period (Last hour, Last 3 hours, Last 12 hours, Last 24 hours, Last 3 days,
Last 7 days, Last 2 weeks, Last 3 weeks, Last 4 weeks, Last 3 months, Last 6 months, or Last 12 months), or select Custom to
specify your own time period (see Customizing the Analytics Time Period for a detailed explanation on customizing the time period).
The selected time period is displayed in the textbox, and Analytics runs on this time period.
Customizing the Analytics Time Period
You can customize the time period of Analytics, by selecting from calendars the beginning and end dates and times of the time period.
To customize the time period:
1. In the Graph Display and Time Control Panel, in the Time Period selection box, select Custom
OR
Click the Open Calendar icon.
Two calendars – one for the start date and one for the end date of the previous Analytics time period – are displayed.
2. In the left calendar, repeatedly click the arrows at the left and right of the month name, to scroll to previous/following months, until you
reach the desired month of the start date. Then, in the calendar, click the desired start date.
The day is highlighted in the calendar, and is displayed below the calendar in Start Date.
3. In Start Time, type the time of day that the time period begins.
4. In the right calendar, repeatedly click the arrows at the left and right of the month name, to scroll to previous/following months, until you
reach the desired month of the end date. Then, in the calendar, click the desired end date.
The day is highlighted in the calendar, and is displayed below the calendar in End Date.
5. In End Time, type the time of day that the time period ends.
6. Click Go.
Analytics runs on the selected time period, returning the Analytics results for the customized time period.
Note: The Time Period box displays Custom.
Displaying the Default View
By default, any log added to XpoLog is analyzed automatically. Analytics refreshes its analysis every five minutes (the predefined default), and
displays its results in the default view – analysis of all logs under Folders and Logs in Total Summary view, during the last 7 days, with Metrics
displayed. You can run Analytics for a different view type, view, and/or time interval, zoom in or out of time intervals, and hide metrics. At
any point, you can redisplay the default view.
To display the default view:
In the Graph Display and Time Control Panel, click the Default view button.
The Analytics results are displayed in default view – Folders and Logs Total Summary view for the last 7 days, and with Metrics.
Understanding the Analytics Problems Graph
Analytics returns a Problems Graph that shows the distribution of problems over time. You can determine the display mode and contents of the
graph. The graph has drilldown functionality, enabling you to zoom into the entire graph or into any timeslot, and run Analytics on the new time
period or timeslot. It also enables you to hover over a measurement point to see the source of problems and drill down to see the exact problems
in any log. You can also view the Analytics of the previous or next timeslot.
Defining the Problems Graph
XpoLog enables you to determine the contents and display of the Problems Graph using the buttons located on the Problems Graph toolbar – on
the left, the display control buttons: Total View and Risk View, and on the right, the content control buttons: Summary View and Split View.
A graph in Split View displays one line per source of problems in that timeslot, where each line represents the number of problems in
a single source of problems. Also, the Summary View is displayed in the background.
A graph in Summary View displays a single line to represent all problems from all problem sources.
A graph in Total View shows the distribution of problems in a line graph (the default). A line graph shows how the number of problems changes
from one point in time to the next.
A graph in Risk View shows the distribution of problems in a bar graph. The height of each bar is according to the number of problems that
occurred at the specific time. A bar does not appear at times when no problems were detected.
The Problems Graph can display four types of analyses, corresponding to the four possible combinations of display and content control buttons:
Total Summary
Total Split
Risk Summary
Risk Split
Total Summary
The Total Summary graph presents the total number of events in all the selected logs, with the top of the graph showing measurement
points of different colors that indicate the severity of the problems:
Green – No problems were found at that time.
Yellow – Most of the problems found at that time are of low severity.
Orange – Most of the problems found at that time are of medium severity.
Red – Problems of high severity were found at that time.
Note: Analytics automatically assigns a severity level to all problems, except predefined problems and server metrics, for which users determine
the severity level. Hovering over a measurement point displays a table for a single source of problems, unless the measurement point is the same
for more than one problem source, in which case the table shows the problem distribution for all the problem sources passing through that point.
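The color rules above can be sketched as a small function. This is an illustrative sketch only: the "mostly" threshold used below (a simple majority) is an assumption, and XpoLog's actual internal severity logic is not documented here.

```python
# Illustrative sketch of the measurement-point color rules described above.
# The majority threshold is an assumption; XpoLog's internal logic may differ.
def point_color(low: int, medium: int, high: int) -> str:
    total = low + medium + high
    if total == 0:
        return "green"   # no problems were found at that time
    if high > 0:
        return "red"     # problems of high severity were found
    if medium >= low:
        return "orange"  # most problems are of medium severity
    return "yellow"      # most problems are of low severity

print(point_color(0, 0, 0))  # green
print(point_color(5, 1, 0))  # yellow
```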
To display your graph in Total Summary View:
In the Problems Graph Toolbar, on the left side, click the Total View button, and on the right side, click the Summary View button.
Total Split
The Total Split graph presents each of the members (based on the view – logs/applications/servers) individually, with its own problems mapped
over time. In the background, the Total Summary graph is presented.
To display your graph in Total Split View:
In the Problems Graph Toolbar, on the left side, click the Total View button, and on the right side, click the Split View button.
Risk Summary
The Risk Summary graph is the same as the Total Summary Graph, but in bars format. For each time slot, the bar level represents the maximal
severity that was found at that time. In the background, the Total Summary graph is presented.
To display your graph in Risk Summary View:
In the Problems Graph Toolbar, on the left side, click the Risk View button, and on the right side, click the Summary View button.
Risk Split
The Risk Split graph is the same as the Total Split graph, but in bars format. For each time slot, each bar level represents the maximal severity
that was found at that time for a specific member (based on the view – logs/applications/servers). In the background, the Total Summary graph is
presented.
To display your graph in Risk Split View:
In the Problems Graph Toolbar, on the left side, click the Risk View button, and on the right side, click the Split View button.
Filtering the Problems Graph
By default, the Problems Graph shows the results of Analytics on all folders and logs, applications, or servers, depending on the view type that
you selected. You can show the Problems Graph on specific logs, applications, or servers, by using the Filter feature. You can also change the
View Type directly from the Filter feature.
To filter the entities in the Problems Graph:
1. In the Graph Display and Time Control panel, click the Filter Entities link.
A filter opens for selecting specific items under the selected view type - folders and logs, applications, or servers.
2. Select the checkboxes of the specific items for which you want to view the Problems Graph. You can also click the Open list icon to
select a different View Type.
3. Click the Go button.
The Problems Graph is generated for the selected entities.
Clearing the Filter
You can refresh the Problems Graph to display all the entities under the selected view type.
To remove the filter:
1. In the Graph Display and Time Control panel, click the Clear Filter link.
The Problems Graph is generated for all entities of the selected View Type.
Hiding/Showing Server Metrics
XpoLog measures CPU level, memory level, and disk usage on any server that contains logs that XpoLog analyzes. You can choose to show or
hide a graph of these server metrics below the Problems Graph, and in parallel, show or hide the Server Metrics links in the Problems Summary
table.
To show server metrics in Analytics:
In the Graph Display and Time Control panel, click the Show Metrics button.
The server metrics graph is displayed below the problems graph, and the Server Metrics column appears in the Problems Summary
table. The Show Metrics button toggles to a Hide Metrics button.
To hide server metrics in Analytics:
In the Graph Display and Time Control panel, click the Hide Metrics button.
The server metrics graph below the problems graph is hidden, and the Server Metrics column is hidden in the Problems Summary
table. The Hide Metrics button toggles to a Show Metrics button.
Zooming Into / Out of the Graph
You can zoom into the Analytics graph, so that you can see a more detailed breakdown of problems over a smaller period of time. Zooming into
the graph displays the Analytics graph for half the length of the previous time interval, showing more specific results for each unit of time.
You can also zoom out of the graph, so that you can see a breakdown of problems over a longer period of time. Zooming out displays the
Analytics graph for twice the length of the previous time interval, showing less specific results for each unit of time.
Zooming Into the Analytics Graph
To zoom into the Analytics graph:
In the Graph Display and Time Control Panel, click the Zoom-In button.
The graph is displayed for half of the previous time interval.
Zooming Out Of the Analytics Graph
To zoom out of the Analytics graph:
In the Graph Display and Time Control Panel, click the Zoom-Out button.
The graph is displayed for twice the previous time interval.
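The halving and doubling of the time interval described above can be sketched as follows (an illustrative sketch only; the interval arithmetic is taken from this section, everything else is assumed):

```python
# Illustrative sketch: zoom-in shows half of the previous time interval,
# zoom-out shows twice the previous interval, as described above.
def zoom_in(interval_minutes: float) -> float:
    return interval_minutes / 2

def zoom_out(interval_minutes: float) -> float:
    return interval_minutes * 2

# Starting from a 7-day (10080-minute) view:
print(zoom_in(10080))   # 5040.0 -> a 3.5-day view
print(zoom_out(5040))   # 10080  -> back to the 7-day view
```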
Zooming Into / Out of a Timeslot
You can zoom into a specific timeslot in your graph, so that you can see a more detailed breakdown of problems over a smaller period of time.
For example, in an Analytics graph that shows the distribution of problems for a period of seven days, you can zoom into any timeslot (day) to
focus on the distribution of problems during that day, and you can zoom in further to see the distribution of problems in a specific hour on that day.
At any point, you can zoom out repeatedly until you reach the graph resulting from the original time period.
Zooming Into a Timeslot
To zoom into a timeslot:
In the graph, in the timeslot which you want to zoom into, click the Zoom-In button.
The zoomed-in timeslot is subdivided into smaller timeslots. The Zoom-Out button appears, enabling you to zoom out to the previous
display. The time period of Analytics is automatically changed to Custom.
You can repeatedly click the Zoom-In button to see an increasingly more detailed distribution of the problems.
Zooming Out of a Timeslot
To zoom out of a timeslot:
In the graph, click the Zoom-Out button.
You can repeatedly click the Zoom-Out button until the graph is displayed for the original Analytics time period. At this point, the Zoom-Out button is no longer displayed.
Viewing the Previous/Next Timeslot
You can display, directly from the graph, a graphical representation of the problems in the previous or next timeslot.
To display the previous timeslot:
Below the graph, on the left, click the Previous Timeslot button.
The entire problems graph shifts to the left to display the previous timeslot.
To display the next timeslot:
Below the graph, on the right, click the Next Timeslot button.
The entire problems graph shifts to the right to display the next timeslot.
Viewing the Distribution of Problems
You can hover over any measurement point in the graph to view a table with the number of problems that were found under Folders and Logs,
Applications, or Servers (depending on the chosen View Type). From this table, you can drill down on a specific Folder or Log, Application log, or
Server log in that timeslot. You can keep drilling down until the actual problems are displayed in the table. At this point, the drilldown feature
changes into an XpoSearch feature; you can run XpoSearch to search for the problem in all events in the log for the drilled down timeslot, by
clicking the problem directly in the table.
For example, initiating Analytics for Applications results in a problems graph for all entities under Applications. Hovering over a measurement
point in the resulting problems graph displays a table with the distribution of problems for Windows Event Logs, App1, App2, and App3. Drilling
down on Windows Event Logs displays a graph of the distribution of the Security, System, and Application Logs over the timeslot of the previous
measurement point. Hovering over a measurement point displays a table of the distribution of problems in the Security, System, and Application
Logs. Drilling down on any of these logs shows the various problems in the logs and their distribution. For example, drilling down on the
Applications log shows problems message was not found and shutting down. At this point, you can click a problem to run XpoSearch to find all
events in the log in the timeslot of the measurement point, which contain this problem. For example, clicking message was not found
automatically runs XpoSearch for "message was not found" in log.Application.
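The drill-down searches described in this example follow a common pattern: the quoted problem text, scoped to the drilled-down entity. The query forms below are taken verbatim from the examples in this guide; no additional syntax is assumed:

```
"message was not found" in log.Application
"error code" in folder.Windows Event Logs in log.Application
```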
To view the distribution of problems at a specific point in time:
Hover over a measurement point in the graph.
A table displays the distribution of problems in the entities under the selected View Type (Folders and Logs, Applications, or Servers).
Drilling Down
You can drill down repeatedly on any entity under Folders and Logs, Applications, or Servers in a Problem Distribution Table, until you get to the
most detailed level - a table of the problems detected by Analytics.
To drill down on an entity in the Problem Distribution Table:
Click the name of the entity to drill down
OR
Click the Drill-Down button on the row of the entity.
Analyzing Analytics Problems
The Problems Summary table presents detailed information of the data displayed in the Problems graph (see Problems Graph). For each member
(from Folders and Logs, Applications, or Servers), a summary of the analysis is displayed for the specified time frame.
You can view the table in either Hierarchic view (the default) or Flat view.
From this table, you can:
Drill down on any member in the table to view the Analytics analysis only for a specific member and its sub members.
View a specific member's analysis presented individually on top of the problems graph
Run a search for a specific term that you would like to find in the Analytics under your current time frame (error code, exception, user,
etc.)
Use the paging to navigate through the entire list of problems, if the analysis presents more than 10 members.
Viewing Server Metrics
You can view details of the CPU, memory, and disk usage of servers in Analytics, provided that you selected to show metrics.
To view server metrics:
1. In the Problems Summary table, under the Server Metrics column, click the C, M, or D icon.
A table appears showing the CPU, memory, and disk usage.
Viewing the Problems Summary Table in Hierarchic/Flat View
You can view the Problems Summary table in either of the following two views:
Hierarchic View (default) – presents the members under their hierarchical context, i.e. parent folder/application/server on the top view
with drilldown options
Flat View – presents the list of logs without their hierarchical context
To view the Problems Summary table in Hierarchic View:
In the Problems Summary table toolbar, click the Hierarchic View button.
To view the Problems Summary table in Flat View:
In the Problems Summary table toolbar, click the Flat View button.
Drilling Down
You can drill down on any element in the table to see an analysis of its members. You can keep drilling down until you get to the lowest level - a
list of problems in the table.
To drill down on a table element:
Click the name of the member or click the drilldown button at the end of the table row.
Analytics refreshes to show the analysis for the selected member only.
Above the lowest level, the submembers of the selected member are listed in the table; at the lowest level, the problems are listed.
Searching For a Problem
Once you identify an interesting problem, XpoLog provides a number of options. You can either choose to investigate the data source for more
problems, or expand the analysis to more sources under the same Application or Server. In addition, a common option that is available is to
search for the problem in the data source using the search engine, or to search for other specific problems that were discovered in the Analytics
console.
Searching For a Problem in the Analytics Console
You can search for a problem in the Analytics console by clicking the Search button located on the left side of the Search and Navigation Bar
below the list of data sources or problems in the Problems Summary table.
For example, searching for "Failure" at the problem level, shows all problems containing the word "failure", such as "Login failure" and "Audit
failure". Searching for "Failure" at the log, application, or server level, shows the distribution of problems containing the word "Failure" in these
members.
To search for a problem in the Analytics console:
1. Below the Problems Summary table, on the left side of the Search and Navigation Bar, click the Search icon.
A Search textbox opens.
2. In the textbox, type the text to search for in the list of problems that was detected by Analytics, and then click the Search button.
The console loads data sources that contain the problems that meet your search criteria. If the search was made on a number of Logs,
Applications, or Servers, the search results are presented across those data sources.
Searching for a Problem in XpoSearch
You can search in the log for any problem that is detected by Analytics, by zooming into the XpoSearch search engine from any problem in the
Problems Summary table. If XpoSearch finds the problem, it displays it in the search results area, and the Analytics engine highlights the priority
accordingly.
To search for a problem in XpoSearch:
In the Problems Summary Table, click the problem or click the Search in XpoSearch button at the end of the row of the problem.
XpoSearch opens, running a search for the problem in the log events. The problem text is highlighted in the events of the log.
For example, drilling down on the Windows Event Logs folder lists the distribution of problems in the Security, Application, and Systems logs.
Drilling down on the Application log displays the problems in the log, including "error code". Clicking "error code" starts a search in XpoSearch: "error code" in folder.Windows Event Logs in log.Application, and the resulting events have "error code" highlighted in them.
Customizing the Severity of a Severe Problem
You can customize the severity level of a problem in the Most Severe Problems table. For example, you can change a problem assigned a
severity level of High to Medium or Low. Customization changes take effect only in future analyses.
To change the severity of a problem:
1. In the Most Severe Problems table, at the end of the row of the problem whose severity you want to change, click the Customize
problem severity link.
2. In the menu that opens, click the severity that you want to assign to the problem.
Excluding a Severe Problem from Analysis
You can exclude a problem in the Most Severe Problems table from future analyses.
To exclude a problem from future runs of Analytics:
1. In the Most Severe Problems table, at the end of the row of the problem which you want to exclude, click the Customize problem
severity link.
2. In the menu that opens, click Exclude from analysis.
Searching for a Severe Problem in XpoSearch
You can search in the log for any problem that is detected by Analytics as one of the ten most severe problems, by zooming into the XpoSearch
search engine from any problem in the Most Severe Problems table. If XpoSearch finds the problem, it displays it in the search results area, and the Analytics engine highlights the priority accordingly.
To search for a most severe problem in XpoSearch:
In the Most Severe Problems Table, click the problem or click the Search in XpoSearch button at the end of the row of the problem.
XpoSearch opens, running a search for the problem in the log events. The problem text is highlighted in the events of the log.
XpoLog Manager
XpoLog Center Main Features
XpoLog features multiple modules that offer proactive analysis, problem isolation, log correlation, log analysis, a log search engine, data
visualization, and proactive monitoring. The solution offers the following main features:
Advanced Logs Search Engine
Web based Log Viewer for any log
Comprehensive Live Data Visualization
Logs correlation
Logs Monitoring
Out of the Box Errors Detection - trends, anomalies, stats, etc.
Enterprise Security Integration
See XpoLog summary sheet for more information
See XpoLog data sheet for more information
XpoLog Center Modules
XpoLog Center includes the following modules: XpoLog Apps, XpoLog Search, XpoLog Analytics, and XpoLog Manager.
XpoLog Apps
XpoLog Apps centralizes the data visualization capabilities of the platform. Under each App a set of Live Dashboards can be managed to create a
live visualization of the data that is managed in XpoLog. The Apps provide an easy way to manage multiple visualizations under a logical structure
that makes it easier to identify issues and trends in the organization.
Accessible via the Apps tab in the main screen on the top left corner.
XpoLog Search
XpoLog Search (XpoSearch) allows you to perform centralized searches across multiple data sources. Using the XpoSearch interface, you can
search all the logs in XpoLog Center, including applications, servers, network devices, and database tables. You can search values using
common search syntax such as Boolean operators, wild cards, and regular expressions. Through its intuitive language, you can search specific
terms, combined phrases, any text, IP addresses, numbers, and more, and then view and analyze the results, while creating monitors, filters, and
reports. Advanced capabilities include complex search syntax for measuring time of events, computing averages, calculating aggregation in time
bucketing, and more.
Accessible via the Search tab in the main screen on the top left corner.
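The common search syntax mentioned above might look like the following illustrative queries. Only the quoted, scoped form is shown verbatim elsewhere in this guide; the Boolean and wildcard spellings here are assumptions for illustration, not confirmed XpoSearch syntax:

```
"login failure" in log.Application    (exact phrase, scoped to a log)
error AND timeout                     (Boolean operators - illustrative)
fail*                                 (wildcard - illustrative)
```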
XpoLog Analytics
XpoLog Analytics offers automated monitoring and problem isolation. It automatically scans the logs for errors, risks, and anomalies according to
predefined rules. It generates dynamic reports and sends alerts as soon as new risks or problems are detected. Each event is mapped to a risk
level according to the error message. Analytics also aggregates and computes statistics of many dimensions in the log events: the amount of
events over time, type of message over time, risks, anomalies, and more. When these aggregated statistics exceed the normal thresholds,
XpoLog alerts the relevant user.
Accessible via the Analytics tab in the main screen on the top left corner.
XpoLog Manager - Platform Administration
XpoLog Manager includes the administration screens for managing the information, which is covered in the Administrator Guide, as well as
several features for the end-user:
Log Viewer – A dedicated real-time log viewer that allows basic navigation through the various logs, opening specific logs, displaying
specific log records, filtering, customizing a log, and exporting a log.
Log Monitor – A monitoring engine that verifies the logs' contents and alerts when a rule matches the log records.
Accessible via the Manager entry in the main screen on the top right corner.
Accessing XpoLog Manager
You can access XpoLog Manager from any page in the application.
To access the XpoLog Manager console:
In the Tab Bar, click the XpoLog tab. The XpoLog Manager console opens. See XpoLog Manager User Interface Elements.
XpoLog Manager User Interface Elements
XpoLog Manager is equipped with a user-friendly graphical user interface (GUI), which provides a complete set of tools for administrators to
manage log information, and for end-users to navigate through system logs, and view, filter, customize, and export them. It also provides a Log
Monitor that verifies the logs' contents and sends alerts when a rule matches the log records.
The XpoLog Manager user interface includes the following main elements:
Element Description
Tab Bar On the left side, Apps, Search, and Analytics tabs. On the right side, the Manager tab. For details, see Tab Bar at XpoLog
Homepage.
Main Menu On the left side, includes the following menu items and submenus for performing actions in XpoLog Manager:
Log Actions
Administration
Configuration
Tools
Settings
The main XpoLog logo on the left-hand side is the Home button for navigating to the XpoLog homepage, and in organizations
where security is activated, the right-hand side also displays the Username and a Logout button.
Main Pane Contains icons that can be clicked to navigate to Log Viewer, Log Monitor, etc., as well as icons that can be clicked to perform the
following actions: Add Log, Add Logs Directory, Application Detection Wizard, Create Monitor, and Settings.
More Actions Section Contains several shortcuts to different administrative tasks: Create Application, Add Account, Add Task, and View Wizards.
Get Help Section The items in this section can be used to get help from the XpoLog support team: Submit ticket online, Online knowledge base,
Read tutorials, and Send email to XpoLog. Contact us by email (support@xplg.com), visit our online knowledge base, or submit
tickets online to get assistance from our team.
XpoLog Manager Main Menu
The Main Menu of the XpoLog Manager console includes the following items:
Element Description
Log Actions This menu enables navigating directly to the following consoles of the Log Manager:
Start Page
Log Viewer
Monitors
Administration This menu enables administrators to navigate directly to the following administrative consoles and wizards of the Log
Manager:
Folders and Logs – a console that presents all the folders and logs defined in XpoLog and enables users to create
and/or modify folders and logs.
Add Logs Directory – a wizard that enables adding multiple logs that are located under a directory to XpoLog.
Add Log – a wizard that enables adding a local or remote log to XpoLog.
AppTags – a console that presents all the AppTags defined in XpoLog and allows users to create and/or modify them.
Collection Policies – a console that manages the collection policies defined in XpoLog.
Cloud – Cloud accounts management.
See Administration Guide
Configuration This menu enables administrators to do the following:
Templates – configure predefined settings for a specific type of log; used to accelerate and automate the configuration
process. Templates include the log data pattern, filters, and metadata.
Save as Template – enables saving a log as a template.
Export Template – enables exporting user defined templates to another instance of XpoLog.
Import Template – enables importing user defined templates from another instance of XpoLog.
Global Filters – defining global filters that will be available on all logs when opened in the Log Viewer.
See Administration Guide
Tools This menu enables performing the following:
Export Log – exports a log from XpoLog
Import Log – imports a log to XpoLog, provided that it was exported with its configuration from XpoLog
Export Folder – exports a folder from XpoLog
Import Folder – imports a folder to XpoLog, provided that it was exported with its configuration from XpoLog
Address Book – presents all the connectivity accounts that are available in XpoLog and enables creating, modifying, or
removing the account.
Snapshots – presents all the snapshots that are available in XpoLog and enables viewing, modifying, exporting, and
removing snapshots.
Tasks – presents all the tasks (operations that can be executed by XpoLog based on a scheduler) that are available in
XpoLog and enables creating, modifying, and removing tasks.
System Status – presents the system status console.
Settings This menu enables administrators to open a page to configure the settings for the following:
License – used to update the license of XpoLog
General – includes tabs for configuring General, Security, Mail, Log Settings, Connection Policies, and Advanced settings.
Bug Tracking Systems – contains integration with several bug tracking systems, such as Bugzilla by the Mozilla project
and JIRA by Atlassian
Log Viewer – used to update settings common to all log views
Environment Variables – used to specify variables that can be used all across XpoLog
Audit – used to generate a filtered log view on audited data
About – used to view XpoLog’s version and installed patches
See Administration Guide
Username Only displayed in organizations where security is activated.
Logout button Only displayed in organizations where security is activated. Clicking this button logs you out of XpoLog Center.
Log Viewer
Any log that has been added to XpoLog (local, remote Windows machine, remote UNIX machine, database tables, remote XpoLog instances,
merged logs, and more) can be viewed, searched, tailed, and more in the Log Viewer in real time.
Once a log is mapped to XpoLog, a simple click on the log presents it in the Log Viewer console.
In the Log Viewer, you can open, view, and investigate multiple logs from multiple remote data sources in different tabs, for an enhanced view of
several logs. To view multiple log sources in a single screen you should use the XpoLog Search. The Log Viewer is a dedicated view per log
source in your browser.
You can specify the number of log records that are displayed in the Log Viewer, and can then easily and conveniently browse through the log
using the toolbar buttons, to navigate to the next page, previous page, end of log, or beginning of log.
You can also use the Search and Quick Filter features to perform fast searches and quick filters on the log, using specific terms, regular
expressions, time range, and more. For more complex searches, you can use a regular filter.
You can also save any filter in the system for later use.
Opening a Log in the Log Viewer
The Log Viewer can be opened from the homepage, or from the XpoLog console's Log Manager, Folders and Logs menu, or Log Actions menu
item.
The system folders and logs that can be opened in the Log Viewer are arranged in the left pane under Folders and Logs, according to a
hierarchy decided in the organization.
Note: A regular log is preceded by a Log icon. A merged log, composed of log records from more than one log, is preceded by a split Log icon
(i.e. with a line down the middle). A merged log has a Source column as its first column, which contains the name of the source log of the record.
To open the Log Viewer:
1. In the XpoLog homepage,
Under Quick Actions, click the Log Viewer icon
OR
Click the XpoLog tab, and in the XpoLog console that opens, click one of the following:
A log under Folders and Logs in the left pane; in this case, the log opens in the Log Viewer; proceed to the end of this
procedure.
The Log Viewer icon under Log Manager, and then a log under Folders and Logs in the left pane
Log Viewer in the Log Actions menu, and then a log under Folders and Logs in the left pane
The Log Viewer opens in the main pane.
The system folders and logs, arranged according to a hierarchy decided in the organization, are displayed in the left pane.
2. In the left pane, under Folders and Logs, expand the relevant folder until you reach the lowest level – the Log level, and then click the
log that you want to open.
The Log Viewer opens, displaying the selected log in the main pane in tabular view.
After a log is displayed in the Log Viewer, the Quick Filter, Filters, and Actions menus appear in the left pane, for quick filtering, filtering, and
performing actions on the log displayed in the Log Viewer.
Loading Realtime Records into the Log Viewer
At any time, you can load the last 25 realtime log records into the Log Viewer.
Note: The last 25 records are loaded, regardless of the number of records that you selected to display.
To load realtime records:
1. In the status bar, click the Tail button.
The Log Viewer displays the 25 tail records of the log.
The Tail button becomes red.
It is possible to deactivate the tail at any time by clicking the Stop button, located in the status bar to the left of the Tail button. In this case, the
Tail button reverts to its regular color (gray).
Maximizing Log Viewer
Upon opening the Log Viewer, the Log Viewer is displayed in the main pane, and the menu is displayed in the left pane. You can maximize
the Log Viewer over the entire width of the screen.
To maximize the Log Viewer:
In the top-left corner of the Log Viewer, click the Maximize button.
The left pane is hidden, and the Log Viewer is displayed on the entire screen.
The Maximize button becomes a Minimize button.
Restoring Log Viewer to Normal View
You can restore the maximized Log Viewer to its normal view.
To restore the Log Viewer to its normal view:
In the top-left corner of the Log Viewer, click the Minimize button.
The left pane is open, and the Log Viewer is displayed in the main panel.
The Minimize button becomes a Maximize button.
Refreshing the Log Viewer
At any time, you can refresh the Log Viewer to display the most recent log records. Also, refreshing is required after you change the Number of
Records value in the toolbar.
To refresh the Log Viewer:
1. In the status bar, click the Refresh button.
The Log Viewer is refreshed with the latest log records, according to the number of records selected in the toolbar.
Selecting the Log Viewer Display Mode
You can view the log records in the Log Viewer in either list view or tabular view (default).
To view the log records in list view:
In the Log Viewer toolbar, click List View.
The log records are listed in the Log Viewer in their raw format.
To view the log records in tabular view:
In the Log Viewer toolbar, click Tabular View.
A table is presented; its column headers are the field names of the log, and each table row contains the field values of a log record.
Displaying Specific Log Records
A log can have millions of records, so it is not possible to display them all at once. The Log Viewer enables you to select the number of records to display at a time: 25, 50, 100, 250, or 500. You can then use the navigation arrows to display the previous, first, next, or last group of records in the log.
For a simple display, the Log Viewer displays the final number of records that you selected. For a character string search, the selected number is
the number of records that are searched from the beginning of the log, and for a quick filter or regular filter, the selected number is the number of
records that are filtered from the beginning of the log.
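The grouping behavior described above amounts to simple pagination. The following is a hypothetical sketch of that logic (an illustration only, not XpoLog source code):

```python
# Hypothetical sketch of the Log Viewer's record grouping (not XpoLog code).
def record_groups(records, group_size):
    """Split a log into consecutive groups of at most group_size records."""
    return [records[i:i + group_size] for i in range(0, len(records), group_size)]

log = [f"record {n}" for n in range(1, 121)]   # a toy log of 120 records
groups = record_groups(log, 50)                # "Number of Records" set to 50

# First / Next / Previous / Last navigation is just an index into the groups.
first, last = groups[0], groups[-1]
```

With 120 records and a group size of 50, navigation moves between three groups of 50, 50, and 20 records.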
To select the number of log records to display in the Log Viewer:
1. In the navigation area in the toolbar, in the Number of Records textbox, click the down arrow and from the dropdown list that opens,
select the number of records to display.
2. In the status bar, click the Refresh button.
The Log Viewer displays the last group of records in the log, consisting of the number of records that you selected.
Displaying the Previous Group of Records
You can display the previous group of records, consisting of the number of records that you selected, provided that the currently displayed
records are not at the beginning of the log.
For example, if you selected to display 50 records, and you now click the Previous Records button, the previous 50 records are displayed.
To display the previous group of records in the log:
1. In the navigation area in the toolbar, click the Previous Records button.
The previous group of records in the log is displayed, consisting of the number of records that you selected.
Displaying the Next Group of Records
You can display the next group of records, consisting of the number of records that you selected, provided that the currently displayed records are
not at the end of the log.
For example, if you selected to display 100 records, and you now click the Next Records button, the next 100 records are displayed.
To display the next group of records in the log:
1. In the navigation area in the toolbar, click the Next Records button.
The next group of records in the log is displayed, consisting of the number of records that you selected.
Displaying the First Group of Records
You can display the first group of records, consisting of the number of records that you selected.
For example, if you selected to display 250 records, and you now click the First Records button, the first 250 records in the log are displayed.
To display the first group of records in the log:
1. In the navigation area in the toolbar, click the First Records button.
The first group of records in the log is displayed, consisting of the number of records that you selected.
Displaying the Last Group of Records
You can display the last group of records, consisting of the number of records that you selected.
For example, if you selected to display 500 records, and you now click the Last Records button, the last 500 records are displayed.
To display the last group of records in the log:
1. In the navigation area in the toolbar, click the Last Records button.
The last group of records in the log is displayed, with the group containing the number of records that you selected.
Quick Find - Searching for Text in Log Records
You can search for any character string in the log records using the Quick Find feature. The search displays in the Log Viewer the first group of records, consisting of the number of records selected in the toolbar, in which at least one record contains the character string, and highlights the character string in yellow. You can then navigate to the first, last, previous, or next group of records in the log, to view the highlighted character string in other records in the log. For example, searching for "Head" displays the first group of records in the log that contains the string "Head" in at least one of its records.
Note: The search is not case sensitive.
To search for text in your log records:
In the textbox adjacent to the Find button, type the character string to search for, and then click the Find button.
The text is highlighted in yellow in the first group of records in the log, consisting of the selected number of records.
A Reset button is displayed near the Find button, enabling you to reset the Log Viewer display to its state before the Find (see Resetting
the Log Viewer).
Quick Filtering Log Records
You can filter the log records to display only those records that contain the search string. The XpoLog Manager displays the first records in the log that contain the string, according to the number of records that you selected, and highlights the string in yellow in each record. It also places a Zoom-in icon at the head of each filtered record. For example, filtering for "Get" displays the first records in the log containing the string "Get", according to the number of records that you selected.
Note: The search is not case sensitive.
To quick filter log records:
In the textbox adjacent to the Filter button, type the string to use as filtering criteria, and then click the Filter button.
Only those records containing the string are displayed, according to the number selected in the dropdown box, and the string is
highlighted in yellow in each record.
A Zoom-in icon is displayed to the left of each record.
A Reset button is displayed near the Filter button, enabling you to reset the Log Viewer display to its state before the Quick Filter.
The Quick Filter criteria is saved, and can later be run by selecting it from the Filter Selection dropdown list.
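Conceptually, the quick filter keeps only the records that contain the string, matching case-insensitively as noted above. A minimal sketch of that behavior (illustration only):

```python
# Minimal sketch of case-insensitive quick filtering (not XpoLog internals).
def quick_filter(records, text):
    needle = text.lower()
    return [r for r in records if needle in r.lower()]

records = [
    "GET /index.html 200",
    "POST /login 302",
    "get /favicon.ico 404",
]
matches = quick_filter(records, "Get")   # matches both "GET" and "get"
```

Because the comparison is lowercased on both sides, "Get" matches records containing "GET" or "get", mirroring the case-insensitive behavior described in the note.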
Filtering Log Records
Filtering log data in order to find a specific data format, ID, problem, or other piece of information is crucial in any data mining process. The filtering mechanism allows visual, form-based definition of filtering rules that either search for data in the log source or filter out noise from the view, in order to focus on a specific data set.
This section explains how to use, create, and edit filters, and how to browse the log data while a filter is actively applied.
Creating a New Filter
You can create a new filter for the log displayed in the Log Viewer, by filling in the parameters in the customized form that XpoLog provides for
this log.
A newly created filter is automatically saved in the system, and can be used at a later time, by simply selecting it from a list of saved filters.
To create a new filter:
1. In the Log Viewer toolbar, click the Filter Menu button, and in the menu that appears, click New.
The Filter definition dialog box opens. A name is automatically generated for the filter.
2. In Name, type a meaningful name for the filter to replace the name automatically generated by the system.
3. Leave the use search engine checkbox selected; this instructs the filter to use indexing, which expedites the search.
4. In Description, type a meaningful description for the filter.
5. In Query, type a search query using the simple and complex search syntax rules.
6. Under Date and Time, select the Dates limit option to show the log records that arrived before or after a specific date and time, or select
the second option (dynamic) to show log records from a period of time relative to the time that the filter is run.
7. Under Text, in the textbox, type a numeric or character string, and from the dropdown list, indicate whether to search for records that equals / not equals (for numeric strings), or contain / not contain (for character strings) the text.
8. Select one of the following checkboxes:
match whole word - only highlight words in records that are an exact match to the searched text, and do not highlight words that contain
the searched text.
case sensitive - only highlight words in records that are the same case as the searched text.
regular expression - only highlight regular expressions in the records that match the searched text.
9. Repeat steps 7 and 8 for each search text (up to four).
10. Select one of the following options:
search in all columns - to search for the text in all columns of the record
search in these columns - to search in specific columns of the record; in this case, select a column to add, and click Add to place it
under the Only list.
11. If you want Analytics to regard this filter as a predefined rule, set the severity of the filter rule to Low, Medium, or High. Otherwise, leave None.
12. Click Save.
The filter is saved and run on the log.
Editing an Existing Filter
You can edit the parameters of an existing filter, as required.
To edit a filter:
1. In the Log Viewer toolbar, click the Saved Filters button.
A list opens, displaying all the saved filters for this log.
2. Select the filter that you want to edit.
3. Click the Filter Menu button, and in the menu that appears, click Edit.
The Filter definition dialog box for the selected filter opens.
4. Modify the parameters as required, and then click Save.
The modifications are saved and the filter runs on the log.
Creating a New Filter from a Combination of Existing Filters
You can create a new filter with a new name from the combination of existing filters. You must also specify whether records are to be filtered
according to the criteria of all the combined filters (AND operation) or any of the filters (OR operation).
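The AND/OR combination described above can be sketched as combining predicates, where each saved filter is a function over a record. A hypothetical illustration (the predicate names are invented for the example):

```python
# Sketch: combining saved filters with AND / OR (illustration only).
def combine(filters, op):
    """filters: list of record predicates; op: 'AND' or 'OR'."""
    if op == "AND":
        return lambda record: all(f(record) for f in filters)
    return lambda record: any(f(record) for f in filters)

# Two hypothetical saved filters:
has_error = lambda r: "ERROR" in r
from_db   = lambda r: "database" in r

both   = combine([has_error, from_db], "AND")  # record must match all filters
either = combine([has_error, from_db], "OR")   # record must match any filter
```

With AND, a record passes only if every combined filter matches it; with OR, a single match suffices.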
To combine filters into a single filter:
1. In the Log Viewer toolbar, click the Filter Menu button, and in the menu that appears, click Multi.
The Multi Filter definition dialog box opens. A name is automatically generated for the filter. A list of all the filters existing for this log
appears in the Filters section under the Filters list.
2. In Name, type a meaningful name for the filter to replace the name automatically generated by the system.
3. In the Filters section, under the Filters list, select the filter to include in the new filter, and click Add.
The selected filter is placed under the Filters to use list.
Note: You can remove a filter from the Filters to use list, by selecting it and clicking Remove.
4. Repeat Step 3 for all filters that you want to include in the combined filter.
5. In Logical operation between filters, select AND or OR.
6. If you want Analytics to regard this filter as a predefined rule, set Filter Rule Severity to Low, Medium, or High. Otherwise, leave None.
7. Click Save.
The filter is saved and run on the log.
Creating a Filter Based on an Existing Filter
You can create a filter that is similar to an existing filter, by using the existing filter as a basis for the new one, and then modifying the parameters
unique to this filter, as required. This saves you the time of defining the filter from scratch.
To create a filter based on an existing filter:
1. In the Log Viewer toolbar, click the Saved Filters button.
A list opens, displaying all the saved filters for this log.
2. Select the filter that you want to use as a basis for the new filter.
3. Click the Filter Menu button, and in the menu that appears, click Duplicate.
The Filter definition dialog box for the selected filter opens. A name is automatically generated for the duplicate: copy of <original filter name>.
4. In Name, type a meaningful name for the new filter to replace the automatically generated name.
5. Modify the parameters as required, and then click Save.
The new filter is saved and the filter runs on the log.
Creating a Filter from the Negation of Existing Filter(s)
You can create a new filter that displays records resulting from the negation of an existing filter or the combination of existing filters. In the case
that the filter is created from the negation of a combination of records, you must specify whether records in the combination are to be filtered
according to the criteria of all the combined filters (AND operation) or any of the filters (OR operation). Remember that the negation of A AND B is
NOT A OR NOT B, and that the negation of A OR B is NOT A AND NOT B.
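The De Morgan identities mentioned above, which govern what the Negate option produces, can be checked mechanically over every combination of truth values:

```python
# Verifying De Morgan's laws, which govern the Negate filter's AND/OR behavior.
from itertools import product

for a, b in product([True, False], repeat=2):
    # NOT (A AND B) is equivalent to (NOT A) OR (NOT B)
    assert (not (a and b)) == ((not a) or (not b))
    # NOT (A OR B) is equivalent to (NOT A) AND (NOT B)
    assert (not (a or b)) == ((not a) and (not b))
```

This is why negating a filter combination flips the logical operation: the negation of an AND-combined filter behaves like an OR of the negated filters, and vice versa.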
To create a filter that is the negation of existing filter(s):
1. In the Log Viewer toolbar, click the Filter Menu button, and in the menu that appears, click Negate.
The Multi Filter definition dialog box opens. A name is automatically generated for the filter. A list of all the filters existing for this log
appears in the Filters section under the Filters list.
2. In Name, type a meaningful name for the filter to replace the name automatically generated by the system.
3. In the Filters section, under the Filters list, select the filter to include in the new filter, and click Add.
The selected filter is placed under the Filters to use list.
Note: You can remove a filter from the Filters to use list, by selecting it and clicking Remove.
4. Repeat Step 3 for all filters that you want to include in the combined filter.
5. In Logical operation between filters, select AND or OR.
6. If you want Analytics to regard this filter as a predefined rule, set Filter Rule Severity to Low, Medium, or High. Otherwise, leave None.
7. Click Save.
The filter is saved and run on the log.
Removing a Filter
You can remove a filter from the list of saved filters of a log.
To remove a filter:
1. In the Log Viewer toolbar, click the Saved Filters button.
A list opens, displaying all the saved filters for this log.
2. Select the filter that you want to remove.
3. Click the Filter Menu button, and in the list that appears, click Remove.
The Saved Filters list is refreshed, and no longer includes the removed filter.
Running a Saved Filter
All filters that have been created for a log are saved in the system and can be quickly accessed and run from the Log Viewer toolbar.
Note: (G) preceding a filter name indicates that it is a global filter, as defined in Configuration > Global Filters (see Administrator's Guide).
To run a saved filter:
1. In the Log Viewer toolbar, click the Saved Filters button.
A list opens, displaying all the saved filters for this log.
2. Select the filter that you want to run, and then click the Go button.
The selected filter is run on the log, and the results are displayed in the Log Viewer.
The status bar indicates the number of records that were filtered, and the filtering time.
Zooming Into a Filtered Record
Log Viewer facilitates troubleshooting to find the cause of an event, by enabling you to zoom into any filtered log record to see the log records
preceding and following it. Zooming into a filtered record displays an equal number of records preceding and following the zoomed in record, with
the total number of displayed records equivalent to the number selected in the toolbar. For example, if you select to display 25 records in the Log
Viewer, zooming into a filtered record displays 12 records before and 12 records after, and emphasizes the zoomed-in record in boldface.
To zoom into a filtered log record:
Click the Zoom-In icon at the head of the filtered record which you want to zoom into.
The Log Viewer displays the log records preceding and following the zoomed-in record, and emphasizes the selected record in boldface.
Zoom-Out and Back buttons appear in the toolbar.
Zooming Out of a Record
After zooming into a filtered record, you can zoom out of the record to return to the previous log viewer state.
To zoom out of a filtered log record:
In the Log Viewer toolbar, click the Zoom-Out or Back button.
The Log Viewer is restored to its previous state.
Resetting the Log Viewer
After performing a search, quick filter, or filter on the Log Viewer, a Reset button appears in the toolbar, enabling you to reset the Log Viewer to its
state prior to the performed action.
To reset the Log Viewer:
In the Log Viewer toolbar, click the Reset button.
The Log Viewer presents the records that were displayed prior to the action that was performed on it.
Interrupting a Process
You can interrupt any process running on the log in your Log Viewer, such as a search, quick filter, or regular filter.
To stop a process:
1. In the status bar, click the Stop button.
The process is interrupted.
Performing Actions on a Log
Actions can be performed on a log displayed in the Log Viewer from the Actions menu in the left pane and from the right-click menu of log
record(s).Actions can also be performed on any log in the system displayed in the left pane under Folders and Logs from the right-click menu of a log.
The following table lists the log actions, how they can be accessed, and a link to the topic that explains the procedures in full detail.
Log Action | Perform Action From ... | Explained in ...
Duplicate | Right-click menu of a log under Folders and Logs in the left pane | See Administrator's Guide.
Edit | Right-click menu of a log under Folders and Logs in the left pane | See Administrator's Guide.
Remove | Right-click menu of a log under Folders and Logs in the left pane | See Administrator's Guide.
Export | Actions menu in the left pane of the Log Viewer; Right-click menu of a log under Folders and Logs in the left pane; Right-click menu of log record(s) displayed in the Log Viewer | Exporting a Log
Add Monitor | Right-click menu of a log under Folders and Logs in the left pane | Log Monitor
Save as Template | Actions menu in the left pane of the Log Viewer; Right-click menu of a log under Folders and Logs in the left pane | Saving a Log as a Template
Copy | Right-click menu of selected log record(s) displayed in the Log Viewer | Copying Log Record(s)
Open a bug in Bugzilla | Right-click menu of selected log record(s) displayed in the Log Viewer. Note: This action is available only if Bugzilla has been configured in Settings > Bug Tracking Systems. | Sending Log Bugs to a Bug Tracking System
Publish an issue in JIRA | Right-click menu of selected log record(s) displayed in the Log Viewer. Note: This action is available only if JIRA has been configured in Settings > Bug Tracking Systems. | Sending Log Bugs to a Bug Tracking System
Find | Right-click menu of a selected log record field displayed in the Log Viewer | Finding Occurrences of a Log Record Field Value
Filter | Right-click menu of a selected log record field displayed in the Log Viewer | Filtering Records According to Record Field Value
Filter in..., Filter date in... | Right-click menu of a selected log record field displayed in the Log Viewer | Filtering a Merged Log for a Log Viewer Record Value
Search | Right-click menu of a selected log record field displayed in the Log Viewer | Searching for a Log Record Field Value in a Search Engine
Add Snapshot / Add to <snapshot name> / Select Snapshot | Right-click menu of selected log record(s) displayed in the Log Viewer | Adding Log Records to a Snapshot
Customize | Actions menu in the left pane of the Log Viewer; Right-click menu of selected log record(s) displayed in the Log Viewer | See Administrator's Guide.
Print | Actions menu in the left pane of the Log Viewer; Right-click menu of selected log record(s) displayed in the Log Viewer | Printing the Log Viewer Records
Create Monitor | Actions menu in the left pane of the Log Viewer | Log Monitor
Selecting Multiple Log Viewer Records
You can select multiple Log Viewer records (log or snapshot records), and then right-click the records to perform an action on all the selected records at once. You can either select a group of consecutive log records, or records displayed anywhere on the Log Viewer screen.
To select multiple consecutive Log Viewer records:
In the Log Viewer, click the first record, press and hold down SHIFT, and then click the last record.
All the records between the first and last clicked record inclusive, are selected.
Note: You can also press and hold down SHIFT and then click a record, to select all records preceding and including the selected record in the
Log Viewer screen.
To select multiple Log Viewer records:
In the Log Viewer, for each record that you want to select, press and hold down CTRL, and then click the record.
Note: You can deselect any selected record, by pressing and holding down CTRL and clicking the record.
Copying Log Record(s)
You can copy a single or multiple log records.
You can then paste the copied record(s) in any document, email, and more, using standard Paste methods (i.e. by right-clicking the mouse and then selecting Paste, or by pressing CTRL+V on your keyboard).
To copy a log record:
In the Log Viewer, right-click a log record, and from the menu that appears, click Copy.
The record is copied to the clipboard.
To copy multiple log records:
In the Log Viewer, select multiple log records (see Selecting Multiple Log Viewer Records), right-click the selected records, and from the
menu that appears, click Copy.
The records are copied to the clipboard.
Note: The Copy feature is currently operational in Internet Explorer only.
Exporting a Log
XpoLog enables you to export the log displayed in the Log Viewer either from the Actions menu in the left pane or from Tools > Export Log in the main menu. You can also export any log (not necessarily the one displayed in the Log Viewer) from the right-click menu of a log under the Folders and Logs menu in the left pane. You can either export the log with its configuration, i.e. in the format in which it is displayed in the Log Viewer, so that it can be easily imported into the Log Viewer at a later time, or you can export the log data after transforming it into another format (XML, CSV, Tab Delimited, Raw data, or SQL Database Table).
To export a log:
1. Open a log in the Log Viewer (see Opening a Log in the Log Viewer), and in the Log Viewer left pane, open the Actions menu, and click
Export or in the main menu, in the Tools menu item, click Export Log.
Alternately, in the Log Viewer left pane, under the Folders and Logs menu, right-click a log, and click Export.
The Export Log page opens.
2. Select one of the following options:
Export the log together with its configuration, to enable future import
Only transform the current data to the following type, and select from the dropdown list, one of the following types: XML, CSV, Tab
Delimited, Raw data, or SQL Database Table
3. Click Export.
If a database account is not available for the file type selected, the System Notification page opens, with the notification: No databases
accounts available. Please define accounts first. In this case, click OK, define an account, and then perform the export.
Otherwise, the zip log status page opens with the notification: The zip was created successfully, and the File Download dialog box also opens. In the File Download dialog box, click Save.
4. In the Save as dialog box that opens, select the Save in location and type the file name of the zipped file, and then click Save.
The file is downloaded to the selected location.
5. In the Zip log status page, click the back to log viewer link.
The Log Viewer opens, displaying the last log or snapshot that was downloaded there (and not necessarily the one that you exported).
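As a generic illustration of the "transform the current data to another format" option, parsed log records with named fields could be serialized to CSV as follows (a sketch under assumed field names; XpoLog performs this conversion internally):

```python
import csv
import io

# Generic sketch: writing tabular log records as CSV (not XpoLog internals).
# The field names below are hypothetical examples.
records = [
    {"Date": "2013-01-01 10:00:00", "Priority": "INFO", "Message": "Server started"},
    {"Date": "2013-01-01 10:00:05", "Priority": "WARN", "Message": "Low disk space"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Date", "Priority", "Message"])
writer.writeheader()          # header row: the log's field names
writer.writerows(records)     # one row per log record
csv_text = buf.getvalue()
```

The resulting CSV has the log's field names as the header row and one row per record, matching the tabular view of the Log Viewer.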
Printing the Log Viewer Records
You can print in tabular format all or selected records currently displayed in the Log Viewer.
Printing All Records Displayed in the Log Viewer
To print all records currently displayed in the Log Viewer:
1. In the Log Viewer left pane, open the Actions menu, and select Print.
A Web page opens with the Log Viewer records arranged in tabular format. The Print options dialog box is also displayed.
2. Select the relevant print options and then click Print.
The Web page is printed to the selected destination.
Printing Selected Record(s) Displayed in the Log Viewer
To print selected record(s) currently displayed in the Log Viewer:
1. Open a log or snapshot in the Log Viewer (see Opening a Log in the Log Viewer), click a single record or multiple records (see Selecting
Multiple Log Viewer Records), and right-click.
2. In the right-click menu that opens, click Print.
A Web page opens with the selected Log Viewer record(s) arranged in tabular format. The Print options dialog box is also displayed.
3. Select the relevant print options and then click Print.
The Web page is printed to the selected destination.
Finding Occurrences of a Log Record Field Value
From a selected Log Viewer record, you can quickly and easily find all the occurrences of one of its field values, in all the records of the log
displayed in the Log Viewer. This is similar to the Quick Find (see Quick Find - Searching for Text in Log Records).
To find occurrences of a field value:
In the Log Viewer, right-click the field value that you want to find in all records of the log, and in the menu that appears, click Find.
The field value is displayed in the box adjacent to the Find textbox.
The search text is highlighted in yellow in all records in the Log Viewer.
Adding Log Records to a Snapshot
You can add log records to an existing snapshot, or to a new snapshot that you create. These snapshots can assist you in your workflow and in
troubleshooting.
Adding Log Records to a New Snapshot
To add log records to a new snapshot:
1. In the Log Viewer, select a single record or multiple records (see Selecting Multiple Log Viewer Records) to place in the snapshot, and
then right-click.
If there are no snapshots defined in the system, the menu displays the Add Snapshot option. Proceed as described in Creating the First
Snapshot in the System.
If there is at least one snapshot already defined in the system, the menu displays the Select Snapshot option. Proceed as described in Adding Snapshots to the Snapshot Library.
Adding Log Records to a Recent Snapshot
To add log records to the snapshot to which records were most recently added:
1. In the Log Viewer, select a single record or multiple records (see Selecting Multiple Log Viewer Records) to place in the snapshot, and
then right-click.
2. From the right-click menu, click Add to <name of the most recent snapshot>.
The selected log records are added to the snapshot.
Adding Log Records to the Snapshot of Your Choice
To add log records to the snapshot of your choice:
1. In the Log Viewer, select a single record or multiple records (see Selecting Multiple Log Viewer Records), and then right-click.
2. From the right-click menu, click Select Snapshot.
The Select Snapshot dialog box is displayed.
3. From the list of snapshots, select the snapshot to which to add the log records, and then click Apply.
Note: If you have many snapshots defined in the system, you can filter the snapshot list according to the name of the snapshot. Typing
more characters in the Name Filter field of the filter, minimizes the number of selectable snapshots appearing in the list.
The selected log records are added to the snapshot. If the snapshot already contains records from another log, a Security record is
added to the snapshot.
Searching for a Log Record Field Value in a Search Engine
XpoLog enables searching for a log record field value in XpoSearch (see Searching for a Field Value in XpoSearch), or in external search engines, such as the defaults – Google and Java Docs (see Searching for a Field Value in Google and Searching for an Error in Java Docs). The search can also be conducted in additional external search engines, as set in Settings > Log Viewer (see Administrator's Guide).
Searching for a Field Value in Google
You can look up an XpoLog record field value in an external search engine, such as Google, to find out more about it.
To search for a field value in Google:
1. In the Log Viewer, select a log record, and right-click the field value which you want to search for.
The right-click menu opens.
2. In the right-click menu, click Search > Google.
Google opens in a separate webpage with the results of running the search for the selected field value (displayed in the Google search
box).
Searching for a Field Value in XpoSearch
You can run XpoSearch to search for occurrences of a log record field value in other records in the log, directly from the Log Viewer. You can do
this by right-clicking the field value in the record in the Log Viewer, and selecting Search > XpoSearch.
To search for a field value in other log records:
1. In the Log Viewer, select a log record, and right-click the field value which you want to search for.
The right-click menu opens.
2. In the right-click menu, select Search > XpoSearch.
XpoSearch opens in a separate webpage, with a search running for the selected field value (displayed in the search query). After the
search is completed, all log records with the searched for field value are displayed in the results.
Searching for an Error in Java Docs
You may notice a Java error in a record of your log. XpoLog enables you to look up this error in the Java Docs online documentation directly from
the Log Viewer. All you have to do is right-click the Java error in the log field that you want to look up, select Search > JDocs, and a Java Docs
webpage opens with the results of the Java Docs search engine look-up of the selected term.
To search for an error in Java Docs:
1. In the Log Viewer, select a log record, and right-click the field value (Java error), which you want to look up in Java Docs.
The right-click menu opens.
2. In the right-click menu, click Search > jDocs.
The Java Docs webpage opens, running a search for the log term in Java Docs.
Sending Log Bugs to a Bug Tracking System
XpoLog provides integration to several bug tracking systems, such as Bugzilla by Mozilla, and JIRA by Atlassian.
XpoLog enables you to submit a bug or issue from single or multiple log records displayed in the Log Viewer, directly into a bug tracking system,
provided that you have defined the bug tracking system in Settings > Bug Tracking Systems.
Opening a Bug in Bugzilla
You can investigate a bug that you see in a log, by sending single or multiple records to the Bugzilla bug tracking system.
To open a bug in Bugzilla based on log record(s):
1. In the Log Viewer, select a single log record or multiple log records (see Selecting Multiple Log Viewer Records), and then right-click.
2. In the right-click menu that appears, click Open a bug in Bugzilla.
The Add a Bug notification box appears to inform you that it is loading information into Bugzilla, and then the Bugzilla Login page
appears.
3. In the Bugzilla Login page, type your Username and Password, and click Login.
Bugzilla opens with the bug information loaded from the log record(s), ready for investigation.
Publishing an Issue in JIRA
You can investigate an issue that you see in a log, by sending single or multiple log records to the JIRA issue tracking system.
To publish an issue in JIRA based on log record(s):
1. In the Log Viewer, select a single log record or multiple log records (see Selecting Multiple Log Viewer Records), and then right-click.
2. In the right-click menu, click Publish an issue in JIRA.
The Add a Bug notification box appears to inform you that it is loading information into JIRA, and then the JIRA Login page appears.
3. In the JIRA Login page, type your Username and Password, and click Login.
JIRA opens, and investigates the bug loaded from the log record(s).
Filtering Records According to Record Field Value
You can filter all the records displayed in the Log Viewer, according to the selected field value of a record. For example, right-clicking the Process
Id field value of 1506 in Syslog, and clicking Filter, displays all records containing 1506 in the Process Id field, and highlights the value in the
filtered records.
To filter records according to a record field value:
In the Log Viewer, right-click the field value according to which you want to filter all records of the log, and in the menu that appears, click
Filter.
The filter value is displayed in the box adjacent to the GO button.
The records having the selected value appear in the Log Viewer, with the filtering criteria highlighted in yellow in each record in the Log
Viewer.
Running a Composite Filter
Once you have run a filter on records in the Log Viewer, the Composite Filter feature becomes available for running a filter on two or more fields
of a record. The composite filter enables specifying whether the filtered records should contain all field values (AND) or at least one field value
(OR). For example, right-clicking the Process Name field value of popa3d in a record resulting from the filtering example above, and clicking
Composite Filter > AND, displays all records from the log file that contain both 1506 in the Process Id field and popa3d in the Process Name
field, whereas clicking Composite Filter > OR displays all records from the log file that contain either 1506 in the Process Id field or popa3d
in the Process Name field.
To run a composite filter according to a record's field values:
In the Log Viewer, right-click the field value according to which you want to run the composite filter, and in the menu that appears, click
Composite Filter > AND or Composite Filter > OR.
The composite filter is displayed in the box adjacent to the GO button.
The records having the selected value appear in the Log Viewer, with the filtering criteria highlighted in yellow in each record in the Log
Viewer.
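The AND/OR semantics described above can be sketched in a few lines of Python. This is a conceptual illustration only, not XpoLog code; the field names and values follow the Syslog example above.

```python
# Conceptual sketch of composite filtering (not XpoLog code).
# Each record is a dict of parsed log fields, as in the Syslog example above.
records = [
    {"process_id": "1506", "process_name": "popa3d", "message": "session opened"},
    {"process_id": "1506", "process_name": "sshd",   "message": "login failed"},
    {"process_id": "2231", "process_name": "popa3d", "message": "session closed"},
]

criteria = {"process_id": "1506", "process_name": "popa3d"}

# Composite Filter > AND: every field value must match.
and_matches = [r for r in records
               if all(r[f] == v for f, v in criteria.items())]

# Composite Filter > OR: at least one field value must match.
or_matches = [r for r in records
              if any(r[f] == v for f, v in criteria.items())]

print(len(and_matches))  # 1 record matches both criteria
print(len(or_matches))   # 3 records match at least one criterion
```

Only the first record satisfies both criteria, while all three records satisfy at least one, which mirrors the AND versus OR behavior in the Log Viewer.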
Filtering a Merged Log for a Log Viewer Record Value
You can filter a merged log according to the value of a date or non-date field of a record in the Log Viewer. Right-clicking a field value in a record
displays in the right-click menu either Filter in for a non-date value, or Filter in date for a date value.
To filter a merged log:
1. In the Log Viewer, right-click a field value in a record, and click Filter in or Filter in date (depending on whether the right-clicked value is
a date or non-date value).
The Select Log(s) to Filter dialog box displays a logs selection tree.
2. In the logs selection tree, select the checkboxes of the logs on which to run the filter, and then click the View button.
An ‘on the fly’ merged file, containing the records filtered from the selected logs according to the clicked field value, is displayed in the
Log Viewer, with the filtering criteria highlighted in yellow.
Log Monitor
In Log Monitor, XpoLog enables you to create rule(s) to monitor any log in the system. In Log Monitor, you define parameters and rules, the
scheduling of the monitor, and the alerts that are to be sent if matching events (that meet the defined rules) are detected.
These defined monitors run automatically on the specified logs at the predefined scheduled times, with XpoLog's advanced monitoring engine
searching the log for events that match the defined rules. The Monitor sends alerts to notify users of events in the log that match the defined
rules. You can choose the alert type from a wide range of available alerts - Email, SNMP trap, JMS message, Script, and more. A Monitor can
attach the matching log events, with the detected errors, to the alerts that it sends.
You can also create a complex monitoring mechanism by creating multiple Monitors that are executed together and report on failures in the logs.
Please see the Administration Guide for details.
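The rule/alert structure described above can be sketched conceptually. This is an illustration only, not XpoLog's actual monitoring engine; all names here are invented.

```python
# Conceptual sketch of a log monitor (not XpoLog's actual engine).
# A monitor couples a rule (a predicate over log lines) with an alert action.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Monitor:
    name: str
    rule: Callable[[str], bool]               # returns True for a matching event
    alert: Callable[[str, List[str]], None]   # e.g. email, SNMP trap, script

    def run(self, log_lines: List[str]) -> List[str]:
        """Scan the log and fire the alert, attaching the matching events."""
        matches = [line for line in log_lines if self.rule(line)]
        if matches:
            self.alert(self.name, matches)
        return matches

# Hypothetical usage: alert on ERROR lines. A scheduler would call run()
# at the monitor's predefined times.
alerts_sent = []
m = Monitor(
    name="errors-in-app-log",
    rule=lambda line: "ERROR" in line,
    alert=lambda name, events: alerts_sent.append((name, events)),
)
matched = m.run(["INFO start", "ERROR disk full", "INFO done"])
print(matched)  # ['ERROR disk full']
```

In XpoLog itself, the rule, schedule, and alert channel are all configured through the Log Monitor UI rather than in code.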
Snapshot Management
XpoLog enables you to capture a set of log records into a snapshot. These snapshots can be used during your workflow and assist
you in troubleshooting and in future analysis.
Snapshots defined in the system are global, and can include records from various logs in the system. This means that you can add log records
displayed in the Log Viewer to a new snapshot or to an already existing snapshot (see Adding Log Records to a Snapshot).
Once a snapshot is defined in the system, you can access the management options from the Tools > Snapshots menu, from the right-click menu
of a snapshot under the Snapshots menu in the left pane, or from the right-click menu of selected snapshot record(s) in the Log Viewer.
The following list describes the snapshot actions, where each action can be performed from, and the topic that explains the procedure in full detail.
View
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, or the View button in the main menu Tools > Snapshots item.
- Explained in: Viewing a Snapshot
Edit
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, the Edit button in the main menu Tools > Snapshots item, or the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Editing a Snapshot Definition
Remove or Delete
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, or the Delete button in the main menu Tools > Snapshots item.
- Explained in: Removing a Snapshot
Export
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, the Export button in the main menu Tools > Snapshots item, or the right-click menu of log record(s) displayed in the Log Viewer.
- Explained in: Exporting a Snapshot
Copy to Clipboard
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane.
- Explained in: Copying a Snapshot's Contents
Copy
- Perform from: the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Copying a Snapshot's Contents
Copy Snapshot Link
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, or the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Copying a Snapshot Link
Open a bug in Bugzilla
- Note: This action is available only if Bugzilla has been configured in Settings > Bug Tracking Systems.
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, or the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Sending Snapshot Bugs to a Bug Tracking System
Publish an issue in JIRA
- Note: This action is available only if JIRA has been configured in Settings > Bug Tracking Systems.
- Perform from: the right-click menu of a snapshot under the Snapshots menu in the left pane, or the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Sending Snapshot Bugs to a Bug Tracking System
Find
- Perform from: the right-click menu of a selected snapshot record field displayed in the Log Viewer.
- Explained in: Finding Occurrences of a Log Record Field Value
Filter
- Perform from: the right-click menu of a selected snapshot record field displayed in the Log Viewer.
- Explained in: Filtering Records According to Record Field Value
Filter in..., Filter date in...
- Perform from: the right-click menu of a selected snapshot record field displayed in the Log Viewer.
- Explained in: Filtering a Merged Log for a Log Viewer Record Value
Search
- Perform from: the right-click menu of a selected snapshot record field displayed in the Log Viewer.
- Explained in: Searching for a Snapshot Record Field Value in a Search Engine
Remove from Snapshot
- Perform from: the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Removing Record(s) From a Snapshot
Customize
- Perform from: the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: See Administrator's Guide.
Print
- Perform from: the right-click menu of selected snapshot record(s) displayed in the Log Viewer.
- Explained in: Printing Snapshot Record(s)
Creating a Snapshot
You can create a snapshot by selecting at least one record in the Log Viewer pane, and then selecting either of the following two options:
Add Snapshot – only appears in the menu if no snapshots exist in the system. Follow the procedure in Creating the First Snapshot in the
System.
Select Snapshot – only appears in the menu if snapshots already exist in the system. From this option, you can create a new snapshot.
Follow the procedure in Adding Snapshots to the Snapshot Library.
Adding Snapshots to the Snapshot Library
Perform the following procedure to add a snapshot to the library of already existing snapshots in the system.
To add a snapshot to the library of snapshots:
1. In the Log Viewer, while pressing Ctrl on the keyboard, select the log records that you want to add to a snapshot, and then right-click.
2. From the right-click menu, select Select Snapshot.
The Select Snapshot dialog box opens.
3. In the Select Snapshot dialog box, click the Create New button.
The Create Snapshot dialog box opens.
4. In the Create Snapshot dialog box, type a Name and Description for the snapshot, and then click Save.
The newly created snapshot appears on the top of the Snapshots list in the left pane.
Creating the First Snapshot in the System
Perform the following procedure to create the first snapshot in the system.
To create the first snapshot:
1. In the Log Viewer, while pressing Ctrl on the keyboard, select the log records that you want to add to a snapshot, and then right-click.
2. From the right-click menu, select Add Snapshot.
The Create Snapshot dialog box opens.
3. In the Create Snapshot dialog box, type a Name and Description for the snapshot, and then click Save.
The newly created snapshot appears under Snapshots in the left pane.
Accessing the Snapshot Management Options
You can manage a snapshot either from the Tools > Snapshots menu, or from the right-click menu of a snapshot under the Snapshots menu in
the left pane. You can also modify the contents of a snapshot and perform other management options from the right-click menu of a snapshot
record displayed in the Log Viewer.
Note: From the Tools > Snapshots menu, you can view, edit, delete, or export a snapshot. From the Snapshots menu, you can also copy a
snapshot link or copy a snapshot's contents.
Accessing the Snapshot Management Options from the Snapshots Menu
To access the snapshot management options:
1. In the left pane, open the Snapshots menu.
The snapshots defined in the system are displayed.
2. Right-click a snapshot.
The right-click menu displays the actions that can be performed on the snapshot: View, Edit, Remove, Export, Copy to Clipboard, and
Copy snapshot link. The actions Open a bug in Bugzilla and Publish an issue in JIRA are also available, provided that they have
been configured in Settings > Bug Tracking Systems.
Accessing the Snapshot Management Options from the Tools Menu
To access the snapshot management options:
1. In the menu bar, select Tools > Snapshots.
The Snapshots page opens, with a list of snapshots defined in the system.
2. Select a snapshot from the list.
The following buttons are enabled for performing actions on the snapshot: Edit, Delete, View, and Export.
Note: If you have many snapshots defined in the system, you can filter the snapshot list according to the name and/or description of the
snapshot. Typing more characters in the Name and Description fields of the filter narrows down the list of selectable snapshots.
Accessing the Snapshot Management Options from the Log Viewer
To access the snapshot management options from the Log Viewer:
1. In the left pane, open the Snapshots menu, and in the right-click menu of a snapshot, click View.
The snapshot is displayed in the Log Viewer.
2. Right-click a snapshot record.
The right-click menu displays the actions that can be performed on the snapshot and on the snapshot record: Copy, Find, Filter, Filter
date in, Search, Copy Snapshot Link, Remove from Snapshot, Customize, Export, and Print. The actions Open a bug in Bugzilla
and Publish an issue in JIRA are also available, provided that they have been configured in Settings > Bug Tracking Systems.
Viewing a Snapshot
You can view the contents of any snapshot in the system, either from the Tools > Snapshots menu, or from the right-click menu of a snapshot
under the Snapshots menu in the left pane. A snapshot that contains log records from more than one log, contains a Security record.
Viewing Snapshot Contents
To view the contents of a snapshot:
In the left pane, under the Snapshots menu, right-click a snapshot, and in the menu that appears, click View
OR
Under the Snapshots menu, click a snapshot
OR
In the Tools > Snapshots menu, select a Snapshot and then click the View button.
The snapshot is displayed in the Log Viewer.
Editing a Snapshot Definition
You can rename a snapshot or modify its definition, either from the Tools > Snapshots menu bar item, or from the right-click menu of a snapshot
under the Snapshots menu in the left pane.
Editing a Snapshot Definition
To edit a snapshot:
1. In the left pane, under the Snapshots menu, right-click a snapshot, and in the menu that appears, click Edit
OR
In the Tools > Snapshots menu, select a Snapshot and then click the Edit button.
The Edit Snapshot dialog box opens.
2. In the Edit Snapshot dialog box, modify the Name and/or Description, and then click the Apply button.
The snapshot definition is updated.
Removing a Snapshot
You can remove from the system any snapshot that you no longer require, either from the Tools > Snapshots menu, or from the right-click
menu of a snapshot under the Snapshots menu in the left pane.
To remove a snapshot:
1. In the left pane, under the Snapshots menu, right-click a snapshot, and in the menu that appears, click Remove
OR
In the Tools > Snapshots menu, select a Snapshot and then click the Delete button.
A delete confirmation dialog box opens.
2. Click Yes.
The snapshot is removed from the system and no longer appears under the Snapshots menu.
Exporting a Snapshot
XpoLog enables you to export any snapshot defined in XpoLog after transforming it into another format (XML, CSV, Tab Delimited, Raw data, or
SQL Database Table).
Export can be performed either from Tools > Snapshots in the main menu, from the right-click menu of a snapshot under the Snapshots menu
in the left pane, or from the right-click menu of the snapshot displayed in the Log Viewer.
To export a snapshot:
1. In the Log Viewer left pane, under Snapshots, right-click the snapshot that you want to export, and click Export
OR
In the main menu, select Tools > Snapshots, and in the dialog box that opens, select the snapshot, and then click the Export button
OR
Open the Snapshot in the Log Viewer, and in the right-click menu, click Export.
Note: It is also possible to open the snapshot in the Log Viewer, and then proceed to export it as a regular log file (see Exporting a Log).
The Export Log dialog box is displayed.
2. Select the Only transform the current data to the following type option, and then select from the dropdown list, one of the following
types: XML, CSV, Tab Delimited, Raw data, or SQL Database Table.
3. Click Export.
If a database account is not available for the file type selected, the System Notification page opens, with the notification: No databases
accounts available. Please define accounts first. In this case, click OK, define an account, and then perform the export.
Otherwise, the zip log status page opens with the notification: The zip was created successfully, and the File Download dialog
box also opens.
4. In the File Download dialog box, click Save.
5. In the Save as dialog box that opens, select the Save in location and type the file name of the zipped file, and then click Save.
The file is downloaded to the selected location.
6. In the Zip log status page, click the back to log viewer link.
The Log Viewer opens, displaying the last log or snapshot that was loaded there (and not necessarily the one that you exported).
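The "transform the current data" step can be pictured with a minimal sketch of a records-to-CSV transformation. This is illustrative only; XpoLog performs the transformation internally, and the field names here are invented.

```python
# Illustrative sketch of transforming parsed snapshot records to CSV
# (XpoLog does this internally; this is not its implementation).
import csv
import io

records = [
    {"date": "2023-01-05 10:00:01", "level": "ERROR", "message": "disk full"},
    {"date": "2023-01-05 10:00:02", "level": "INFO",  "message": "retrying"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "level", "message"])
writer.writeheader()       # first row: column names
writer.writerows(records)  # one CSV row per parsed log record

csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # date,level,message
```

The XML, Tab Delimited, and SQL Database Table options follow the same idea: the parsed fields of each record become columns in the target format.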
Copying Snapshot Record(s)
You can copy all the records in a snapshot to your clipboard. You can also copy a single or multiple snapshot record(s) from the right-click menu
of the selected snapshot record(s) in the Log Viewer.
After copying, you can paste the copied snapshot contents in any document, email, and more, using standard Paste methods (i.e. by right-clicking
the mouse and then selecting Paste, or by pressing Ctrl+V on your keyboard).
Copying Entire Snapshot Content
To copy a snapshot's contents:
In the left pane, under the Snapshots menu, right-click a snapshot, and in the menu that appears, click Copy to Clipboard.
The snapshot's records are copied to the clipboard.
Copying Selected Snapshot Record(s)
To copy a snapshot's record(s):
1. Open a snapshot in the Log Viewer (see Viewing a Snapshot), and select a single record or multiple records (see Selecting Multiple Log
Viewer Records).
2. Right-click the record(s), and in the menu that appears, click Copy.
The selected snapshot record(s) are copied to the clipboard.
Copying a Snapshot Link
You can copy the link to any snapshot from the Snapshots menu in the left panel or from the snapshot records displayed in the Log Viewer.
You can then paste the copied snapshot link into your browser, using standard Paste methods (i.e. by right-clicking the mouse and then selecting
Paste, or by pressing Ctrl+V on your keyboard), and navigate to the Log Viewer to view the snapshot.
To copy the link to a snapshot:
1. In the left panel, under Snapshots, do one of the following:
Right-click a snapshot, and in the menu that appears, click Copy snapshot link
OR
Select a snapshot, right-click in the snapshot records that appear in the Log Viewer, and in the menu that appears, click Copy snapshot
link.
The snapshot link is copied to your clipboard.
Removing Record(s) From a Snapshot
You can remove a single or multiple records from a snapshot.
To remove record(s) from a snapshot:
1. Open a snapshot in the Log Viewer (see Viewing a Snapshot), and select a single record or multiple records (see Selecting Multiple Log
Viewer Records) that you want to remove from the snapshot.
2. Right-click the record(s), and in the menu that appears, click Remove from Snapshot.
The selected record(s) are removed from the snapshot.
Sending Snapshot Bugs to a Bug Tracking System
XpoLog provides integration to several bug tracking systems, such as Bugzilla by Mozilla, and JIRA by Atlassian.
XpoLog enables you to submit a bug or issue from a selected snapshot, or single or multiple records in a snapshot, directly into a bug tracking
system, provided that you have defined the bug tracking system in Settings > Bug Tracking Systems.
Opening a Bug in Bugzilla
You can investigate a bug that you see in a snapshot, by sending the entire snapshot, or a single or multiple records, to the Bugzilla bug tracking
system.
To open a bug in Bugzilla based on a snapshot:
1. In the left pane, under the Snapshots menu, right-click a snapshot, and click Open a bug in Bugzilla.
The Add a Bug notification box appears to inform you that it is loading information into Bugzilla, and then the Bugzilla Login page
appears.
2. In the Bugzilla Login page, type your Username and Password, and click Login.
Bugzilla opens, and investigates the bug loaded from the snapshot.
To open a bug in Bugzilla based on snapshot record(s):
1. In the Log Viewer, select a single snapshot record, or multiple records (see Selecting Multiple Log Viewer Records), and then right-click.
2. In the right-click menu that appears, click Open a bug in Bugzilla.
The Add a Bug notification box appears to inform you that it is loading information into Bugzilla, and then the Bugzilla Login page
appears.
3. In the Bugzilla Login page, type your Username and Password, and click Login.
Bugzilla opens, and investigates the bug loaded from the snapshot record(s).
Publishing an Issue in JIRA
You can investigate an issue that you see in a snapshot, by sending the entire snapshot, or a single or multiple records to the JIRA issue tracking
system.
To publish an issue in JIRA based on a snapshot:
1. In the left pane, under the Snapshots menu, right-click a snapshot, and click Publish an issue in JIRA.
The Add a Bug notification box appears to inform you that it is loading information into JIRA, and then the JIRA Login page appears.
2. In the JIRA Login page, type your Username and Password, and click Login.
JIRA opens, and investigates the issue loaded from the snapshot.
To publish an issue in JIRA based on snapshot record(s):
1. In the Log Viewer, select a snapshot record, or multiple snapshot records (see Selecting Multiple Log Viewer Records), and then
right-click.
2. In the right-click menu, click Publish an issue in JIRA.
The Add a Bug notification box appears to inform you that it is loading information into JIRA, and then the JIRA Login page appears.
3. In the JIRA Login page, type your Username and Password, and click Login.
JIRA opens, and investigates the bug loaded from the snapshot record(s).
Searching for a Snapshot Record Field Value in a Search Engine
Same as Searching for a Log Record Field Value in a Search Engine.
Printing Snapshot Record(s)
You can print in tabular format all or selected snapshot records currently displayed in the Log Viewer. See Printing the Log Viewer Records.
Troubleshooting XpoLog
The following tools can help you troubleshoot XpoLog to identify problems with its installation, configuration, and operational
status.
XpoLog System Status console – XpoLog contains an internal monitoring mechanism, which sends different alerts to the system
administrator(s) on the system''s health. There are several sections which present the current statuses of managed data, utilized storage,
memory, and more.
XpoLog Support portal – XpoLog contains a comprehensive portal that helps administrators view system logs, manage information, and
fine-tune XpoLog.
XpoLog System logs - Information regarding XpoLog logs, including all system back-end information as well as user activity (audit).
Common Scenarios - Information regarding scenarios to check when XpoLog is not reachable.
Rollback XpoLog - Information regarding how to roll back XpoLog to an earlier configuration phase.
XpoLog System Status console
The XpoLog Center System Status console contains several sections, which monitor and report on the system health.
The system report does not necessarily indicate problems that must be addressed; rather, it gives a general indication of the system behavior
at a given time. The purpose of the console is to help XpoLog Administrators review summaries of XpoLog activity, memory, disk
utilization, data, network connectivity, configurations, etc.
An individual status containing these sections is available for each node in the cluster. In each section's settings, it is possible to enable, disable,
or modify the monitoring thresholds.
To open the XpoLog Center System Status console:
1. Select the XpoLog tab. The XpoLog Manager opens.
2. In the Tools menu item, select System Status.
The XpoLog Center System Status console opens, showing the Overview information.
3. Browse through the different sections presented on the left to see the latest information. To customize the settings, thresholds, and
alerts of a section, click the 'edit' link next to its title.
Overview
The Overview section presents a summary of all section statuses – data, system tasks, memory usage, disk space usage, storage response time,
network, configuration, and general system information.
In addition, it presents a summary of the latest alerts which were sent by the system to report issues that have to be addressed.
To open the Overview section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Overview.
The Overview section of the System Status console is displayed.
Data
The Data section presents data statistics that may be viewed by several dimensions: Folders, Logs, Applications, Servers and Collection Policies.
For each dimension, the Data section presents Total data, Today's data, Last Hour's data, Daily Average data, Daily Maximum data, the last
collection execution, and a summary of data monitoring rules.
The monitoring rules cover several aspects related to the data management and collection of XpoLog:
- Alerts on data volumes that exceed defined limits within a period of time
- Alerts on sources from which no data was collected within a period of time
The monitoring rules may be applied to Folders, individual Logs, Applications, Servers, and Collection Policies. It is also possible to define
separate rules for Business Hours and Non-Business Hours.
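The two rule types can be sketched conceptually. This is illustrative only; the thresholds, source names, and field names are invented.

```python
# Conceptual sketch of the two data-monitoring rule types
# (illustrative only; thresholds and names are invented).
import datetime as dt

now = dt.datetime(2023, 1, 5, 12, 0)

sources = {
    # bytes collected in the last hour, and time of the last collection
    "app-server-1": {"volume_last_hour": 5_000_000_000,
                     "last_collected": now - dt.timedelta(minutes=10)},
    "app-server-2": {"volume_last_hour": 1_000,
                     "last_collected": now - dt.timedelta(hours=6)},
}

VOLUME_LIMIT = 1_000_000_000           # alert if more than ~1 GB in an hour
SILENCE_LIMIT = dt.timedelta(hours=2)  # alert if no data collected for 2 hours

# Rule type 1: data volume exceeds the defined limit in the period.
volume_alerts = [s for s, info in sources.items()
                 if info["volume_last_hour"] > VOLUME_LIMIT]

# Rule type 2: no data collected from the source within the period.
silence_alerts = [s for s, info in sources.items()
                  if now - info["last_collected"] > SILENCE_LIMIT]

print(volume_alerts)   # ['app-server-1']
print(silence_alerts)  # ['app-server-2']
```

In XpoLog, equivalent rules are configured per Folder, Log, Application, Server, or Collection Policy, optionally with different limits for business and non-business hours.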
In addition, XpoLog monitors the accuracy of the configuration applied to the collected data - parsing, time zones, etc.
Administration Section Options (presented on 'edit'):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating
results
Properties
Configure the statuses on which alerts are processed
Thresholds
Configure the Thresholds that will determine the status of data processing based on the current license volume utilization
Rules
Configure alerting rules based on servers/applications/logs/folders, and the constraints that will be monitored for business hours and
non-business hours
Data Configuration
The data section also verifies that the logs configuration applied in XpoLog is valid. For example, if there are parsing problems in a log,
time zone differences, etc., they are indicated in this section. It is recommended, but not mandatory, to review the indicated log's configuration to
ensure it is configured properly. Customize the default configuration monitoring parameters if needed.
Alerting Policy
Configure whether Email alerts and/or SNMP traps should be sent, and whether a positive alert should also be sent when the status changes
back after an alert. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Data): Abnormal data behavior for [XPLG_RULE_NAME]
([XPLG_SOURCE])
XpoLog sends this alert when a data issue is detected. [XPLG_RULE_NAME] = the rule name of the data issue; [XPLG_SOURCE] = the
source of the data issue.
To open the Data section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Data.
The Data section of the System Status console is displayed.
System Tasks
The System Tasks section presents statistics on all the tasks that XpoLog performs and their average execution time.
Administration Section Options (presented on ''edit''):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
Configure the statuses on which alerts are processed
Thresholds
Configure the Thresholds that will determine the status of different activities execution time
Alerting Policy
Configure whether Email alerts and/or SNMP traps should be sent, and whether a positive alert should also be sent when the status changes
back after an alert. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Performance): Performance issue on
[XPLG_PERFORMANCE_INFO]
XpoLog sends this alert when a performance issue is detected. [XPLG_PERFORMANCE_INFO] = details about the detected problem.
Common reasons / actions: Contact XpoLog support for further investigation.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Performance): Scan directory performance issue on directory
[XPLG_OBJECT_ID]
XpoLog sends this alert when there are issues while scanning a directory for log detection. [XPLG_OBJECT_ID] = the directory in which
XpoLog encountered problems while scanning.
Common reasons / actions: Slow connectivity to the remote machine, or a large number of files/subdirectories causing slowness.
It is recommended to use the include/exclude and subdirectories limitations in the scan directory wizard's advanced settings. You are welcome
to contact XpoLog support for further investigation.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (System Tasks): The system task named [XPLG_JOB_NAME] is
running slowly
XpoLog sends this alert when slowness of a system task is detected. [XPLG_JOB_NAME] = the name of the system task.
Common reasons / actions: Contact XpoLog support for further investigation.
To open the System Tasks section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click System Tasks.
The System Tasks section of the System Status console is displayed.
Listeners
The Listeners section presents general information on all the configured listeners and their statuses.
Administration Section Options (presented on ''edit''):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
Configure the statuses on which alerts are processed
Thresholds
Configure the Thresholds that will determine the status of different activities execution time
Alerting Policy
Configure whether Email alerts and/or SNMP traps should be sent, and whether a positive alert should also be sent when the status changes
back after an alert. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Listeners): Listener [XPLG_LISTENER_NAME] is down
XpoLog sends this alert when a listener is not running as expected. [XPLG_LISTENER_NAME] = the name of the listener.
Common reasons / actions: Contact XpoLog support for further investigation.
To open the Listeners section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Listeners.
The Listeners section of the System Status console is displayed.
Memory Usage
The Memory Usage section presents the system''s memory utilization over time.
Administration Section Options (presented on ''edit''):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
Configure the statuses on which alerts are processed
Thresholds
Configure the Thresholds that will determine the status based on memory consumption level
Alerting Policy
Configure whether Email alerts and/or SNMP Traps should be sent and after an alert is sent, when status changes if a positive alert should besent as well. By default alerts are sent to the system administrator which can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Memory): High Memory [XPLG_ALERT_SUBJECT]
XpoLog sends this alert when high memory consumption is detected, which may prevent XpoLog from functioning well.
[XPLG_ALERT_SUBJECT] = details about the memory consumption.
Common reasons / actions: insufficient memory allocation to XpoLog. 32-bit installations can be allocated a maximum of 1.5
GB of memory; 64-bit installations should be allocated 75% of the machine's available memory. The memory allocation is done in the file XpoLog.lax
(Windows) or XpoLog.sh.lax (Linux), which is placed inside the installation directory of XpoLog. The default allocation is 1024 MB
(-Xmx1024m) and should be changed accordingly. A restart is required after the change is applied. For additional information, contact
XpoLog support.
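The allocation change described above can be sketched as follows. The file name, pattern, and target heap size are examples that assume the default -Xmx1024m value is present; the snippet operates on a stand-in copy, whereas in practice you would edit the .lax file in the XpoLog installation directory and restart the service:

```shell
# Sketch: raise the JVM max heap in XpoLog.lax (names and values are examples).
lax_file="$(mktemp)"
printf 'lax.nl.java.option.additional=-Xmx1024m\n' > "$lax_file"
# Example target: a 64-bit machine with 16 GB RAM -> allocate ~12 GB (75%).
sed -i 's/-Xmx1024m/-Xmx12g/' "$lax_file"
grep -o 'Xmx[0-9]*[gm]' "$lax_file"
```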
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Memory): High Memory [XPLG_ALERT_SUBJECT]
Resolved
XpoLog sends this alert in case a memory issue is resolved. It is sent after a system alert notifying about a memory problem.
[XPLG_ALERT_SUBJECT] = details about the memory consumption.
To open the Memory Usage section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Memory Usage.
The Memory Usage section of the System Status console is displayed.
Disk Space Usage
The Disk Space Usage section presents the current usage of all storage devices, which XpoLog uses (installation, data, configuration, etc.)
Administration Section Options (presented on 'edit'):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
Configure on which status alerts will be processed
Thresholds
Configure the Thresholds that will determine the status of different disk space utilization
Alerting Policy
Configure whether Email alerts and/or SNMP Traps should be sent and, after an alert is sent, whether a positive alert should be sent as
well when the status changes. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Disk Space Usage): High disk space usage on
[XPLG_DISK_SPACE_PATH]
XpoLog sends this alert when a high disk space issue is detected. [XPLG_DISK_SPACE_PATH] = the storage device which ran out of
space.
Common reasons / actions: Not enough storage is allocated to XpoLog. It is very important to free space for XpoLog; otherwise, the
software will stop working.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Disk Space Usage): Critical disk space usage on
[XPLG_DISK_SPACE_PATH]
XpoLog sends this alert when a very high disk space issue is detected and jeopardizes the system activity. [XPLG_DISK_SPACE_PATH]
= the storage device which ran out of space.
Common reasons / actions: Not enough storage is allocated to XpoLog. It is very important to free space for XpoLog; otherwise, the
software will stop working.
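To verify the condition behind these alerts, you can inspect free space on the storage XpoLog uses directly on the machine. The path below is an example; use the actual XpoLog data or installation path:

```shell
# Sketch: disk usage for the filesystem holding the XpoLog storage ("/" is an example).
df -h /
# Extract just the use% figure (second line, fifth column of POSIX -P output):
usage=$(df -P / | awk 'NR==2 {print $5}')
echo "usage: $usage"
```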
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Disk Space Usage): High disk space usage on
[XPLG_DISK_SPACE_PATH] resolved.
XpoLog sends this alert in case a storage issue is resolved. It is sent following a system alert notifying that the allocated storage is filled
up. [XPLG_DISK_SPACE_PATH] = the storage device that was alerted and is now resolved.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Disk Space Usage): High data storage usage on
[XPLG_DATA_VOLUME_NAME] resolved
XpoLog sends this alert in case a storage issue is resolved. It is sent following a system alert notifying that the allocated storage is filled
up. [XPLG_DATA_VOLUME_NAME] = the XpoLog storage that was alerted and is now resolved.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Disk Space Usage): Critical data storage usage on
[XPLG_DATA_VOLUME_NAME]
XpoLog sends this alert when XpoLog is about to fill its allocated storage limitation. [XPLG_DATA_VOLUME_NAME] = the XpoLog
storage that ran out of space.
Common reasons / actions: Not enough storage is allocated to XpoLog. It is very important to free space / allocate more storage to
XpoLog; otherwise, the software will stop working when it reaches this limitation. Allocation is done in Collection Policies under
XpoLog>Settings>Log Collection Policies.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Disk Space Usage): High storage usage on
[XPLG_DISK_SPACE_PATH] resolved
XpoLog sends this alert in case a storage issue is resolved. It is sent following a system alert notifying that the allocated storage is filled
up. [XPLG_DISK_SPACE_PATH] = the storage device that was alerted and is now resolved.
To open the Disk Space Usage section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Disk Space Usage.
The Disk Space Usage section of the System Status console is displayed.
Storage Response Time
The Storage Response Time section presents the time that it takes XpoLog to reach out to its allocated storage devices.
Administration Section Options (presented on 'edit'):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
Configure on which status alerts will be processed
Thresholds
Configure the Thresholds that will determine the status of different response times of the used storage(s)
Alerting Policy
Configure whether Email alerts and/or SNMP Traps should be sent and, after an alert is sent, whether a positive alert should be sent as
well when the status changes. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Storage Response Time): slow load time of index status of log
[XPLG_OBJECT_NAME] ([XPLG_OBJECT_ID])
XpoLog sends this alert when the loading time of an index status is taking longer than expected. You can see in the alert more details
about the expected and actual time. [XPLG_OBJECT_NAME] = log name; [XPLG_OBJECT_ID] = log ID
Common reasons / actions: slowness of the XpoLog storage.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Storage Response Time): slow load time of log
[XPLG_OBJECT_NAME] ([XPLG_OBJECT_ID])
XpoLog sends this alert when the loading time of a log is taking longer than expected. You can see in the alert more details about the
expected and actual time. [XPLG_OBJECT_NAME] = log name; [XPLG_OBJECT_ID] = log ID
Common reasons / actions: slowness of the XpoLog storage.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Storage Response Time): slow load time of Analytics result of log
[XPLG_OBJECT_NAME] ([XPLG_OBJECT_ID])
XpoLog sends this alert when the loading time of the Analytics results is taking longer than expected. You can see in the alert more
details about the expected and actual time. [XPLG_OBJECT_NAME] = log name; [XPLG_OBJECT_ID] = log ID
Common reasons / actions: slowness of the XpoLog storage.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Storage Response Time): Slow response time on
[XPLG_DISK_SPACE_PATH]
XpoLog sends this alert when it encounters a slow response time from a storage device. [XPLG_DISK_SPACE_PATH] = the storage
device to/from which XpoLog encounters slowness.
Common reasons / actions: load on the storage / XpoLog machine, slow connectivity to the storage.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Storage Response Time): Slow response time on
[XPLG_DISK_SPACE_PATH] resolved
XpoLog sends this alert when slowness to a storage device is resolved. It is sent following a system alert notifying of
slowness. [XPLG_DISK_SPACE_PATH] = the storage device to/from which XpoLog encountered slowness.
To open the Storage Response Time section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Storage Response Time.
The Storage Response Time section of the System Status console is displayed.
Network
The Network section presents general information on the remote data sources with which XpoLog interacts, as well as data sources that
cannot be reached or have slow connectivity, which may impact the system.
Administration Section Options (presented on 'edit'):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
Configure on which status alerts will be processed
Thresholds
Configure the Thresholds that will determine the status of different connectivity time to remote sources
Alerting Policy
Configure whether Email alerts and/or SNMP Traps should be sent and, after an alert is sent, whether a positive alert should be sent as well when the status changes. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Network): Connection problem between
[XPLG_MACHINE_NAME_UID] and [XPLG_HOST]
XpoLog sends this alert when it is unable to connect to a remote machine. [XPLG_MACHINE_NAME_UID] = the XpoLog instance which
tried to establish a connection. [XPLG_HOST] = the remote machine which XpoLog fails to connect to.
Common reasons / actions: Remote machine is not active, network problem, security constraints which block the connection.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Network): Connection Problem between
[XPLG_MACHINE_NAME_UID] and [XPLG_HOST] resolved
XpoLog sends this alert when a connection problem between XpoLog and a remote machine is resolved. It is sent following a system
alert notifying of a connectivity failure.
[XPLG_MACHINE_NAME_UID] = the XpoLog instance which tried to establish a connection. [XPLG_HOST] = the remote machine which
XpoLog failed to connect to.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Network): Remote XpoLog connection problem to [XPLG_HOST]
XpoLog sends this alert when there is a connectivity problem to a remote XpoLog instance. [XPLG_HOST] = the remote XpoLog to which
the centralized XpoLog is unable to connect.
Common reasons / actions: Remote XpoLog is down, FW constraints which block the connection, incompatible ports usage when trying
to connect to a remote XpoLog, and usage of username/password in case the remote XpoLog is with security activated.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Network): Remote XpoLog connection problem to
[XPLG_HOST] resolved
XpoLog sends this alert when a connection problem between XpoLog and a remote XpoLog is resolved. It is sent following a system alert
notifying of the connectivity failure. [XPLG_HOST] = the remote XpoLog to which the centralized XpoLog is unable to connect.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Network): SSH connection problem to [XPLG_HOST]
XpoLog sends this alert when there is a connectivity problem over SSH to a remote machine. [XPLG_HOST] = the remote machine which
XpoLog fails to connect to over SSH.
Common reasons / actions: Connectivity credentials are not valid, remote machine is down, FW constraints that block the connection,
incompatible ports usage when trying to connect to a remote machine over SSH. It is recommended to open an SSH terminal directly
from the XpoLog machine to the remote machine using the exact same details, and then verify connectivity. It is also recommended to try
configuring the SSH account to use SCP instead of the default SFTP protocol (under Tools > Address Book, edit the SSH account, and
set SCP in the advanced section).
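The recommended check can be run from a terminal on the XpoLog machine. The host and user below are hypothetical placeholders; substitute the exact details stored in the Address Book SSH account:

```shell
# Sketch: test SSH connectivity with the same account details stored in the
# Address Book (monitor@logs.example.com is a hypothetical account/host).
result=$(ssh -o BatchMode=yes -o ConnectTimeout=5 \
         monitor@logs.example.com 'echo ok' 2>/dev/null || echo "failed")
echo "SSH check: $result"   # "ok" means connectivity works; otherwise check
                            # credentials, firewall rules, and the SSH port
```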
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Network): SSH connection problem to [XPLG_HOST]
resolved
XpoLog sends this alert when a connection problem between XpoLog to a remote machine over SSH is resolved. It is sent following a
system alert notifying of the connectivity failure. [XPLG_HOST] = the remote machine which XpoLog was unable to connect to over SSH.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Network): Windows Network connection problem to [XPLG_HOST]
XpoLog sends this alert when there is a connectivity problem to a remote Windows machine. [XPLG_HOST] = the remote Windows
machine which XpoLog fails to connect to.
Common reasons / actions: Connectivity credentials are not valid, remote machine is down, FW constraints that block the connection. It
is recommended to save a service account on the XpoLog service in the Windows Services console, with the required permissions to
connect to and read the logs from a remote machine. In addition, log in to the XpoLog machine using the same user which runs the
XpoLog service and open a Windows file explorer to the remote location to verify connectivity.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (Network): Windows Network connection problem to
[XPLG_HOST] resolved
XpoLog sends this alert when a connection problem between XpoLog and a remote Windows machine is resolved. It is sent following a
system alert notifying of the connectivity failure. [XPLG_HOST] = the remote Windows machine which XpoLog was unable to connect to.
To open the Network section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Network.
The Network section of the System Status console is displayed.
Configuration
The Configuration section presents detailed information on all the configuration that XpoLog manages and whether
there are issues to be addressed.
Administration Section Options (presented on 'edit'):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating
results
Properties
Configure the system configuration backup parameters
Alerting Policy
Configure whether Email alerts and/or SNMP Traps should be sent and, after an alert is sent, whether a positive alert
should be sent as well when the status changes. By default, alerts are sent to the system administrator; this can be
customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (Configuration): Failed to collect data from
[XPLG_COLLECTION_INFO]
XpoLog sends this alert when it fails to collect data from a log. [XPLG_COLLECTION_INFO] = information
about the collector that failed.
Common reasons / actions: Connectivity problem to the remote server, source file(s) do not exist.
To open the Configuration section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click Configuration.
The Configuration section of the System Status console is displayed.
System Information
The System Information section presents general information on the XpoLog installation, versions, license, hardware, and allocated resources.
Administration Section Options (presented on 'edit'):
General
Configure whether this section is enabled or disabled, whether alerts should be added, and the execution interval for calculating results
Properties
N/A
Alerting Policy
Configure whether Email alerts and/or SNMP Traps should be sent and, after an alert is sent, whether a positive alert should be sent as
well when the status changes. By default, alerts are sent to the system administrator; this can be customized in this section to specific recipient(s)
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Section Alerts:
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (System Information): Conflict between installation and
configuration versions
XpoLog sends this alert when the installation version of XpoLog is different from the configuration version, as this may cause issues with
the software.
Common reasons / actions: Not all nodes in XpoLog cluster were updated; a configuration migration to a different XpoLog deployment.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (System Information): Conflict between installation and
configuration versions resolved
XpoLog sends this alert in case a version conflict is resolved. It is sent following a system alert notifying of a version conflict.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (System Information): Current Java version is not optimal
This alert is sent in case the JAVA version is earlier than JAVA 1.7, which is the recommended version. Features that do not work on
versions earlier than JAVA 1.7 are: Hadoop HDFS Support, Disk space monitoring, and some performance optimizations which are
available only with JAVA 1.7.
Common reasons / actions: Current XpoLog version uses JAVA 1.5 or JAVA 1.6. In this case, contact XpoLog support team to update
to JAVA 1.7.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (System Information): XpoLog Center processor node has changed
to [XPLG_MACHINE_NAME]
XpoLog sends this alert when the processor node has been changed from the defined one to [XPLG_MACHINE_NAME].
[XPLG_MACHINE_NAME] = the new host that now serves as the processor node.
Common reasons / actions: Processor node is down or has lost connectivity to the allocated storage. Check the processor node to
ensure it is up and running and connected to the storage.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] Positive System Alert (System Information): XpoLog Center processor node has
changed back to [XPLG_MACHINE_NAME]
XpoLog sends this alert when the processor node has recovered and has resumed its role as the processor node. This alert is
sent following an alert that notified that the processor node is not functioning as the processor.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (System Information): Current limit of open files is too low
XpoLog sends this alert on Linux installations only, where the number of allowed open files is too low. This limitation is critical to XpoLog
functionality and should be changed immediately.
Common reasons / actions: See Post Installation Recommendations.
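On Linux, the current limit can be checked and a persistent raise sketched as follows; the user name and limit values below are examples:

```shell
# Sketch: inspect the open-files limit relevant to this alert (Linux).
current=$(ulimit -n)
echo "current open-files soft limit: $current"
# A persistent raise is typically done in /etc/security/limits.conf, e.g.:
#   xpolog  soft  nofile  65536
#   xpolog  hard  nofile  65536
# (assumes the XpoLog service runs as user "xpolog"; a service restart is
# required for the new limit to take effect)
```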
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (License): Allowed data volume exceeded
XpoLog sends this alert when it reaches the licensed volume limitation.
Common reasons / actions: XpoLog needs to process more data than the license allows. Contact XpoLog support to expand
your license.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (License): XpoLog Center license will expire in [XPLG_DAYS] days
XpoLog sends this alert to notify that the license is about to expire. [XPLG_DAYS] = days left until the software is deactivated.
Common reasons / actions: XpoLog license should be updated. Contact XpoLog support to extend your license.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (License): XpoLog Center license is not valid
The XpoLog license is not valid, and it is not possible to use the software. Contact XpoLog support to renew/activate your updated license.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (License): Allowed number of logs was reached
XpoLog sends this alert when it reaches the maximum number of allowed logs in your license. Contact XpoLog support to expand your
license.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (License): Server key is invalid
XpoLog sends this alert when the server key which is used in the license is not valid. XpoLog stops working until this issue is resolved.
Common reasons / actions: Contact XpoLog support to resolve the license issue.
XpoLog [XPLG_MACHINE_NAME_FULL_UID] System Alert (License): Server key is valid
XpoLog sends this alert in case a server key license issue is resolved. It is sent following a system alert notifying that the server key is
not valid as a verification that XpoLog continues to run as usual.
To open the System Information section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click System Information.
The System Information section of the System Status console is displayed.
System Alerts
The System Alerts section centralizes all the alerts that were sent by the System Status monitoring mechanism for the administrator's reference.
To open the System Alerts section of the XpoLog Center System Status console:
In the System Status console left navigation pane, click System Alerts.
The System Alerts section of the System Status console is displayed.
System Alerts Settings
General
Configure how long to keep alerts in the system and the default time frame when loading this section
Alerting Policy
Set the system administrator's email address and whether to send general system alerts by email and/or SNMP traps
Exceptions
Presents exceptions that are marked on specific alerts, if any exist
Note: In order to send Email alerts, email settings must be configured and validated. In order to send SNMP traps, an SNMP system account must
be configured and validated.
XpoLog Support Portal
The XpoLog support portal is accessible via the XpoLog > Settings > About menu item, by clicking the Open XpoLog Center Support Portal link.
Only users associated with the Administrators group can open the portal. The portal enables system administrators to view system logs, change
the logging level, review the general configuration that XpoLog uses, track real-time activity, and manage all the data that XpoLog stores. This
information may be viewed in several sections, by selecting one of the following options from the drop-down list: Basic Information, Activity Information,
Data Information, Site Statistics, Actions, and Advanced Settings.
Use of this portal should be permitted only to trained system administrators.
In case XpoLog runs with several cluster nodes, it is possible to view/manage each node's information separately by selecting the required node
in the combo-box at the top of the screen, or using the default option that includes all information from all nodes.
To open the XpoLog Support portal:
1. Click the XpoLog tab.
The XpoLog Manager opens.
2. Click the Settings > About menu item.
The About XpoLog Center console opens.
3. Click the Open XpoLog Center Support Portal link.
The XpoLog Support portal opens.
Basic Information
The Basic Information section of the XpoLog Support portal includes two tabs:
System Logs – Displays a table of all system logs and their general information, including Name, Size, Last Modified, Number of Files,
and Logging Level. In this tab, Administrators can view a log, change its logging level, and export a log by selecting the log in the table
and clicking the relevant button (View, Export, Export Light, or Change Logging Level). It is also possible to add a system log (by
clicking the Add XpoLog System Logs button), export all system logs (by clicking the Export All Information button), or export light all
information (by clicking the Export All Light button).
System Information – General information on XpoLog such as version and build, time zone, the JAVA version in use, and more.
To open the Basic Information section:
In the XpoLog Support portal, in the header bar dropdown list, select Basic Information.
The Basic Information section opens.
Activity Information
The Activity Information section of the XpoLog Support portal includes five tabs:
Processes – Displays a table of all active processes in XpoLog (indexing, reports, dashboard analysis, monitors, etc.).
Administrators can stop a process during its operation by selecting the process, and then clicking the Stop button.
Note: Stopping a process during its operation might affect users that expect different results.
Threads – Displays a table of all active threads and their stack traces (JAVA 1.5+) in XpoLog. Administrators can interrupt a thread
during its operation by selecting it and clicking the Interrupt button.
Note: Interrupting a thread during its operation might affect users that expect different results.
SSH Connections – Displays a table of all active SSH connections in XpoLog. Administrators can terminate a connection during its
operation by selecting it and clicking the Terminate button.
Note: Terminating a connection during its operation might affect users that expect different results.
Jobs – Displays a table of all active jobs and their statuses. Administrators can stop a job during its operation by selecting the job, and
then clicking the Stop button.
Note: Stopping a job during its operation might affect users that expect different results.
HTTP Sessions – Displays an HTTP Sessions table, which presents all open clients (browsers) to XpoLog. Administrators can destroy a
session by selecting it and clicking the Destroy button.
Note: Destroying a session might affect users that expect different results.
To open the Activity Information section:
In the XpoLog Support portal, in the header bar drop down list, select Activity Information.
The Activity Information section opens.
Data Information
The Data Information section of the XpoLog Support portal includes six tabs:
Indexing – Displays a table of all logs and their index status and details. Administrators can delete a log’s index or re-index it by selecting
the log and clicking the Delete or Re-index button, respectively.
Monitors – Displays a table of all monitors and their details. Administrators can delete a monitor or reset its reference by selecting the
monitor and clicking the Delete Monitor or Reset Monitor button, respectively.
Analytics Logs – Displays an Analytics table of all logs analysis details. Administrators can delete a log analysis by selecting it, and then
clicking the Delete Data button.
Note: Deleting a log analysis might affect users that expect different results.
Analytics Hosts – Displays an Analytics table of all hosts analysis details. Administrators can delete a host analysis by selecting it, and
then clicking the Delete Data button.
To open the Data Information section:
In the XpoLog Support portal, in the header bar drop-down list, select Data Information.
The Data Information section opens.
Site Statistics
Site Statistics presents a summary of the total logs, logs volume, and index status at the Applications, Folders, and Logs levels. It is possible to
schedule the statistics report to be sent by email as HTML or CSV, periodically.
Advanced Settings
The Advanced Settings section of the XpoLog Support portal includes three tabs:
Properties – It is highly recommended NOT to change any property under Advanced Settings without consulting XpoLog
Support. Changes to properties may result in significant changes in system behavior and results.
Jobs Groups –
Resource Manager – The resource manager determines the maximum allowed number of threads that can work in parallel per operation
in XpoLog.
A restart of XpoLog may be required for changes made in the Advanced Settings section to take effect.
To open the Advanced Settings section:
In the XpoLog Support portal, in the header bar drop down list, select Advanced Settings.
The Advanced Settings section opens.
XpoLog System logs
XpoLog manages a set of log files which contain errors, events, system activity, user activity, and more.
The logs are located under the 'log' directory in the XpoLog allocated storage (by default, under the installation directory). XpoLog uses Log4J to
log its activity and errors, and it is possible to modify the Log4J properties if needed (located under
/conf/general/log4jXpolog.properties). See the XpoLog Log4J Configuration section for an example of how to change the default Log4J configuration.
In order to view an XpoLog system log, go to the support portal, select the log you would like to view, and click the View button. It is also
possible to add the system logs to be presented in XpoLog to get access to all logs directly from the console - from the support portal, Basic
Information section, click the 'Add System Logs' button and then refresh the browser (this functionality will not work if the Log4J default settings are
modified).
Following is a summary of the logs that XpoLog manages:
audit
The audit logs contain detailed information on all user activity in XpoLog. XpoLog audits all user operations, starting from signing in through
all other available operations in the system. XpoLog fully complies with IT auditing regulations by storing everything that is done in the
system by Administrators and Users - the logs containing this information may be stored for as long as needed to provide details and reports of
the usage.
system audit
The system audit logs contain detailed information on all the system's activity. All operations which are executed by the server side are logged -
data collection, indexing, monitoring executions, dashboards generation, etc.
XpoLog log
The XpoLog logs contain detailed information on all errors which XpoLog encounters.
ssh
The ssh logs contain detailed information on all SSH-related errors which XpoLog encounters while trying to establish connections, collect data, or
monitor remote sources over SSH.
scanner
The scanner logs contain detailed information on the data scanning operations that XpoLog performs, such as which sources are scanned,
the number of logs identified and added to the system, etc.
XpoLog memory
The XpoLog memory logs contain details on the memory consumption of XpoLog.
Servlet Container
The Servlet Container logs are the logs of the internal Servlet Container which runs XpoLog.
Cluster Activity
The Cluster activity logs contain detailed information on all cluster-related issues, in case multiple instances of XpoLog run as a cluster.
Data Activity
The Data activity logs contain detailed information on all the data collection and management done by XpoLog.
Ant
The Ant.out file contains information on Ant-related operations that are executed, such as deployment of an update patch on the system.
Events
The events logs contain details on all the events in XpoLog which are sent out from XpoLog to users, such as monitor alerts, exporting of
dashboards / reports, task executions, etc.
System Alerts
The System alerts logs contain details on all the alerts which XpoLog's internal monitoring mechanism sends (see more details in the XpoLog
System Status console section).
XpoLog Log4J Configuration
XpoLog system logs are located under the 'log' directory in the XpoLog allocated storage (by default, under the installation directory). XpoLog uses
Log4J to log its activity and errors, and it is possible to modify the Log4J properties if needed. The configuration is stored in the file
/conf/general/log4jXpolog.properties, but since it is the default configuration it should not be changed directly, as it will be overridden on system
updates.
If you wish to customize the default configuration, you may copy the file /conf/general/log4jXpolog.properties and create a new
file named /conf/general/log4jXpolog.user.properties (if this file exists, it takes precedence over the default one).
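The copy step can be sketched like this; the temporary directory below is a stand-in for the /conf/general/ location under your XpoLog configuration root:

```shell
# Sketch: create the user-level Log4J override so system updates do not
# overwrite your changes. conf_dir stands in for <config root>/conf/general/.
conf_dir="$(mktemp -d)"
printf 'log4j.appender.xpolog.MaxBackupIndex=10\n' > "$conf_dir/log4jXpolog.properties"
cp "$conf_dir/log4jXpolog.properties" "$conf_dir/log4jXpolog.user.properties"
# Edit log4jXpolog.user.properties; XpoLog reads it in preference to the default.
ls "$conf_dir"
```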
In general, XpoLog uses the default Apache Log4J configuration syntax. There is an appender definition for each of the system logs which defines
its pattern, rotation policy, number of files to keep, etc.
For this example, we'll use the configuration of the XpoLog Application log (xpologlog.log*):
Default Configuration:
This configuration rotates the file every 5000 KB and keeps 10 files in total, based on the pattern specified.
#Appender data for xpolog
log4j.appender.xpolog=org.apache.log4j.RollingFileAppender
log4j.appender.xpolog.File=${xpolog.root.path}log/${xpolog.machine.name.path}xpologlog.log
log4j.appender.xpolog.MaxFileSize=5000KB
log4j.appender.xpolog.MaxBackupIndex=10
log4j.appender.xpolog.layout=org.apache.log4j.PatternLayout
log4j.appender.xpolog.layout.ConversionPattern=[%d] [%t] [%p] [%c] [%l] %m%n
Modified Configuration:
If the configuration below replaces the default one, XpoLog will keep a daily file (unlimited in size) for 30 days, based on the pattern specified.
#Appender data for xpolog
log4j.appender.xpolog=xpolog.eye.util.logging.DateFormatFileAppender
log4j.appender.xpolog.File=${xpolog.root.path}log/${xpolog.machine.name.path}xpologlog.log
log4j.appender.xpolog.DatePattern=yyyy-MM-dd
log4j.appender.xpolog.MaxFilesToKeep=30
log4j.appender.xpolog.layout=org.apache.log4j.PatternLayout
log4j.appender.xpolog.layout.ConversionPattern=[%d] [%t] [%p] [%c] [%l] %m%n
Note:
Changing the Log4J configuration requires all XpoLog cluster nodes to be restarted.
After changing the default logging settings, viewing XpoLog's own logs via the XpoLog support portal may not work; the logs may
need to be redefined in XpoLog to match their modified configuration.
Make sure you take the required disk space into consideration when changing the default settings. By default, the XpoLog log
directory (per instance in a cluster) may reach approximately 1GB at full capacity.
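Before relaxing the rotation limits, it can help to check how much disk the log directory currently uses. A quick sketch; the path is an assumption, so point LOG_DIR at your instance's actual log directory.

```shell
# Report the current size of the XpoLog log directory (path is an assumption).
LOG_DIR="${XPOLOG_LOG_DIR:-/opt/xpolog/log}"
du -sh "$LOG_DIR" 2>/dev/null || echo "log directory not found: $LOG_DIR"
```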
Rollback XpoLog
XpoLog Rollback is an operation that returns the configuration to an earlier state. It may be required after an undesired configuration
change, unintended removal of an object, and so on.
XpoLog stores a daily backup of all its configuration and keeps the backups of the last 30 days.
Note: it is critical to stop all XpoLog nodes while performing a rollback.
To perform a rollback, follow the steps below.
The entire configuration is managed in the EXTERNAL_CONFIGURATION_DIRECTORY, if it exists. If not, it is found in the INSTALLATION_DIRECTORY.
1. Stop XpoLog Service
2. IMPORTANT: prior to performing a rollback, it is highly recommended to store the current configuration state of XpoLog:
a. Go to the EXTERNAL_CONFIGURATION_DIRECTORY/ and rename conf directory to conf.current
b. Go to the EXTERNAL_CONFIGURATION_DIRECTORY/collection/ and rename conf directory to conf.current
3. Go to the EXTERNAL_CONFIGURATION_DIRECTORY/temp/backups/ and unzip the desired backup file from the date you wish to roll
back to.
4. The collection and conf directories will appear. Copy the unzipped collection and conf
directories to EXTERNAL_CONFIGURATION_DIRECTORY/
5. Start XpoLog Service
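The file operations in steps 2-4 can be sketched as shell commands. This is a sketch only: it runs against a throwaway demo tree, and all paths are assumptions; in practice, EXT_CONF is your EXTERNAL_CONFIGURATION_DIRECTORY and the restored directories come from unzipping the chosen backup file.

```shell
# Sketch of the rollback file operations, run here against a throwaway demo
# tree; in practice EXT_CONF is the EXTERNAL_CONFIGURATION_DIRECTORY and the
# restore directory holds the contents of the unzipped backup. All paths are
# assumptions.
EXT_CONF=$(mktemp -d)
mkdir -p "$EXT_CONF/conf" "$EXT_CONF/collection/conf" \
         "$EXT_CONF/temp/backups/restore/conf" \
         "$EXT_CONF/temp/backups/restore/collection/conf"

# 1. Stop the XpoLog service first (all cluster nodes).

# 2. Preserve the current configuration state.
mv "$EXT_CONF/conf"            "$EXT_CONF/conf.current"
mv "$EXT_CONF/collection/conf" "$EXT_CONF/collection/conf.current"

# 3-4. Unzip the chosen backup (simulated here by the restore directory),
#      then copy the restored conf and collection directories back.
cp -r "$EXT_CONF/temp/backups/restore/conf"       "$EXT_CONF/"
cp -r "$EXT_CONF/temp/backups/restore/collection" "$EXT_CONF/"

# 5. Start the XpoLog service again.
```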
Common Scenarios
Problem
XpoLog service is not starting, or starts and terminates immediately
Solution
For various reasons, some key XML configuration files may have become damaged, which can cause such a problem. Review the steps below
for more information.
Root Cause Options:
1. Service Level
Make sure that no XpoLog service is running in the background; if there is, terminate all XpoLog-related processes and
then try to run the service again.
2. Application Server Level
Go to XPOLOG_INSTALLATION/ServletContainer/conf/ and ensure the file tomcat-users.xml is not empty. Go to
XPOLOG_INSTALLATION/ServletContainer/conf/Catalina/localhost/ and ensure the file logeye.xml is not empty.
If any of these files is empty, it means the application server used to run XpoLog cannot start - contact
support@xpolog.com with this diagnosis for further steps.
3. Application Level
Go to XPOLOG_INSTALLATION/conf/general/ and ensure the file Xpolog.xml is not empty. If it is empty, retrieve a valid
Xpolog.xml file from the backup directory XPOLOG_EXTERNAL_CONF/temp/backup/LATEST_BACKUP_FILE/conf/general/,
replace the empty file, and restart XpoLog.
4. Infrastructure Level
a. Storage
Ensure that both the XPOLOG_INSTALLATION and XPOLOG_EXTERNAL_CONF directories are not out of space. If
they are, allocate more space or advise XpoLog Support regarding other options to clear space.
b. Users Permissions (commonly seen on UNIX based deployments)
It may occur that there are permission gaps between the user that ran (or currently tries to run) XpoLog and the
user that owns the directories and files in the XPOLOG_INSTALLATION and/or XPOLOG_EXTERNAL_CONF
directories.
If that is the case, first stop XpoLog, then run the command 'chown -R XPOLOG_USER:XPOLOG_GROUP
/XPOLOG_INSTALLATION', then run the command 'chown -R XPOLOG_USER:XPOLOG_GROUP
/XPOLOG_EXTERNAL_CONF', and only then start XpoLog as the XPOLOG_USER.
c. Ports
Ensure that the ports XpoLog requires are not occupied, which would block the execution of the XpoLog service. The
most common port issue on a standalone installation of XpoLog is its shutdown port, 8095. Additional ports can be
reviewed in the System Requirements section.
If the default ports are already occupied and cannot be made available, the ports can be modified in
XPOLOG_INSTALLATION/ServletContainer/conf/server.xml.
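The checks above can be gathered into a quick diagnostic pass. A sketch only: the installation path is an assumption (substitute your real XPOLOG_INSTALLATION), and the port check covers only the default shutdown port 8095.

```shell
# Quick diagnostic pass over the root causes above. The installation path is
# an assumption; substitute your real XPOLOG_INSTALLATION directory.
INST="${XPOLOG_INSTALLATION:-/opt/xpolog}"

# 1. Service level: look for leftover XpoLog processes.
ps -ef | grep -i xpolog | grep -v grep || true

# 2-3. Application server / application level: key files must not be empty.
empty=0
for f in "$INST/ServletContainer/conf/tomcat-users.xml" \
         "$INST/ServletContainer/conf/Catalina/localhost/logeye.xml" \
         "$INST/conf/general/Xpolog.xml"; do
  [ -s "$f" ] || { echo "EMPTY OR MISSING: $f"; empty=$((empty + 1)); }
done

# 4a. Storage: check free space on the installation volume.
df -h "$INST" 2>/dev/null || true

# 4c. Ports: check whether the default shutdown port 8095 is already in use.
netstat -an 2>/dev/null | grep 8095 || true
```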
INDEX:
XPOLOG_INSTALLATION = The absolute path of the XpoLog installation directory
XPOLOG_EXTERNAL_CONF = The absolute path of the XpoLog configuration directory. To determine this path, go to XpoLog Manager >
Settings > General and check the configured path of the XpoLog Configuration Directory. If it is empty, consider this parameter identical to
XPOLOG_INSTALLATION (see value above)
XPOLOG_USER:XPOLOG_GROUP = The user and group that should be used to run XpoLog and that have the required permissions on all folders
and files in XPOLOG_INSTALLATION and XPOLOG_EXTERNAL_CONF
Getting Assistance
From the Get Help section in the left pane of the XpoLog homepage, you can get assistance in any or all of the following ways:
Contacting XpoLog by email to get help from our support team (support@xplg.com)
Visiting our Online Knowledge Base
Reading the XpoLog User Manual
Contacting XpoLog
You can send an email to XpoLog Support with any question that you have regarding XpoLog Center.
To contact XpoLog:
1. In the XpoLog homepage, in the left pane, under the Get Help section, click Contact XpoLog.
An email opens in your default email client, addressed to support@xplg.com, with the subject: Questions regarding XpoLog Center.
2. Fill in the body of the email with your question, and send the email.
Reading the XpoLog User Manual
You can read about any topic in XpoLog Center in the XpoLog User Manual.
To open the XpoLog User Manual:
In the XpoLog homepage, in the left pane, under the Get Help section, click XpoLog Center User Manual.
The XpoLog Center User Manual opens.
Visiting our Online Knowledge Base
You can visit our online knowledge base to look up information on XpoLog Center.
To visit our online knowledge base:
In the XpoLog homepage, in the left pane, under the Get Help section, click Online knowledge base.
The XpoLog knowledge base opens.
White Papers and Brochures
Resources
Please download the following white papers, brochures and data sheets for more information about XpoLog:
XpoLog Center Summary
XpoLog Center ROI
XpoLog Center Deployment
XpoLog Center Data Sheet
XpoLog Center Comparison Sheet
XpoLog Center Augmented Search
XpoLog Center Augmented Search for Webapps
XpoLog Center Augmented Search for Apps
XpoLog Center Augmented Search for IT and Application Logs Analytics
XpoLog Center Augmented Search for Operations Analytics
XpoLog Center Augmented Search for Log Management in the Cloud
XpoLog Center Augmented Search for Software and Application Testing
XpoLog Center Architecture