The logging query language is based on a data flow model. Each query can
reference one or more logs and produces a tabular dataset as a result. The query
language provides several operators for searching, filtering, and aggregating
structured and unstructured logs.
A logging query defines the scope of logs to search, followed by a chain of
operators that filter, transform, and aggregate the resulting log stream.
To begin your search, you must first define the set of logs you want to search. You
can search specific log objects, log groups, or compartments, and you can mix and
match as many logs as you need. The search scope is defined using the following
pattern:
search <log_stream> (,? <log_stream>)*
The query language fetches log entries from the scope you provide, and constructs a
log stream that you can filter, aggregate, and visualize.
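For example, a single search can reference multiple log streams, comma-separated per the pattern above (the "audit" stream name here is hypothetical; later examples in this document use a stream called "application"):
search "application", "audit"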
All fields in log streams are case-sensitive. Although indexed fields in actual
logs are lowercase only, you can create new mixed-case fields in the query:
search "..."
| select event as EventName
Fields are in JSON notation; therefore, field names that contain special characters must be enclosed in quotes.
A pipe (|) applies the operator on its right side to the stream expression on its
left side. A pipe expression is itself a stream expression, so pipes can be chained.
The operator on the right side of a pipe must consume only one stream (for example,
aggregate operations and filters).
The left side becomes the "current stream" for the right-side expression, making all
fields in the current stream available by their short names. For
example:
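A minimal sketch that chains two pipes, using the level field that appears in later examples in this document:
search "application"
| where level = 'ERROR'
| count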
A tabular operator creates or modifies a log stream by filtering out or changing log
entries. Also refer to the BNF syntax notation. The following are tabular operators:
The simplest tabular operation filters log entries by comparing a field to a value.
Some example comparisons with number and Boolean fields are the following:
| data.statusCode = 200
| data.isPar
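As a complete query, a comparison like the first one above can be combined with the where operator used throughout this document (a sketch; the value is illustrative):
search "application"
| where data.statusCode = 200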
You can perform a full text search by specifying a filter on the entire content of
the log. A search on logContent returns any log line where a value matches your
string. This functionality supports wildcards. For example:
search "application"
| where logContent = 'ERROR' -- returns log lines with a value matching "ERROR"
search "application"
| where logContent = '*ERROR*' -- returns log lines with a value containing "ERROR"
top
Fetches only a specified number of rows from the current log stream, sorted based on an expression.
<top> := top [0-9]+ by <expr>
Examples:
top 3 by datetime
top 3 by *
top 3 by (a + b)
The number of rows must be a constant positive integer, and a sorting expression must be provided.
Sorts the current log stream by the specified columns, in either ascending (default)
or descending order. The operator uses the DESC and ASC keywords to specify the type
of the order. The default sort order is asc.
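A minimal sketch of a descending sort, assuming the operator is written as sort by (the keyword itself is not shown in this section) and sorting on the datetime field used in the top examples:
search "application"
| sort by datetime desc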
Processes the current log stream by filtering out all duplicates by the specified
columns. If more than one column is specified, the columns must be delimited by
commas.
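A minimal sketch of duplicate removal, assuming the operator keyword is dedup (not named in this section) and using the level field from later examples:
search "application"
| dedup level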
rounddown(datetime, <interval>)
The last parameter is the time interval in days, hours, minutes, or seconds.
time_format(datetime, <format>)
Formats a time as a string.
concat (<expr>, <expr>)
upper (<expr>)
lower (<expr>)
substr (<expr>, [0-9]+ (, [0-9]+)?)
The second argument is the start index; the optional third argument is the number of characters to take.
isnull (<expr>)
isnotnull (<expr>)
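For example, these scalar functions can be used inside a filter; a minimal sketch, assuming function calls are allowed in a where condition:
search "application"
| where isnotnull(data.statusCode)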
Aggregate Operators 🔗
count
Calculates the number of rows in the current log stream:
search "application"
| count
>>
{"count": 6}
summarize
Groups the current log stream by the specified columns and time interval, and
aggregates using named expressions. If grouping columns are not specified,
summarize aggregates over the whole stream.
search "application"
| summarize count(impact) as impact by level, rounddown(datetime, '1m') as timestamp
Special Columns 🔗
logContent
logContent is a special column that represents the text of the whole original
message. For example:
search "application"
| where logContent = '*ERROR*' -- returns log lines with a value containing "ERROR"
Comments 🔗
Both single line and multi-line comments are supported, for example:
search "application"
| count -- this is a single line comment
/* this is a
multi-line
comment
*/
Identifiers 🔗
Identifiers are the names of all available entities in the query. An identifier can
reference a field in the current log stream or a parameter defined at the beginning
of the query. Identifiers have the following format:
name: \$?[a-zA-Z_][a-zA-Z_0-9]*
For example: level, app_severity, $level.
The quoted form allows special symbols in the names (except double quotes):
name: "[^"]+"
For example: "opc-request-id", "-level".
All parameter references start with a dollar sign ($), for example:
$level.
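For example, a quoted identifier can be used wherever a field name is expected; a minimal sketch that renames the "opc-request-id" field with the select operator shown earlier:
search "application"
| select "opc-request-id" as request_id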
Literals 🔗
Type                Examples
string              'hello', 'world\'!'
wildcard pattern    "acc-*"
integer             -1, 0, +200
float               1.2, 0.0001, 1.2e10
array               [1,2,3,4], []
interval            3h, 2m
nullable            null
Functions 🔗
Scalar functions are the following:
isnull(expr1)
concat(expr1, ...)
Aggregate functions are the following:
sum(expr1)
avg(expr1)
min(expr1)
max(expr1)
count(): Counts the number of rows.
count(expr): Counts the number of non-null expr values.
first(expr1)
last(expr1)
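For example, these aggregate functions can be used in summarize expressions; a minimal sketch, assuming data.statusCode is numeric as in the earlier comparison example:
search "application"
| summarize avg(data.statusCode) as avg_status by level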
System parameters 🔗
All parameters with the prefix "query." are reserved. The following
parameters are supported: