Writing Rule Chains
Introduction
A rule chain is a series of functions which are executed in sequence to process
incoming data from any configured listeners. The functions can be used for
various purposes: for example, to filter, enrich or modify data depending on
prevailing circumstances, or to selectively dispatch data to one or more
destinations.
Each listener is configured with a rule chain using the <chain/>
element in the listener configuration file. All data collected by a listener
will be sent to the specified chain.
A rule chain resides in a chain file, which is a plain text file named
<chain_name>.chain. The rule chain is referred to in both the listener
configuration and in any other rule chain’s code by chain_name. Chain names
must begin with a letter and can contain upper and lower case letters, digits
and underscores.
Best practice dictates that line 1 of any chain file should be a Python
source code encoding directive. After that you will almost always need
to import the rule chain API using import up, which enables you to
reference other chains and configured dispatchers and to access other
chain API features. Refer to API for writing Rule Chains for more details.
Finally you will define rules. Rules are function definitions whose name
begins with rule_:
# -*- coding: utf-8; mode: python -*-
import up

def rule_my_first_rule(event):
    ...

def rule_my_second_rule(event):
    ...
Rule Processing
To be recognised by up rule processing, a rule name must begin with
rule_; after that it may contain any upper and lower case letters,
digits and underscores. Rule names should be unique within a chain file.
Functions whose names do not begin with rule_ will be callable from any
rule but will otherwise be ignored by up rule processing.
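For example, a helper shared by several rules might look like the
minimal sketch below (the node field name is hypothetical):

def normalise_node(name):
    """Helper, not a rule: callable from rules but ignored by up."""
    return name.strip().lower()

def rule_normalise(event):
    """Use the helper to tidy the node field for later rules."""
    event.node = normalise_node(event['node'])
    return event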
Apart from being required to begin with rule_, rule names can be as simple
as you like, e.g. rule_one, rule_two, etc. However, best practice
dictates using more descriptive names, thereby easing rule maintenance in the
future.
A rule should always return a value on every possible execution path.
This is achieved using a return statement, and the value returned
by the rule determines how up rule processing continues:
return None - Returning None directs up to discontinue
rule processing of the current data item. No subsequent rules in the chain
will be executed. If the current chain was invoked from another chain (and
not directly from a listener) then control is passed back to that chain;
what happens next depends on the calling rule’s logic.
return event - Returning a value that is not None will direct
up to continue rule processing by passing the returned data item
to the next rule in the chain. This should be the event which the rule
originally received, after any modifications have been made by the rule.
If there are no more rules in the chain, then up will log a
warning message as follows:
Event fell off end of chain, possibly lost: <string representation of event>
Below is an example rule chain demonstrating the concepts discussed:
# -*- coding: utf-8; mode: python -*-
"""my_chain

This is my_chain, it's just a simple one for demonstration purposes.
If it had a purpose this doc string could describe it in more
detail.
"""

# Import the up API so we can dispatch data or call other chains
import up

def rule_do_nothing(event):
    """Rule one - it does nothing"""
    return event

def rule_filter_synthetic(event):
    """Rule two - filter out synthetic events

    We are only interested in non-synthetic events, so if the event is
    synthetic then stop processing it, otherwise pass it to the next rule.
    """
    if event.synth:
        # Don't process any more rules
        return None
    else:
        return event
In the example above, our first rule rule_do_nothing is true to its name,
and simply returns the event which it received. This means that the same event
will then be passed to our next rule, rule_filter_synthetic.
The second rule checks if an event is synthetic and if so will stop
processing by returning None. Otherwise it will return the event to enable
processing to continue.
Coding Guide for Rule Chains
Indentation
The coding for rule chains is sensitive to indentation, and uses it to
denote the beginning and end of logical blocks of code, such as functions,
loops and code branches. This means it is imperative that consistent
indentation is used throughout a file.
When indenting a block of code, spaces should always be used instead of tabs,
and each level of indentation should be a multiple of 4 spaces. More guidance
on indentation can be found in the
Python style guide.
Doc Strings
A plain text string appearing at the beginning of a file or function is a
special type of comment known as a documentation string or docstring, and is a
good method of giving more information on purpose and functionality.
Docstrings should use triple quotes ("""), and can be of any length. A
multi-line docstring should include a one-line summary on the first line,
followed by a blank line before the main body of the text.
More guidance on docstrings can be found in the
Python docstring style guide.
Best Practice Summary
- Source files should be saved with UTF-8 file encoding; however, non-Latin
characters should not be used except in string literals.
- Always use descriptive names for rules and variables; do not use single
letters or meaningless names such as var1, var2. This will make
the code significantly clearer and easier to maintain.
- Every chain and rule should begin with a docstring covering its intended
purpose, unless the functionality is so trivial that it cannot be explained
any more simply than in the code.
- Writing chain files in a text editor that highlights Python syntax will
make spotting errors much easier and help avoid syntax errors.
- If it is necessary to maintain state in a chain, i.e. to preserve some
information from one execution to the next, then this should only be done
using the Redis API, and not directly in the chain. For a guide on using the
Redis API see Maintaining State.
- Chain rules should not perform any IO intensive tasks, such as writing large
files or manipulating databases directly.
Rule chains are written using the Python language; expert knowledge of Python
is not necessary, but full documentation on the language can be found at
python.org.
Incoming Events
Incoming events are up.Event instances, which basically behave like
read-only dictionaries with some extra attributes:
data: The read-only dictionary containing the key-value pairs of the
inbound data. The inbound data is provided by the listener; see the
documentation on Listeners for more information.
listener: An up.ListenerProxy instance describing the listener from
which this event was received. This only has two attributes
itself: id, which is the id used in listeners.xml, and type,
which is the type of the listener.
peer: The socket address of the peer. For IPv4 addresses this is
a tuple of (host, port); for IPv6 it is (host, port,
flowinfo, scopeid). For some events, like synthetic events
from a listener itself, this could be left as None.
You can freely assign more attributes to incoming events, which is a
commonly used feature to communicate information between different
rules.
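As a minimal sketch of this (the severity field name is hypothetical),
one rule can annotate an event for a later rule:

def rule_classify(event):
    """Attach a custom attribute for use by later rules."""
    event.is_urgent = event.data.get('severity') == 'critical'
    return event

def rule_tag_urgent(event):
    """Act on the attribute set by the previous rule."""
    if event.is_urgent:
        event.label = 'URGENT'
    return event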
Outgoing Events
Outgoing events are similar to incoming events, with specific data and
attributes depending on their intended destination, i.e. the relevant
Dispatcher. It is necessary to use the outgoing event which corresponds to the
intended destination. The different types of outgoing event are described in
the API for writing Rule Chains. Examples here will use the ReefEvent for
demonstration, although the techniques are equally applicable to all event
types.
import up

def rule_create_reef_event(event):
    """Create a reef event, attach it to an event and return the event."""
    event.out = up.ReefEvent(event)
    return event
The first line of the rule creates a new ReefEvent by calling
up.ReefEvent(), passing the existing event in as an argument to
automatically fill in some of the fields in the ReefEvent. This new event is
assigned to the out attribute of the existing event, meaning that later
rules can access both the original event data at event.data and the new
ReefEvent at event.out.
Dispatching Events
In order to pass an event to somewhere else you need to be able to get
hold of the object you want to pass it to: a dispatcher or a chain.
For this there are some functions available:
up.getdispatcher(id): Returns the dispatcher instance defined with
the given id in dispatchers.xml. A dispatcher only has a
.send(event) method, which can be used to send an event (of the
correct type for the dispatcher) via the dispatcher.
up.getchain(name): Returns the chain instance defined in the
name.chain file. A chain has two methods: .put(event) and
.call(event). Both will place event onto the chain and wait for the
chain to complete processing; .put() will always return None whereas
.call() returns whatever the last rule of the chain returns.
Both of these functions are provided by the UP chain API, and in order to use
them the chain file must contain the line import up before the first rule.
For more information on the UP chain API see API for writing Rule Chains.
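For example, a rule can run an event through another chain and act on
the result (a minimal sketch; the chain name enrich is hypothetical):

import up

def rule_enrich(event):
    """Run the event through the 'enrich' chain and continue with its result."""
    result = up.getchain('enrich').call(event)
    if result is None:
        # The other chain filtered the event out, so stop here too.
        return None
    return result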
Putting this together we can expand on the previous example so it does
something useful:
# -*- coding: utf-8; mode: python -*-
"""my_chain

This is my_chain, it's just a simple one for demonstration purposes.
It will create a simple reef event for every non-synthetic event received
and send that event to a reef dispatcher called 'reef'.
"""

import up

def rule_filter_synthetic(event):
    """Filter out synthetic events."""
    if event.synth:
        return None
    else:
        return event

def rule_create_reef_event(event):
    """Create a reef event with simple attributes."""
    event.out = up.ReefEvent(event)
    event.out['reef_severity'] = 'info'
    event.out['reef_label'] = 'A descriptive label'
    return event

def rule_dispatch_to_reef(event):
    """Get the dispatcher called 'reef' and send our ReefEvent to it."""
    dispatcher = up.getdispatcher('reef')
    dispatcher.send(event.out)
    # No more processing to do so return None to stop here.
    return None
In this example our first rule filters out synthetic events just as it did
previously. This means that the second rule will only receive non-synthetic
events, to which it then attaches a new Reef event created using the original
event's attributes. The second rule also sets the severity and the label to be
shown in Reef; a full list of the available attributes can be found in the
documentation for ReefEvent.
The third rule uses the UP chain API function up.getdispatcher() to
get a dispatcher with the id reef. For this to work a dispatcher with that
id must have been defined in the dispatcher configuration, as described in the
documentation for Dispatchers, and we assume here that it is a
Reef Dispatcher. The ReefEvent
that we have created is sent to the dispatcher, and because there is nothing
else to do in this rule chain we return None.
Special Listeners and Dispatchers
Dummy Listener
The dummy listener is a special listener which can be used to artificially
create new events in rules. This can be used to make certain rule flows
more logically consistent, for example by triggering a new event that enters
processing through a normal listener entry point rather than starting in the
middle of processing another event.
If a dummy listener is not specified in listeners.xml then one will be
created automatically with the id dummy. For more information on the dummy
listener see Dummy Listener.
Internal Listener
The internal listener is a special listener which can create events based on
events occurring inside UP. It is especially useful for notifying on errors
in rule chains, as described in Errors in Chains. For more information
on the internal listener see Internal Listener.
Because an internal listener does not have a default destination, the listener
will not exist unless it is explicitly specified in listeners.xml.
Dummy Dispatcher
The dummy dispatcher is a special dispatcher which does not actually
dispatch anything; it simply writes an entry in the log file at DEBUG
level. It can be useful during testing of rule chains instead of
using a real dispatcher.
If a dummy dispatcher is not specified in dispatchers.xml then one will
automatically be created with the id dummy. For more information on the
dummy dispatcher see Dummy Dispatcher.
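For example, during testing a rule can send its outgoing events to the
dummy dispatcher instead of a real one (a minimal sketch, assuming
event.out was created by an earlier rule):

import up

def rule_dispatch_for_testing(event):
    """Log the outgoing event at DEBUG level instead of dispatching it."""
    up.getdispatcher('dummy').send(event.out)
    return None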
Errors in Chains
Load Errors
Some errors, such as syntax errors, will prevent a chain from being loaded. In
this case an error will be logged in the up log with a message
indicating which lines in the chain file could not be loaded properly, and if
possible a description of why. If the chain could not be loaded then none of
its rules will work, and the errors should be corrected as soon as possible.
Rule Errors
If an error occurs while processing a rule then the error will be logged in the
chain log file indicating which rule has failed and what event was being
processed at the time. When a rule fails, processing of the event continues
with the next rule in the chain. To avoid passing on a potentially corrupted
event, the event handed to the next rule is the one that was originally passed
to the failing rule, not any partially modified version. However, be aware
that this may have unexpected consequences.
Rule errors are also emitted as events by the Internal Listener, when
configured. This enables any rule errors to be reported effectively (e.g. via
Reef) so that remedial action can be taken. It is good practice to set up this
kind of mechanism so as to catch configuration management issues in production.
Note that, for obvious reasons, the internal listener will not emit events for
rule errors that occurred while processing events from the internal listener
itself, but the rule error will still be logged in the chain log file.
Lost Events
The last rule in a chain should always return None to indicate that the
expected end of the chain has been reached. If this does not happen then the
chain manager will log a warning that the event fell off the end of the chain
and may have been lost. Provided all intentional chain exit points return
None, this warning indicates a potential fault in the rule chain.
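One way to make the intended exit explicit is a terminal rule which
discards everything that reaches it, for example:

def rule_end_of_chain(event):
    """Terminal rule: every event reaching this point is intentionally
    discarded, so nothing can fall off the end of the chain.
    """
    return None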
Lost events can also be reported using the Internal Listener, allowing
for notifications in Reef or through any other dispatcher. Lost events in
chains that are processing events from an internal listener will not be
reported by the internal listener, but will still be logged.
Maintaining State
Occasionally it can be useful to maintain state in a rule chain,
e.g. you may want to keep track of when the last heartbeat message
from a specific device was received, or you may want to count a certain
event. This is not straightforward since rule processing is essentially
stateless: a rule only has the incoming event to operate on, and
multiple events might be processed at the same time, creating
consistency challenges.
For this purpose the Universal Probe has a tight integration with
Redis, a fast data store with a variety of useful data types and
operations very suitable for use by rule chains. Strictly speaking
Redis is a separate database server which must be installed
separately, but it is recommended to always install it alongside up.
The API to access the Redis data store is very simple to use: rules
must first create a client to talk to the Redis server. Once a
client instance is available, its methods can be called to set and
retrieve persistent state.
For a reasonably complete list of operations which can be performed
with keys please see the API reference of redis.StrictRedis
in the appendix.
Creating a Redis Client
Creating a Redis client in a rule chain is normally done using the
up.redis() function. By default it will connect to a Redis
server on the local host using the default port and no password, but
if required the hostname, port and password can be passed in to
up.redis(). It is a common pattern to simply create one global
client in a rule chain which can then be used from any rule.
# -*- coding: utf-8; mode: python -*-
import up

redis = up.redis()

def rule_count(event):
    redis.incr('counter')
    return event
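If the Redis server is not running locally, the connection details can
be passed to up.redis(). The keyword names used below are assumptions
for illustration only; check the API for writing Rule Chains for the
exact signature:

# Hypothetical keyword names, shown for illustration only.
redis = up.redis(host='redis.example.net', port=6380, password='secret')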
Advanced Client Connection
Note
It is strongly discouraged to use this advanced connection method
if the same can be achieved using up.redis(). The advanced
connection functionality is more likely to change in future
versions.
However in some situations the up.redis() factory function might
be too limiting for the Redis server setup deployed. In this case it
is possible to create a redis.StrictRedis client directly.
However care must be taken to not use the default connection pool
settings provided by redis: all connections must use
up.GreenRedisConnection to connect to the Redis server. An
example of creating the equivalent of up.redis() would be:
# -*- coding: utf-8; mode: python -*-
import up
import redis

pool = redis.ConnectionPool(connection_class=up.GreenRedisConnection)
client = redis.StrictRedis(connection_pool=pool)
Note that wherever up.redis() is sufficient it should always be
preferred over the more advanced API; the advanced API is only to be
used if up.redis() is too limiting.
Warning
All directly created clients must use
up.GreenRedisConnection as otherwise this may
significantly affect UP’s ability to function correctly.
Redis Data Types
Redis is a key-value data store with advanced data types for storing
the values. Understanding the data types supported by Redis will give
an insight into the possibilities it provides. This section gives a
quick introduction to the main data types; for a more thorough
description of the data types and all their operations please refer to
the Redis documentation itself as well as the redis-py
documentation. Be especially aware that this section does not
describe the full set of operations possible on the data.
Strings
Strings are the most basic data type provided and can be used to store
any sort of text or binary values. They can be set, retrieved and
deleted:
redis.set('mystring', 'text')
# True
redis.get('mystring')
# 'text'
redis.exists('mystring')
# True
redis.delete('mystring')
# 1
redis.exists('mystring')
# False
redis.get('mystring')
# None
Note however that Redis does not store Unicode; instead, if you set a
Unicode string it will be encoded using the UTF-8 codec. This means
that when retrieving a key it must be manually decoded back to Unicode:
redis.set('my_unicode_string', 'bar £€')
# True
redis.get('my_unicode_string')
# b'bar \xc2\xa3\xe2\x82\xac'
redis.get('my_unicode_string').decode('utf-8')
# 'bar £€'
Note
Remember, all literal strings in rule chains are Unicode text by
default unless prefixed with b, e.g. var = b'binary value'.
Numbers
Strictly speaking Redis does not treat numbers as a separate data
type; instead, to create a number simply store a string with its
value as text. This however means that when you get a key it is
returned as a string:
redis.set('my_number', 42)
# True
redis.get('my_number')
# '42'
int(redis.get('my_number'))
# 42
Additionally Redis can use numbers as counters using the increment and
decrement operations, notice how these methods return the new value as
an integer:
redis.set('my_counter', 42)
# True
redis.incr('my_counter')
# 43
redis.incr('my_counter', 2)
# 45
redis.decr('my_counter')
# 44
redis.decr('my_counter', 2)
# 42
redis.get('my_counter')
# '42'
Sets
Sets are unordered collections of string values; a given string
value can appear at most once in a set. Adding the same item
multiple times simply results in the set containing that single
item.
redis.sadd('my_set', 'a', 'b', 'c')
# 3
redis.scard('my_set') # cardinality aka no. of items
# 3
redis.sadd('my_set', 'a')
# 0
item = redis.spop('my_set') # remove and return random item
redis.sadd('my_set', item)
# 1
redis.srem('my_set', 'b')
# 1
redis.smembers('my_set')
# set(['a', 'c'])
redis.sismember('my_set', 'c')
# True
As well as adding and removing items, it is also possible to create
intersections, differences and unions of sets:
redis.sadd('set_two', 'a', 'b', 'd')
# 3
redis.sinter('my_set', 'set_two')
# set(['a'])
redis.sdiff('my_set', 'set_two')
# set(['c'])
redis.sunion('my_set', 'set_two')
# set(['a', 'c', 'b', 'd'])
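As a practical sketch, a rule could use a set to flag the first event
seen from each device (assuming a module-level client as created
earlier; the node field name is hypothetical):

def rule_track_devices(event):
    """Flag events from devices we have not seen before."""
    # sadd returns 1 when the item is newly added, 0 if already present.
    event.is_new_device = bool(redis.sadd('devices.seen', event['node']))
    return event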
Sorted Sets
Redis also has sorted sets which behave like normal sets but have an
additional score attached to each item in the set in the form of a
number. This provides an inherent ordering to the items in the set.
Basic operations are just like a normal set:
redis.zadd('sset', 42, 'a')
# 1
redis.zadd('sset', 43, 'b', 44, 'c')
# 2
redis.zadd('sset', a=40, d=42)
# 1
redis.zcard('sset')
# 4
redis.zincrby('sset', 'a', 1)
# 41.0
redis.zrem('sset', 'd')
# 1
Each item in the set has the score it was given and also an order, or
rank, based on this score:
redis.zscore('sset', 'a'), redis.zrank('sset', 'a')
# 41.0, 0
redis.zscore('sset', 'b'), redis.zrank('sset', 'b')
# 43.0, 1
redis.zscore('sset', 'c'), redis.zrank('sset', 'c')
# 44.0, 2
redis.zrange('sset', 0, -1, withscores=True)
# [('a', 41.0), ('b', 43.0), ('c', 44.0)]
Subsets based on rank or score can be retrieved:
redis.zrange('sset', 0, 1)
# ['a', 'b']
redis.zrangebyscore('sset', 43, 50)
# ['b', 'c']
Lists
Lists are simply lists of string values, ordered by insertion
order. It is common to push and pop items at the head (left) or tail
(right) of a list. It is also possible to access elements by index or
use slices:
for i in range(10):
    if i % 2 == 0:
        redis.lpush('my_list', i)
    else:
        redis.rpush('my_list', i)
redis.lrange('my_list', 0, -1)
# ['8', '6', '4', '2', '0', '1', '3', '5', '7', '9']
redis.llen('my_list')
# 10
redis.lpop('my_list')
# '8'
redis.lindex('my_list', 0)
# '6'
redis.rpop('my_list')
# '9'
redis.ltrim('my_list', 0, 3)
# True
redis.lrange('my_list', 0, -1)
# ['6', '4', '2', '0']
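A common pattern is a capped list which holds only the most recent
items, sketched below with a hypothetical node field:

def rule_recent_nodes(event):
    """Keep the 10 most recently seen node IDs."""
    redis.lpush('nodes.recent', event['node'])
    redis.ltrim('nodes.recent', 0, 9)
    return event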
Hashes
Redis hashes are maps or dictionaries of string fields and string
values which makes them suitable to represent objects. The fields in
a hash can be set, retrieved and deleted individually, and the
set-if-not-exists operation is available as well:
redis.hmset('my_hash', {'field0': 'val0', 'field1': 'val1'})
# True
redis.hset('my_hash', 'field2', 'val2')
# 1
redis.hlen('my_hash')
# 3
redis.hget('my_hash', 'field0')
# 'val0'
redis.hdel('my_hash', 'field0', 'field2')
# 2
redis.hgetall('my_hash')
# {'field1': 'val1'}
redis.hsetnx('my_hash', 'field0', 'val0')
# 1
redis.hsetnx('my_hash', 'field0', 'not_set')
# 0
redis.hget('my_hash', 'field0')
# 'val0'
Note
Hash keys were introduced in Redis 2.4 and so are not available on
earlier Redis servers. We therefore recommend using at least
Redis 2.4.
Examples
Detect Missing Heartbeats
If a device is supposed to send regular heartbeat events it is
possible to build a rule chain which will serve as a watchdog for
the device: heartbeat events themselves are discarded, but when a
heartbeat goes missing an event is created.
This scenario assumes two listeners:
- A listener of unspecified kind which receives the heartbeat events
from the devices. Normally this listener emits a heartbeat event
every 60 seconds for each device it receives events from. Each event
will have a "node" event field containing a unique identifying
string for the device. This listener will be denoted by the
devsource listener ID.
- A heartbeat listener generating periodic events to
trigger rule processing. It creates an artificial event every 30
seconds. This listener will be denoted by the watchdog
listener ID.
For the remainder of this example these listeners will be referred to
by their listener ID.
The basic design is that each event from the devsource listener
sets a current timestamp in Redis, while each event from the watchdog
listener will check this timestamp and, if it is older than 90
seconds, will create an outgoing event.
There are several rule chains involved in this example:
- watchdog.chain
- This is the main rule chain which contains the watchdog logic. It
will receive events from both the devsource and watchdog
listeners and process them differently. In practice it would make
sense to split this into two chains, as the two event flows have
little in common; however, since so few rules are involved, it is
easier to keep this example concise by using only one chain.
- alert.chain
- This chain is not given in the example; it is assumed that it can
process the alert created by the watchdog chain and create and
dispatch the appropriate outgoing event.
Here is the watchdog.chain rule chain; the inline docstrings
should be sufficient to describe the behaviour:
# -*- coding: utf-8; mode: python -*-
"""Watchdog chain

This rule chain processes events from the *devsource* listener and
from the *watchdog* listener and implements a watchdog timer for
the *devsource* events.

The *devsource* listener is expected to create an event every 60
seconds.  If no event was received for 90 seconds a new event will
be created and passed to the *alert* chain.

The *watchdog* listener is expected to create an event every 30
seconds which will be used to check for missing events from the
*devsource* listener.

The heartbeat times from the devices are stored in a sorted set
under the 'devices.heartbeats' key.  Each item in the set is the
event['node'] event field and the score is the POSIX timestamp of
when the event was received.
"""

import up
import time

LOG = up.getlogger()
REDIS = up.redis()
ALERT = up.getchain('alert')

def rule_filter_unknown(event):
    """Ignore and warn about unknown events"""
    if event.listener.id not in ['devsource', 'watchdog']:
        LOG.warn('Ignoring unknown event: %s', event)
        return None
    else:
        return event

def rule_store_device_hb_time(event):
    """Store current time of a devsource event

    If the incoming event is a watchdog event pass it on to the
    next rule unmodified.

    The heartbeat times are stored in Redis under the
    'devices.heartbeats' key.
    """
    if event.listener.id == 'watchdog':
        return event
    REDIS.zadd('devices.heartbeats', int(time.time()), event['node'])
    return None

def rule_check_missing_hb(event):
    """Check for any missed heartbeats from devices

    The incoming event is assumed to be from the watchdog listener
    and will check for any devices from which no heartbeat has been
    received in the last 90 seconds.
    """
    maxage = time.time() - 90
    missing = REDIS.zrangebyscore('devices.heartbeats', 0, maxage)
    for device in missing:
        ALERT.put(device)
    return None
It should be noted that the alerts created by the above example are
level-triggered, which means the alert will keep being generated while
the heartbeat of a device is missing. If this is not desirable,
suppression can be implemented as well; for one solution to this see
the next example below.
Another extension would be to also store other fields of the
heartbeat event from the devsource listener in order to provide more
context when creating the alert. These could be stored for each device
in a hash with device:{node} as the key, where node would be
filled in with the event['node'] ID.
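A minimal sketch of such a rule, which would have to run before
rule_store_device_hb_time so that it still sees devsource events
(the location field is hypothetical):

def rule_store_device_context(event):
    """Keep the latest context fields for each devsource device."""
    if event.listener.id == 'devsource':
        key = 'device:{0[node]}'.format(event)
        REDIS.hmset(key, {'location': event.data.get('location', ''),
                          'last_seen': int(time.time())})
    return event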
Suppressing Repeat Escalations
In some cases it might be desirable to implement suppression in the
rule chains in order to turn level-triggered events into
edge-triggered events, or, in the more advanced case, to only repeat
the level-triggered event once in a given time span.
This example shows a rule chain which implements this suppression
logic for all events passing through it. For simplicity's sake it is
assumed that the events arriving at this chain have a reef_key field
to identify the event. This chain will not dispatch directly; instead
it places the event on the alert chain, which then has the
flexibility of building one or more other outgoing events which could
be sending emails etc.
The suppressions are handled by using expiring keys in Redis. So when a
new event appears, a key is set in the Redis server with an
expiration time. Whenever a subsequent event appears it will only be
forwarded if its key is not present in the Redis server, which will
automatically delete the key once it has expired. Note the atomic SETNX
operation, which ensures the chain behaves correctly if multiple events
are handled concurrently; this would not be the case with separate GET
and SET operations.
# -*- coding: utf-8; mode: python -*-
"""Suppression chain

All events will be assumed to have the *reef_key* field which will be
used to identify them.  Any event will be forwarded to the *alert*
chain at most once per hour.
"""

import up

LOG = up.getlogger()
REDIS = up.redis()
ALERT = up.getchain('alert')

def rule_suppress(event):
    """Suppress the event if applicable

    Events will be suppressed for one hour.
    """
    key = 'events.suppressed:{0[reef_key]}'.format(event)
    if REDIS.setnx(key, 1):
        REDIS.expire(key, 3600)
        return event
    else:
        LOG.debug('Event suppressed: {}'.format(event))
        return None

def rule_alert(event):
    """Put the event on the alert chain"""
    ALERT.put(event)
    return None
As an extension, one could implement an additional rule which uses
e.g. the reef_type field to remove the suppression when a clear
event is received. This could be done by simply adding a new rule in
front of the rule_suppress rule, as sketched below.
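A minimal sketch, assuming a clear event is marked by a reef_type
value of 'clear' (both the field value and the rule are hypothetical):

def rule_clear_suppression(event):
    """Remove the suppression when a clear event is received."""
    if event['reef_type'] == 'clear':
        REDIS.delete('events.suppressed:{0[reef_key]}'.format(event))
    return event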
Table Lookups
If you need to look up tabular data you can easily use dictionaries in
the chain itself. However, when the tables become large it might be
preferable to store them in separate files and load them from there.
You can do this in any way Python allows, as long as it is
non-blocking. However, the specific case of tab-separated files has
been catered for, which allows re-use of tables used in Netcool:
up.LookupTable(data=None, default=None, filename=None, numcols=1)
This will create a lookup table from any data passed in using the
data argument. If a key is unknown, the value passed in to default
will be used. This does not involve any files yet, but has the
advantage of creating tables with pre-defined default values.
To read a file, pass in a filename. This is a tab-separated file
which must have at least 1 key and 1 value column. If you want the
file to be interpreted as having multiple value columns you need to use
numcols.
A lookup table behaves (mostly) like a dictionary, so looking up a
value can be done using:
table = up.LookupTable(default='foo', filename='hosts.tab')
host = table.get(event['hdr_NNI'])

mtable = up.LookupTable(filename='customers.tab', numcols=2,
                        default=('other', 'low'))
cust, priority = mtable.get('some_key_from_an_event')