
(Ab)using Samba and inotify to implement a simple menu of privileged actions [Part 3: Basic Implementation]

Okay, so I got it working, though more as the first-generation system that I sketched out in my design notes: i.e. one trigger maps to one action, and there is no separation between objects and actions. When I set it up and gave it to my client to try out, she sent me a text message with some feedback; it said "That's cool!", so I'm happy. I dare say this will get a (little) more polished in subsequent deployments; it would be good to separate the configuration from the application logic.

Here is what it looks like in action; note that this is done over CIFS, so the reactivity of the interface will depend on whether Samba on the server, and your CIFS client, handle update notifications. For example, on my aging RHEL 6 GNOME 2 desktop, they don't (I have to hit refresh repeatedly); but I gather that on my client's Mac, they do. You can see what it looks like from Windows in this tiny screencast I made:




In this deployment, the daemon runs as a particular user ('YOUR_USER'), set in the init script; this determines the runtime permissions of the deleventd process, and the filesystem permissions of the triggers and logs. Make sure that the Samba rights match (the 'force user' and 'valid users' Samba directives can be useful here).
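As a sketch of what I mean (the share name and paths here are placeholders, not my actual deployment), the smb.conf share definition might look something like this:

```ini
[deleventd]
   comment = Privileged actions menu
   path = /var/local/deleventd/triggers
   valid users = YOUR_USER
   force user = YOUR_USER
   read only = no
   ; on by default in modern Samba; this is what lets capable clients
   ; see the trigger files reappear without a manual refresh
   kernel change notify = yes
```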

Since we're often talking about administrative commands, we use 'sudo' to run the whitelisted commands. You'll want a configuration similar to the following, changing any instances of YOUR_USER and YOUR_HOSTNAME appropriately:

Defaults:YOUR_USER          !requiretty,visiblepw,!lecture
YOUR_USER YOUR_HOSTNAME=NOPASSWD:/sbin/service httpd stop, /sbin/service tomcat6 stop, /sbin/service tomcat6 start, /sbin/service httpd start
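It's worth sanity-checking the sudoers entry before wiring it into the daemon; something like the following (run with your values substituted):

```shell
# As root: list what YOUR_USER may run with sudo; the whitelisted
# service commands should appear tagged NOPASSWD
sudo -l -U YOUR_USER

# Then confirm one whitelisted command runs as YOUR_USER without a
# password or TTY (deleventd will have no controlling terminal)
runuser -c '/usr/bin/sudo /sbin/service httpd status' YOUR_USER
```

If the second command prompts for a password or complains about a missing TTY, revisit the NOPASSWD and !requiretty settings.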

Enough talking; here's the code. I should probably put it up on GitHub or similar...

#!/usr/bin/env python
import pyinotify
import os
import time
from datetime import datetime
from threading import Timer
import shlex
import subprocess
import re
import threading
trigger_directory = '/var/local/deleventd/triggers/'
log_directory = '/var/local/deleventd/logs/'
class Trigger(object):
    '''A Trigger is a file, which when deleted, causes an action to be run.
    Triggers have a state, such as 'ready', 'running', etc. which are presented as filename
    components in square brackets. Only 'ready' triggers will run when deleted, and a new
    filename will be created reflecting the current state. After a state of 'completed' or
    'failed' has been achieved, after a brief timeout it will change back to 'ready'.
    Triggers and the captured output are stored in separate directories. Triggers should
    be the only thing in the trigger directory.
    The use of triggers as a user-interface relies on inotify to learn when something
    is deleted. Note: ensure you actually delete, not move to trash.
     
    A trigger doesn't manage its execution; that is the job of the TriggerSet, which
    manages the files, runs the jobs, and collects the output.'''
    valid_statuses = ['running', 'completed', 'failed', 'ready']
    @classmethod
    def parse_filename(classname, filename):
        '''Given the filename (no directory), split out any moniker and the name.'''
        matches = re.match(r'^([a-zA-Z0-9][a-zA-Z0-9_]*)-\[([a-zA-Z0-9]+)\]$', filename)
        if matches is not None:
            name = matches.group(1)
            status = matches.group(2)
            return {'name': name, 'status': status}
        matches = re.match(r'^[a-zA-Z0-9]+$', filename)
        if matches is not None:
            status = None
            name = matches.group(0)
            return {'name': name, 'status': status}
        else:
            return None

    def __init__(self, directory, name, command):
        self.name = name
        self.args = shlex.split(command)
        self._directory = directory
        self._status = 'ready'
        self._lock = threading.Lock()
        self._mask_level = 0
        self.update_filesystem()

    def __repr__(self):
        return '%s(%r)' % (self.__class__, self.__dict__)

    def render(self, as_state=None):
        '''Return filename component of the object.'''
        if as_state is not None:
            return '%s-[%s]' % (self.name, as_state)
        elif self._status is not None:
            return '%s-[%s]' % (self.name, self._status)
        else:
            raise ValueError('No status stored for %r' % (self,))

    def status(self, status=None):
        '''Get or set status. Status is one of running, completed, failed, ready.'''
        self.acquire()
        try:
            return self._status_unsafe(status)
        finally:
            self.release()

    def _status_unsafe(self, status=None):
        if status is None:
            return self._status
        elif status == self._status:
            return
        elif status in self.valid_statuses:
            self._status = status
            self._update_filesystem_unsafe()
        else:
            raise ValueError('Requested status %r is not a valid status' % (status,))

    def update_filesystem(self):
        '''Change the files to reflect the new state.'''
        self.acquire()
        try:
            self._update_filesystem_unsafe()
        finally:
            self.release()

    def _update_filesystem_unsafe(self):
        '''For internal use only when the lock has already been obtained.'''
        # Ignore any deletion events for this trigger
        self._mask_unsafe()
        for state in self.valid_statuses:
            if state == self._status:
                continue
            potential_filename = os.path.join(self._directory, self.render(as_state=state))
            if os.path.isfile(potential_filename):
                os.remove(potential_filename)
        # Now create the correct one
        open(os.path.join(self._directory, self.render()), 'w').close()
        # And potentially allow events to be processed
        self._unmask_unsafe()

    def acquire(self):
        self._lock.acquire()

    def release(self):
        self._lock.release()

    def mask(self):
        self.acquire()
        try:
            self._mask_unsafe()
        finally:
            self.release()

    def _mask_unsafe(self):
        self._mask_level += 1

    def unmask(self):
        self.acquire()
        try:
            self._unmask_unsafe()
        finally:
            self.release()

    def _unmask_unsafe(self):
        self._mask_level -= 1
        if self._mask_level < 0:
            self._mask_level = 0

    def masked(self):
        self.acquire()
        try:
            return self._mask_level > 0
        finally:
            self.release()

    def mask_and_set_status(self, status):
        self.acquire()
        self._mask_unsafe()
        try:
            self._status_unsafe(status)
        finally:
            self._unmask_unsafe()
            self.release()


class TriggerSet(object):
    '''A TriggerSet is a collection of Triggers and looks after their event handling.'''

    class EventHandler(pyinotify.ProcessEvent):
        def __init__(self, trigger_set):
            super(TriggerSet.EventHandler, self).__init__()
            self.trigger_set = trigger_set

        def process_IN_DELETE(self, event):
            trigger_parse = Trigger.parse_filename(os.path.basename(event.pathname))
            # Ignore deletions of anything that isn't a known trigger
            if trigger_parse is None or trigger_parse['name'] not in self.trigger_set.triggers:
                return
            trigger = self.trigger_set.triggers[trigger_parse['name']]
            if trigger_parse['status'] != 'ready':
                return
            if trigger.masked():
                pass
            else:
                trigger.mask()
                trigger.status('running')
                output_filename = "%s/%s.log" % (self.trigger_set.log_directory, trigger.name)
                output_fh = open(output_filename, 'a')
                output_fh.write('\n%s|%s|Trigger executing|%s\n' % (datetime.isoformat(datetime.now()), trigger.name, trigger.args))
                output_fh.flush()
                process = subprocess.Popen(trigger.args, close_fds=True, cwd='/', stdin=None, stdout=output_fh, stderr=subprocess.STDOUT)
                return_code = process.wait()
                output_fh.write('\n%s|%s|Trigger returned|%d\n' % (datetime.isoformat(datetime.now()), trigger.name, return_code))
                output_fh.close()
                if return_code == 0:
                    trigger.status('completed')
                else:
                    trigger.status('failed')
                trigger.unmask()
                Timer(5.0, trigger.mask_and_set_status, ['ready']).start()

        def __repr__(self):
            return '%s(%r)' % (self.__class__, self.__dict__)

    def __init__(self, trigger_directory, log_directory):
        '''Set up the Trigger machinery, but don't run it yet.'''
        self.trigger_directory = trigger_directory
        self.log_directory = log_directory
        self.triggers = {}
        self.mask = pyinotify.IN_DELETE
        self.watch_manager = pyinotify.WatchManager()
        self.handler = self.EventHandler(self)
        self.notifier = pyinotify.Notifier(self.watch_manager, self.handler)
        self.wdd = self.watch_manager.add_watch(self.trigger_directory, self.mask, rec=True)

    def __repr__(self):
        return '%s(%r)' % (self.__class__, self.__dict__)

    def add_trigger(self, name, command):
        '''Create a new trigger and add it to the event handler.'''
        self.triggers[name] = Trigger(self.trigger_directory, name, command)

    def loop(self):
        '''Run forever; this is a blocking method.'''
        self.notifier.loop()


ts = TriggerSet(trigger_directory, log_directory)
ts.add_trigger('stop_application_stack', r''' /bin/bash -c '/usr/bin/sudo /sbin/service httpd stop; /usr/bin/sudo /sbin/service tomcat6 stop' ''')
ts.add_trigger('start_application_stack', r''' /bin/bash -c '/usr/bin/sudo /sbin/service tomcat6 start; /usr/bin/sudo /sbin/service httpd start' ''')
ts.loop()
print 'Ending'
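To make the filename convention concrete, here's a standalone sketch of just the parsing logic from Trigger.parse_filename, showing how trigger filenames map to names and states (the example filenames are made up for illustration):

```python
import re

def parse_filename(filename):
    # 'name-[status]' form, e.g. 'start_application_stack-[ready]'
    matches = re.match(r'^([a-zA-Z0-9][a-zA-Z0-9_]*)-\[([a-zA-Z0-9]+)\]$', filename)
    if matches is not None:
        return {'name': matches.group(1), 'status': matches.group(2)}
    # bare 'name' form, with no status component yet
    matches = re.match(r'^[a-zA-Z0-9]+$', filename)
    if matches is not None:
        return {'name': matches.group(0), 'status': None}
    # anything else in the directory is not a trigger
    return None

print(parse_filename('start_application_stack-[ready]'))
# → {'name': 'start_application_stack', 'status': 'ready'}
print(parse_filename('not a trigger.txt'))
# → None
```

Only a deletion of a file in the '[ready]' state fires the action; deleting 'start_application_stack-[running]', say, is ignored, which is what stops the state-transition renames from re-triggering the job.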

Here's a simple SysV init-script.

#!/bin/bash
#
# deleventd        Startup script for Deletion-Triggered Events
#
# chkconfig: - 85 15
# description: Allows you to run custom scripts when monitored
#              trigger files are deleted.
# processname: deleventd
# config: /usr/local/sbin/deleventd
#
### BEGIN INIT INFO
# Provides: deleventd
# Required-Start: $local_fs
# Required-Stop: $local_fs
# Short-Description: Start and stop Deletion-Triggered Events
# Description: Allows you to run custom scripts when monitored
#              trigger files are deleted.
### END INIT INFO
# Source function library.
. /etc/rc.d/init.d/functions
# Start in the C locale by default.
export LANG=${HTTPD_LANG-"C"}
prog=deleventd
RETVAL=0
start() {
    echo $"Starting $prog"
    nohup runuser -c /usr/local/sbin/deleventd YOUR_USER 2>&1 | logger -t deleventd &
    RETVAL=0
    return 0
}
stop() {
    echo $"Stopping $prog"
    kill $(ps -u YOUR_USER -o pid,command --no-header | awk '/python \/usr\/local\/sbin\/deleventd/ { print $1 }')
    RETVAL=0
    return 0
}
# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  *)
    echo $"Usage: $prog {start|stop}"
    RETVAL=2
esac
exit $RETVAL
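Installing and enabling it is the usual Red Hat SysV dance (paths here assume you've saved the script as shown; adjust to taste):

```shell
# Install the init script and register it with chkconfig
install -m 755 deleventd.init /etc/rc.d/init.d/deleventd
chkconfig --add deleventd
chkconfig deleventd on

# Start it; stdout/stderr go to syslog via 'logger -t deleventd'
service deleventd start
grep deleventd /var/log/messages | tail
```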
