Tuesday, July 14, 2015

If Not Smarter then Smaller!

I've long maintained a list of "things you only buy once in life." On that list are items like: a pool, a boat, a ferret, a hot tub and an RV. We call them RVs (recreational vehicles); others call them
motorhomes.  While the story on how we acquired one is interesting, it's not the topic of this post. This post is another "remind me how I did that?" post.



This post is about doors.  Making doors.  Making frame and panel doors. Making cope and stick doors.  Because I had to make another door this weekend and I do it so infrequently, I forget the steps involved.



First a little context on how door making relates to the motorhome. On our vehicle, the original bathroom door was a three-panel, tambour style door. Tambour style doors are the doors you'd find on an old roll-top desk. The tambour doors that work well are made of many (dozens of) small, thin strips of wood. Our RV door, as mentioned, had three panels and didn't work well at all.

I had made some repairs but finally reached the conclusion that diminishing returns had long since set in and it was simply time to replace it with a real door. A door door. A door door with a handle and a hinge. A door door that swings.

Without further ado, I'm writing down the steps I take to make a door.

Step 0 - The Router Bits


Frame and panel router bit sets make the work much easier. I currently own two bit sets: one from Infinity Tools (their Shaker bit set) and the other from MLCS Woodworking (their Katana Matched Rail and Stile, Ogee profile bit set).



The Katana bits were the closest match to the profile of the existing cabinets in the RV so they got the nod.


These bits are also called "cope and stick" bits by some other woodworkers.  One bit, called the stick bit, makes the stick cuts. The other bit makes the cope cut. This other bit is called - wait for it - the cope bit.

Remembering which one is which isn't easy for me.

Step 1 - The Spreadsheet

The math behind a door is detailed enough that it's worth using a computer to figure things out. There are programs out there dedicated to door making, but the math isn't complicated enough to warrant spending the money. It can be done in a spreadsheet. I made one and I keep it on Google Docs.

Rail and Stile Calculations Sheet



Here's the output of the spreadsheet for this project. The finished door will be 22 1/4" wide by 73" tall. It'll be a two-panel door.  The rails and stiles of the existing cabinets are 2 1/8" wide, so we'll use the same width for the door.

The bottom rail will be 4" tall to give the door some visual weight.








For the math to all work out, it's important to know the depth of the panel groove that'll be cut by the bit. For my Katana Ogee bit set, it's a 3/16" deep groove.

It'll change from bit set to bit set.

And it does affect the size of the stock you mill, so be sure you understand the depth of the groove cut by your bit set.
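If you want to sanity check the spreadsheet, the core of the math fits in a few lines of C (or any calculator). This is only a sketch, not the spreadsheet itself - the tenon and clearance allowances below are assumptions on my part and will differ with your bit set, which is also why these placeholder numbers won't necessarily reproduce my 18 3/4" rail length.

#include <stdio.h>

/* Rough sketch of the rail and panel math a rail-and-stile spreadsheet handles.
 * ASSUMPTIONS - verify against YOUR bit set:
 *   - the stub tenon left by the cope bit is as long as the panel groove is deep
 *   - a small clearance is left on each side so the panel can move
 */
int main( void )
{
    double doorWidth   = 22.25;     /* finished door width                      */
    double stileWidth  = 2.125;     /* stile width                              */
    double grooveDepth = 0.1875;    /* groove / tenon depth - bit set dependent */
    double clearance   = 0.0625;    /* panel clearance per side - my assumption */

    /* Rail length = opening between the stiles plus a tenon on each end */
    double railLength = (doorWidth - 2.0 * stileWidth) + (2.0 * grooveDepth);

    /* Panel width = opening between the stiles plus both grooves, less clearance */
    double panelWidth = (doorWidth - 2.0 * stileWidth) + 2.0 * (grooveDepth - clearance);

    printf( "Rail length : %.4f in\n", railLength );
    printf( "Panel width : %.4f in\n", panelWidth );
    return 0;
}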

 

 

Step 2 - Stock Prep

Not much to say here that you don't already know. Mill your stock (joint, plane) to 3/4" thickness. Rough cut the rails and stiles long - the router bit can get a bit wonky on the ends, and it's easier to trim off the wonkiness AFTER the rails and stiles are routed. Rip the rails and stiles to their finished widths (2 1/8" for the top and middle rails and the stiles; 4" for the bottom rail).


Note that I also have plenty of scrap pieces left over. They'll come in handy for test cuts and setup.

 

Step 3 - Stick It

Make the stick cuts first; the cope cuts are last. Mount the stick bit in the router table. This is the bit with the bearing on the top.  If you're lucky enough to have made, or bought, a setup block then use it to set the height of the bit.



Bring the router fence in and make the fence and the bearing flush.


Using scrap, make some test cuts.  When satisfied, make all of the stick cuts on the rails and stiles. Remember, if you care, that the face side goes DOWN on the router table.


Step 4 - Cut Rails to Exact...

Before you cope the ends, the rails must be cut down to their final length (I realize I'm overloading the use of "width" here). In my case that was 18 3/4", and I use my Incra Miter 1000 sled to ensure all of the rails are cut to the same dimension.

Step 5 - Cope Cut Rail Ends

I use a home-made sled when coping the rail ends. The sled is 1/2" hardboard with a square (90°) strip of scrap as the fence and more scrap as the backer material to prevent blowouts.



Mount the cope bit (the bit with the bearing in the middle). Set the height, remembering the height of the sled.  As before, bring the router fence in and make the fence and the bearing flush.




Make a test cut on some scrap and assemble the rail and stile -- face up. If the rail is proud of (higher than) the stile, lower the bit and retest.


Carefully make the cope cuts.  Keep the stock square and tight to the fence.
Go slow. Use a piece of scrap as a backer.




Step 6 - Check the Fit








Step 7 - Make Your Panels

I had some leftover Quarter Sawn White Oak (QSWO) veneer, so I used that for the panels. While a J-Roller works, a vacuum press will save you some sweat and shoulder pain.


This veneer was also "PSA enabled" - peel and stick - which means an adhesive had already been applied to the backing. When using other veneer, I've used plastic resin glue with great results.

[ No, I'm not veneering the back sides. Sue me.  ;) ]

Step 8 - Trim Panels

I recommend double checking everything - trim some scrap and test fit before ripping your panels to width. Once the width is OK, cross cut the panels to final length.












And test the fit. In my case, the veneer added enough thickness to the panel to make the fit a bit too tight.

So a dado blade and a thin cut on the backside removed a few thousandths of material to make the fit 'just right'.


You want the panel to slide into the groove with no binding. And you want a tight, rattle free fit.




Step 9 - Test Assemble

Check everything. Don't glue yet - finishing is much easier before the door is assembled.



Step 10 - Finish


When you first get into woodworking, you'll hear others grumble about how they hate finishing or detest sanding.  It's true. Finishing sucks. And, it's as much work as the wood working (woodworking work?) part.

Matching colors is usually difficult. So I've learned two things: one, the stain colors on the cans won't come close to the color you'll get in the end; and two, keep the lights dim and you won't notice.

Lowes was close by and they had a new product on the shelf that advertised a faster drying time, so I gave it a shot. I went with a quart of Rust-Oleum's Ultimate Wood Stain in the Wheat color. Interestingly enough, I don't see that shade, Wheat, on any website.


In any event, the panels, rails and stiles were stained and then, when dry, got two coats of General Finishes Arm-R-Seal Gloss, with a light 400-grit polish between coats to knock off the nibs.

Two final coats of General Finishes Arm-R-Seal Satin to knock the shine down and we're ready to assemble.

Step 11 - Assemble

Glue, clamp and square.

Voila.  Finito.





Since I expect the door to get some abuse, 23-gauge pins are put through the tenons.

Step 12 - Installation

Hang with piano hinge. Still needs hardware.

Sunday, February 1, 2015

Change Up - Moving Averages in 'C' 

Recently my inexpensive, archaic and inaccurate weather station started tossing out spurious values at a rate that grew from mildly annoying to "really pizzing me off!"


After putting a little bit of research in on the types of errors, it appeared that some sort of "flyer" control algorithm would help. The easiest of these is the Moving Average (also called Running Average, Rolling Average, Sliding Average, Sliding Window Average).


And I needed it to be coded in 'C'.  After 15 minutes of Googling, I decided to roll my own.


And here's the result.



Moving Average - Header File - Function Prototypes.


extern  MovingAverage_t     *MovingAverage_CreateMovingAverage( int maxElements );
extern  int   MovingAverage_AddValue( MovingAverage_t *maPtr, double value );
extern  double MovingAverage_GetAverage( MovingAverage_t *maPtr );
extern  int   MovingAverage_GetElementCount( MovingAverage_t *maPtr );
extern  int   MovingAverage_Reset( MovingAverage_t *maPtr );
extern  int   MovingAverage_DestroyMovingAverage( MovingAverage_t *maPtr );
extern  int   MovingAverage_Resize( MovingAverage_t *maPtr, int newMaxElements );
extern  int   MovingAverage_GetValues( MovingAverage_t *maPtr, double returnValues[] );
extern  void  MovingAverage_DebugDump( FILE *fp, MovingAverage_t *maPtr );


The first call to CreateMovingAverage takes a number of elements to hang onto. Pass in '5' and the moving average keeps 5 values around.  It returns a pointer to a structure of type "MovingAverage_t".  You hang onto that structure and use it in the subsequent calls.  

Here's the structure definition:

typedef struct MovingAverageStruct {
    BufferElement_t     *head;
    int                 maxElements;     // total size of the ring buffer (e.g. 10)
    int                 elementCount;    // how many elements (values) are in the buffer. Will range from [ 0 to maxElements ]
    int                 index;           // next spot to use for a new value in the buffer [ 0 .. maxElements ]
    double              runningSum;      // we keep a running total
    double              currentAverage;
} MovingAverage_t;



The values to be averaged are stored as a Linked List. Here's the definition for BufferElement_t:

typedef struct BufferElement {
    double                value;  // the value
    struct BufferElement  *next;  // needed for singly- or doubly-linked lists 

   
    unsigned    long     sequence;   // debugging only...
} BufferElement_t;


There are a number of ways to store the values; typically a ring buffer is used. I could have implemented the ring buffer using an array, but I wanted the ability to easily grow the buffer size at run time, if necessary. So some sort of dynamic structure is called for, and a linked list seemed like a reasonable starting point.


Troy Hanson's UTLIST.H

A small diversion to recognize a fellow by the name of Troy Hanson (see his wordpress blog).  

I've used Troy's code before and it works wonderfully.  His code is here.  

The most amazing thing about Troy's work is that he's done it all using C Preprocessor Macros! 


That means there's no runtime dependency on a library, shared or static. Just include his header file, follow his coding conventions and you've got a fully functional Linked List as easy as 1, 2, 3!

Troy is, far and away, the best preprocessor guru I've run across. Thank you, Troy, for your great work.

Back to the code: the only thing Troy's macros need for a singly linked list is a variable named "next" that points to the next list item. Don't change the variable name -- it must be called "next".

Here's the complete header file:

/*
 * File:   movingaverage.h
 * Author: pconroy
 *
 * Created on January 19, 2015, 8:46 AM
 */

#ifndef MOVINGAVERAGE_H
#define    MOVINGAVERAGE_H

#ifdef    __cplusplus
extern "C" {
#endif

//
// Going to use Troy Hanson's Excellent LinkedList Macros
#include <stdio.h>
#include <UtHash/utlist.h>

#ifndef     FALSE
# define    FALSE       0
# define    TRUE        (!FALSE)
#endif

#define     MASUCCESS   0
#define     MAFAILURE   1


/*
 * You can use any structure with these macros, as long as the structure contains a next pointer.
 * If you want to make a doubly-linked list, the element also needs to have a prev pointer.
 */

typedef struct BufferElement {
    double                  value;
    struct BufferElement    *next;       /* needed for singly- or doubly-linked lists */
   
    unsigned    long        sequence;   // debugging
} BufferElement_t;



typedef struct MovingAverageStruct {
    BufferElement_t     *head;
    int                 maxElements;     // total size of the ring buffer (e.g. 10)
    int                 elementCount;    // how many elements (values) are in the buffer. Will range from [ 0 to maxElements ]
    int                 index;           // next spot to use for a new value in the buffer [ 0 .. maxElements ]
    double              runningSum;      // we keep a running total
    double              currentAverage;
} MovingAverage_t;


extern  MovingAverage_t     *MovingAverage_CreateMovingAverage( int maxElements );
extern  int             MovingAverage_AddValue( MovingAverage_t *maPtr, double value );
extern  double          MovingAverage_GetAverage( MovingAverage_t *maPtr );
extern  int             MovingAverage_GetElementCount( MovingAverage_t *maPtr );
extern  int             MovingAverage_Reset( MovingAverage_t *maPtr );
extern  int             MovingAverage_DestroyMovingAverage( MovingAverage_t *maPtr );
extern  int             MovingAverage_Resize( MovingAverage_t *maPtr, int newMaxElements );
extern  int             MovingAverage_GetValues( MovingAverage_t *maPtr, double returnValues[] );
extern  void            MovingAverage_DebugDump( FILE *fp, MovingAverage_t *maPtr );


#ifdef    __cplusplus
}
#endif

#endif    /* MOVINGAVERAGE_H */


Let's quickly revisit the functions exposed by the code. We already covered CreateMovingAverage().

AddValue() is called when you have a new value (of type double) to add to the running average. If the buffer is not full (element count < max elements), the value is appended to the linked list. If the buffer is full, the oldest element is removed and the new value appended. The buffer never holds more than maxElements values.

GetAverage() returns the current moving average in the buffer. There's an assert() in the code to make sure the element count isn't zero.  Don't call it until there's at least one value in the list.

GetElementCount() returns the number of values in the list. It'll range from 0 to maxElements passed into the create call.

Reset() deletes the linked list of values and resets all counters to zero. It does NOT resize the list - maxElements stays the same as when the Create() call was made.

DestroyMovingAverage() deletes the list, deletes any memory allocated and sets the maxElements value to zero.  Pair it with the Create() call and call it when you're done using the moving average.

Resize() allows you to grow the ring buffer - you can make the buffer larger to store more values. Currently you can't shrink it. (Until I convince myself of the need, and of a way to keep 'n' values when shrinking, it isn't implemented.)

GetValues() lets you pass in a pointer to an array of doubles. The values are pulled out of the list and passed back.  For debugging.

DebugDump() dumps the contents of the structure to a file for logging and debugging.
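To show how those calls fit together, here's a minimal usage sketch. The window size and the readings are made up for illustration; the 98.6 plays the part of a flyer.

#include <stdio.h>
#include "movingaverage.h"

int main( void )
{
    // Keep the last 5 readings
    MovingAverage_t *ma = MovingAverage_CreateMovingAverage( 5 );
    if (ma == (MovingAverage_t *) 0)
        return 1;

    double  readings[] = { 71.2, 71.4, 70.9, 71.1, 98.6, 71.3 };
    for (int i = 0; i < 6; i += 1) {
        MovingAverage_AddValue( ma, readings[ i ] );
        printf( "after %5.1f   count: %d   average: %6.2f\n",
                readings[ i ],
                MovingAverage_GetElementCount( ma ),
                MovingAverage_GetAverage( ma ) );
    }

    MovingAverage_DebugDump( stdout, ma );
    MovingAverage_DestroyMovingAverage( ma );
    return 0;
}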


That's the header file. Let's move onto the source code:


/*
 * File:   movingaverage.c
 * Author: pconroy
 *
 * Created on January 15, 2015, 2:50 PM
 */

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#include "movingaverage.h"


static  unsigned    long    globalSequenceCounter = 0UL;

//------------------------------------------------------------------------------
//
//  Call this function first to obtain a pointer to a Moving Average structure
//  The buffer will be ready to take new values after this call
MovingAverage_t    *MovingAverage_CreateMovingAverage (int maxElements)
{
    MovingAverage_t    *maPtr = malloc( sizeof( MovingAverage_t ));
    if (maPtr != (MovingAverage_t *) 0) {
        maPtr->head = (BufferElement_t *) 0;
        maPtr->elementCount = 0;
        maPtr->index = 0;
        maPtr->runningSum = 0.0;
        maPtr->currentAverage = 0.0;
       
        maPtr->maxElements = maxElements;               // size of the ring buffer
    }
   
    return maPtr;
}

//------------------------------------------------------------------------------
//
//  Add a new value (of type double) to the list. This will force a recalculation
//  of the average
int     MovingAverage_AddValue (MovingAverage_t *maPtr, double value)
{
    if (maPtr != (MovingAverage_t *) 0) {
        BufferElement_t     *newElement = malloc( sizeof( BufferElement_t ));
        if (newElement != (BufferElement_t *) 0 ) {
            newElement->value = value;
            newElement->sequence = globalSequenceCounter;
            globalSequenceCounter += 1UL;

            //
            // If there's still room in the buffer, if we haven't wrapped around yet
            if (maPtr->elementCount < maPtr->maxElements) {
                LL_APPEND( maPtr->head, newElement );
                maPtr->elementCount += 1;
                maPtr->index += 1;
 

            } else {
                //
                // Buffer is full, time to wrap around and over write the value at "index"
                //   Where will this new element go? Here:
                int newElementSpot = (maPtr->index % maPtr->maxElements);
               
                // First - advance to the element and get value that's already there
                BufferElement_t     *oldElement = (maPtr->head);
                for (int i = 0 ; i < newElementSpot; i ++)
                    oldElement = oldElement->next;
               
                //
                // Remove the value from the running total
                maPtr->runningSum -= oldElement->value;
               
                //
                // Swap out the old element with the new
                LL_REPLACE_ELEM( maPtr->head, oldElement, newElement );
               
                //
                //  Update the index, but it needs to wrap around
                maPtr->index = ((maPtr->index + 1) % maPtr->maxElements);
               
                //
                // Free up the memory used by the old, discarded element
                free( oldElement );
            }
           
            maPtr->runningSum += value;
            assert( maPtr->elementCount != 0 );
            maPtr->currentAverage = ( maPtr->runningSum / maPtr->elementCount );
                       
        } else {
            return MAFAILURE;
        }
    }
   
    return MASUCCESS;
}

//------------------------------------------------------------------------------
//
//  Return the average of all of the values in the buffer
double  MovingAverage_GetAverage (MovingAverage_t *maPtr)
{
    assert( maPtr->elementCount != 0 );
    return maPtr->currentAverage;
}

//------------------------------------------------------------------------------
//
//  Return how many values are currently in the buffer
int  MovingAverage_GetElementCount (MovingAverage_t *maPtr)
{
    return maPtr->elementCount;
}

//------------------------------------------------------------------------------
//
//  Clear the contents of the list of values, free the memory, reset all sums and counts
//  to zero.  Do NOT free the head of the list, do NOT resize the list
int  MovingAverage_Reset (MovingAverage_t *maPtr)
{
    BufferElement_t *ptr1;
    BufferElement_t *ptr2;
   
    LL_FOREACH_SAFE( maPtr->head, ptr1, ptr2 ) {
        LL_DELETE( maPtr->head, ptr1 );         // unlink before freeing so 'head' ends up NULL
        free( ptr1 );
    }
   
    maPtr->elementCount = 0;
    maPtr->currentAverage = 0.0;
    maPtr->index = 0;
    maPtr->runningSum = 0.0;
   
    return MASUCCESS;
}

//------------------------------------------------------------------------------
//
//  Call Reset() to clear the list and free the element memory, then free the
//  structure itself.  The pointer should not be used after this
int     MovingAverage_DestroyMovingAverage (MovingAverage_t *maPtr)
{
    MovingAverage_Reset( maPtr );       // frees every value element; 'head' is left NULL

    maPtr->maxElements = 0;
    free( maPtr );                      // free the structure allocated by Create()
   
    return MASUCCESS;
}

// -----------------------------------------------------------------------------
int     MovingAverage_Resize (MovingAverage_t *maPtr, int newMaxElements)
{
    //
    // For now - we're only going to allow larger, not smaller
    assert( newMaxElements > maPtr->maxElements );
   
   
    //
    //  If we're increasing the size of the list - that's easy to do!
    if (newMaxElements > maPtr->maxElements) {
        maPtr->maxElements = newMaxElements;
        //
        // We're Not going to adjust the current index
       
    } else {
        //
        // Shrinking is harder! -- Save the "newMaxElement" values, and drop the rest
        //
    }
   
    return MASUCCESS;
}

//------------------------------------------------------------------------------
int     MovingAverage_GetValues (MovingAverage_t *maPtr, double returnValues[])
{
    BufferElement_t *ptr;
    int             index = 0;
   
    LL_FOREACH( maPtr->head, ptr ) {
        returnValues[ index ] = ptr->value;
        index += 1;
    }
   
    return maPtr->elementCount;
}

//------------------------------------------------------------------------------
void    MovingAverage_DebugDump (FILE *fp, MovingAverage_t *maPtr)
{
    fprintf( fp, "Current Structure Values\n" );
    fprintf( fp, "  Element Count: %d\n", maPtr->elementCount );
    fprintf( fp, "  Max Elements : %d\n", maPtr->maxElements );
    fprintf( fp, "  Index        : %d\n", maPtr->index );
    fprintf( fp, "  Average      : %f\n", maPtr->currentAverage );
    fprintf( fp, "  Running Sum  : %f\n", maPtr->runningSum );

    BufferElement_t *ptr;
   
    LL_FOREACH( maPtr->head, ptr ) {
        fprintf( fp, "      Seq: %lu  Value: %f\n", ptr->sequence, ptr->value );   
    }
    fprintf( fp, "Global Sequence Counter: %lu\n", globalSequenceCounter );
}



#if 0
int main(int argc, char** argv)
{
    MovingAverage_t    *aMovingAverage = MovingAverage_CreateMovingAverage( 3 );
    if (aMovingAverage != (MovingAverage_t *) 0) {
        for (int i = 0; i < 10; i += 1) {
            (void) MovingAverage_AddValue( aMovingAverage, (double) i );
        }
    }
    MovingAverage_DebugDump( stdout, aMovingAverage );
   
   
    //
    // Make it bigger
    printf( "Make the buffer bigger.\n" );
    MovingAverage_Resize( aMovingAverage, 20 );
    for (int i = 100; i < 110; i += 1) {
        (void) MovingAverage_AddValue( aMovingAverage, (double) i );
    }
   
    printf( "After making the buffer larger.\n" );
    MovingAverage_DebugDump( stdout, aMovingAverage );
 
   
    MovingAverage_DestroyMovingAverage( aMovingAverage );
    return (EXIT_SUCCESS);
}
#endif

That's it.  In my weather station code, I created a function called "checkAndAddValue()" for each value emitted by the weather station:

static
int     checkAndAddValue (weatherStats_t  *wsPtr, int statType, double value, double percentage)


Its job is to check the value against the moving average. I pass in a percentage variance threshold; values that fall outside the threshold are discarded.

For example, the interior temperature check/add function looks like:

    if (!checkAndAddValue_ITemp( wsPtr, datum->indoor.temp, 15.0 )) {
        Logger_LogWarning( "Reading's indoor temperature appears to be in error. Will be discarded. Value [%f]\n", datum->indoor.temp );
        return FALSE;
    }



The incoming interior temperature reading has to be within 15% of the moving average or it's discarded.

The percentage is increased for outside temperature values since those can move a bit faster:

    if (!checkAndAddValue_OTemp( wsPtr, datum->outdoor.temp, 30.0 )) {
        Logger_LogWarning( "Reading's outdoor temperature appears to be in error. Will be discarded. Value [%f]\n", datum->outdoor.temp );
        return FALSE;
    }
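For what it's worth, here's a rough sketch of what a checkAndAddValue-style helper could look like. It is not my production code: the real function takes the weatherStats_t pointer and a statType to pick the right moving average; this sketch works on a single MovingAverage_t just to show the threshold logic, and the "accept the first few readings unconditionally" rule is my own assumption.

#include "movingaverage.h"

//
// Sketch only - check a new reading against its moving average; discard readings
// that are more than 'percentage' percent away from it
static
int     checkValueAgainstAverage (MovingAverage_t *maPtr, double value, double percentage)
{
    //
    // Until the buffer has a few values in it, accept everything
    if (MovingAverage_GetElementCount( maPtr ) < 3) {
        MovingAverage_AddValue( maPtr, value );
        return TRUE;
    }

    double  average = MovingAverage_GetAverage( maPtr );
    double  delta   = value - average;
    if (delta < 0.0)
        delta = -delta;

    //
    // Note: a percentage-of-average test gets dicey near zero (outdoor temps!),
    // so an absolute limit may make more sense for some stats
    double  limit = (percentage / 100.0) * (average < 0.0 ? -average : average);
    if (delta > limit)
        return FALSE;           // looks like a flyer - don't pollute the average

    MovingAverage_AddValue( maPtr, value );
    return TRUE;
}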



Maybe the next post will be about the simple logging framework I use.


Thursday, December 25, 2014

(#18) More Esper Tidbits


As I continue to use and explore the capabilities of Esper, I'm going to put down some of the answers to questions that I ran into.

The Esper documentation is certainly comprehensive but, to me, it's not as approachable for a novice as I think it could be.

So I'm coding, playing and using the Netbeans debugger to see how things work.

Note, take all of the code snippets with a grain of salt. I'm not claiming them to be correct nor proper nor efficient.  They're just examples of my discoveries.

Recall that I'm using Esper to be the CEP engine for a Home Automation project. My sensors around the house emit MQTT packets periodically. There are door sensors, motion sensors, weather sensors, a Nest Thermostat, a Caller ID box and so on. 

Esper's job is to take the MQTT packet data from the sensors and try to draw conclusions about what's going on in the house. Did someone just leave? Did someone just come home? Did my spouse just crank up the thermostat to some hellish level and single-handedly impact, adversely, the natural gas stores in Colorado?


While the Esper documentation is great about providing examples of the EPL, I've also been struggling with what the handler code should do. The handler, "listener" in Esper parlance, is the code that's invoked when the EPL finds a match.


Esper Listener Examples


Q: For the EPL "select count(*)..." what does the listener code look like?

A: The attribute to get is named "count(*)" and is of type long.

For example:
String  subQuery1 = "SELECT COUNT(*) FROM WS2308WeatherStatusEvent wse WHERE wse.oTemp < cast(50.0, double)";

The Listener:
    @Override
    public void update(EventBean[] newData, EventBean[] oldData)
    {
         Long number = (Long) newData[ 0 ].get( "count(*)" );

An easier way would probably have been to modify the select clause:
String  subQuery1 = "SELECT COUNT(*) as numEvents FROM WS2308WeatherStatusEvent wse WHERE wse.oTemp < cast(50.0, double)";

The cast is there because I didn't just default all floating point types in Java to double. I used floats too.  Without the cast, Esper complained, at compile time, about the type mismatch.


Q: The Solution Patterns document, under the section "How do I detect N events in X seconds" has the example:  "select count(*), window(*) from MyEvent(somefield = 10).win:time(3 min) having count(*) >= 5 output first every 3 minutes".   

If I change it to reflect my needs:

anEPLQuery = "SELECT COUNT(*), WINDOW(*) FROM NestStatusEvent( hvacStatus = 'HEATING' ).win:length(5) HAVING COUNT(*) >= 5 OUTPUT FIRST EVERY 5 MINUTES";  

then what does the Listener code need to do?

A: Here's what's working for me.  To serve as a tutorial, the snippet takes no shortcuts. Some of the statements could be combined.

class FurnaceRunningListener implements UpdateListener
{
  @Override
  public void update(EventBean[] newData, EventBean[] oldData)
  {
    logger.info( "FurnaceRunningListener - update called" );

    try {
      //
      // When the EPL is simple "SELECT * FROM EventObject" - you
      // use the getUnderlying() method. For example,
      //   NestStatusEvent  eventOne =
      //       (NestStatusEvent) newData[ 0 ].getUnderlying();

      //
      // For this, more complex EPL
      //   "SELECT COUNT(*), WINDOW(*) FROM
      //      NestStatusEvent( hvacStatus = 'HEATING' ).win:time( 5 min )
      //      HAVING COUNT(*) >= 5 OUTPUT FIRST EVERY 5 MINUTES";
      // the EventBean[] coming in holds a map of properties
      EventType et = newData[ 0 ].getEventType();
      String  propertyName1 = et.getPropertyNames()[ 0 ];    // "count(*)"
      String  propertyName2 = et.getPropertyNames()[ 1 ];    // "window(*)"

      // propertyName1 is "count(*)" - get the value
      long  countValue = ((Long) (newData[0].get( propertyName1 ))).longValue();

      // propertyName2 is "window(*)" - the events are coming in as an array
      NestStatusEvent[] triggeringEvents = (NestStatusEvent[]) newData[0].get( propertyName2 );

      // How many events? Our EPL asked for (at least) 5
      int                 numEvents = triggeringEvents.length;

      // so the first event object is at [0], the last at [ numEvents - 1 ]
      NestStatusEvent     firstEvent = triggeringEvents[ 0 ];
      NestStatusEvent     lastEvent  = triggeringEvents[ numEvents - 1 ];
      //
And go on from there.

BPM - Getting it off my Chest: Microservices and APIs 

I mentioned in post #17 that I had just come back from a conference on application architecture, hosted by a national vendor with a good reputation for quality. I'm expecting a call from the account rep asking me how I enjoyed the conference. I think what I'm simply going to say is that "the coffee was good."


And leave it at that.


My first criticism was simply that the conference was more "bread than meat" -- the topics were only covered in the most cursory manner.  They were all the proverbial 50,000' fly-by.


I get it - the sessions are 45 minutes; you cannot get into too much detail in 45 minutes.  But still, that wasn't long enough for me.  I don't think I'll go back to this conference because of this alone.

But there was something else at this conference. What's a polite synonym for pandering? By the way, BPM, for this post, does not stand for Business Process Management.


Buzzwords de' decade.

Microservices and APIs.  Ugh. Every damned slide, poster, badge, ornament and coffee cup holder was emblazoned with Microservices and APIs.

Really? Really?  Really?  Give me a break.


So just what is an API anyway?

This one was puzzling.  We've been defining, designing, implementing and using APIs for over 40 years.  What was so different about the 2014 definition to warrant all this hype?

What's in a shared library or DLL?  An API.  How did the client / server applications we created in the 80s interact? Through APIs.  Apollo Network Computing, OSF DCE, CORBA, SOA -- all laden with APIs.

So what's the difference between a service and an API anyway?  Only one analyst tackled this one head on and answered "Nothing." Thank you!

I'd venture a guess that the vast majority of attendees didn't know or didn't care about this.  I did.  Is it just the malcontent of an old curmudgeon?  I don't think so. I walked out feeling like I was being hyped.

I expect this kind of hype from a vendor.  But I thought it quite beneath the analysts at this conference. 

I found it distasteful.


Microservices

Goodness I don't know where to start.  Maybe I'll start off with a couple of opinions.

Opinion Number 1 - There's nothing new here

Service based design and programming is as old as dirt.  

Ok - as old as computer dirt.  I stumbled across the early trails, then called Network Computing, in the mid-80s with such offerings as Sun's RPC/XDR and NFS products.

A few years later and the design approach had gathered enough of a following to warrant OSF's DCE offering.  So it's at least 35 years old, and probably older.


Again - this isn't a rant about the old farts not getting credit for something.  This is a warning. Services, micro or macro, are not new. 

The service-based shoals of application development are littered with the bodies of those who've tried and failed to make these things work.  Which leads to Opinion Number 2.


Opinion Number 2 - There were 54 problems we ran into when designing applications using services.  Microservices takes that number down to 53.

When creating non-trivial applications using services we ran into:
  • language independent service discovery
  • transaction management
  • auditing / logging
  • orchestration / choreography
  • common context (enterprise object models)
  • contractual obligations / promises
  • testing
  • granularity
  • security
  • error handling
And 44 others.  Just Google "SOA implementation problems" for a taste of what we ran into.
With microservices there's a half-hearted attempt to tackle one: granularity.  Make 'em small.  How small?  Can't tell you exactly.  Make 'em small enough but not too small. Make 'em just the right size.

Sweet.  Thanks for the help.
You want to make a real impact - tackle error handling. That should be simple enough.


Bull Hockey

One analyst tossed up a slide showing that "applications of the future will be an assembly of microservices."  It was difficult to remain seated.  I've seen that slide, in one form or another, for 20+ years.




Ok - show me.
Show me one non-trivial application constructed from an assembly of microservices.
Show me one, non-trivial, business relevant, customer facing application:

  • assembled from 6 or more microservices
  • with microservices created by 2+ vendors

Sigh.
Never mind...


So, what?  Give up?

When I was a kid, about 7, I came to the conclusion that we had invented swearing. My generation had invented cursing. I didn't hear it at home (much) but there was this 5th grader, Jeremy, that could make a sailor blush.

Since that was my first real exposure to the art of cussing, I assumed that Jeremy, the 5th grader, had started it all.

Around 2005, give or take, I had a young gun from Microsoft come into work for a presentation (on BizTalk, I think) and state categorically that Microsoft had invented distributed computing.


If this is your first exposure to service-based development, you might think you're blazing uncharted territory too.  That's fine.  Please let me repeat: this isn't the malcontent of an old curmudgeon who feels like we didn't get our share of credit. I personally had nothing to do with the science or design of service-based computing.  I just hopped on the bandwagon.


If I have seen further it is by standing on ye sholders of Giants.

Do your homework.  Don't make the same mistakes or run needlessly into the same obstacles we did.


Don't give up.  Keep working on it.

But I'll close with a suggestion that you keep a George Santayana quote in mind about remembering the past.



Good luck.
May you have more success than we did.

Tuesday, December 9, 2014

(#17) Well That was Unexpected - CEP Legitimized?


I'm at a Nerd Conference in Vegas. There's about 2000 nerds out here.  Mostly IT nerds. IT nerds from big companies. IT nerds from big companies talking about problems they have, that I used to have, that I no longer have.

And that makes me smile.

Much to my surprise, Event Processing made the bill.  Not as high on the charts as, oh say, Microservices and APIs. But there were three or so sessions dedicated to event processing in business.  The TLA CEP (Complex Event Processing) was scattered on a few slides too.

And that reminded me I hadn't come back to close off the situation presented in post #16. In that post, I said that I had not been getting the results I expected, nor wanted, from Esper's CEP engine. When I posted my question to the Esper discussion groups, there were several responses saying I should abandon my EPL query and switch over to Match Recognize.


The Match Game

I don't know the history of this syntax. You'll find references to "match recognize SQL" in the documentation for the Oracle 12c DBMS. Whether Esper borrowed it, or it's an extension that Oracle came up with, or it's an emerging standard -- I don't know. What I do know is that it worked.




Here's the new query using the Esper's Match Recognize syntax:

   anEPLQuery = "SELECT * FROM HHBAlarmEvent " +
            " MATCH_RECOGNIZE (" +
            " MEASURES A as a, B as b, C as c" +
            " PATTERN (A B C) " +
            " DEFINE " +
            "  A as A.macAddress = '000000B357' and A.deviceStatus = 'OPEN', "  +
            "  B as B.macAddress = '0000012467' and B.deviceStatus = 'OPEN', "  +
            "  C as C.macAddress = '000007AAAF' and C.deviceStatus = 'MOTION'"  +
                        ")";
   EPStatement arrivalStatement_EPL4 = cepAdm.createEPL( anEPLQuery );

      


Stare at it a bit, and you can kind of tease it out.  The Pattern we're going to recognize is (A B C).  The pattern syntax is regex based.  I'm looking for Event A, then Event B, then Event C. The Measures A as a, B as b means that my listener method can pull out the events using "a", "b" and "c".  Like this:

        HHBAlarmEvent  eventOne = (HHBAlarmEvent) newData[ 0 ].get( "a" );
        HHBAlarmEvent  eventTwo = (HHBAlarmEvent) newData[ 0 ].get( "b" );
        HHBAlarmEvent  eventThree = (HHBAlarmEvent) newData[ 0 ].get( "c" );
 

And the DEFINE part tells the engine how to recognize a match. It defines the conditions that trigger a match. In my case, Event A is when the Garage Door Sensor (MAC address '0B357') goes to device status OPEN. Then Event B is when the Door to the Garage (MAC address '12467') also goes to status OPEN. And finally, those two are followed by Event C, when the Motion Detector (MAC address '7AAAF') detects MOTION.

I'll post the results of this change from Esper EPL (A -> B -> C) to Match Recognize later, but it worked!

I started getting the results I thought I should get.

The syntax for Match Recognize looks a bit harder to master, but it's going to be worth mastering.

 

Sunday, October 12, 2014

(#16) Writing the Wrongs - Esper Patterns 

I made a mistake.  After watching my Esper code run for several weeks I was consistently seeing triggers on event patterns that I didn't expect.  It's not that the triggers were wrong -- I'm not asserting a bug in Esper -- it's that I wasn't seeing results that I expected to see.


The ABC's 

Let's simplify what I'm after. If you recall, I'm looking for a Garage Door Open event, a Door Open event and then a Motion event as a pattern that I'm interested in.  

Simplify this to event A, event B then event C, where event A is the Garage Door Open Event, event B is the Door Open event and event C is the motion event.

I'm interested in A, then B, then C.  In Esper's EPL parlance, this is noted as "A -> B -> C".  Which is useful notation to adopt.


The Real World Intrudes

What was happening, in my house, occasionally was this:

  1. Garage Door Opens (e.g. A-1)
  2. Garage Door closes (don't care about this event)
  3. Time passes
  4. Garage Door Opens again (e.g. A-2)
  5. Door Opens (B-1)
  6. Motion Sensor triggers (C-1)
In my pseudo-EPL notation I was seeing A1 -> A2 -> B1 -> C1.

What was I expecting?  I was expecting that my Esper would trigger on the last three: A2 -> B1 -> C1.   And that's not what I was seeing.  In my update listener code, I saw that the three events that were coming in were A1, B1, and C1.  I was getting the first A event, not the last as I had thought I'd see.

The Fault, Dear Brutus

I've just finished up an hour of playing with Esper and patterns. As you'd surmise, the results I'm getting are because of the EPL I used.  So let's explore a bit more of the EPL variations I tried and the results I got.


The test cases I used were all variations on the arrivals of A, B and C events with delays in between.  Test case 3, to pick one, is noted as: "A1, A2, d10, A3, B1, C1".



In English this would be: 
  • send A event (A1)
  • send A event (A2)
  • Delay 10 seconds
  • send A event (A3)
  • send B event (B1)
  • send C event (C1)

I created 4 Esper patterns in EPL and ran the application.

EPL-1
SELECT * FROM PATTERN
[ every 
 (eventOne = HHBAlarmEvent( macAddress = '000000B357', deviceStatus = 'OPEN' ) 
-> eventTwo = HHBAlarmEvent( macAddress = '0000012467', deviceStatus = 'OPEN' ) 
-> eventThree = HHBAlarmEvent( macAddress = '000007AAAF', deviceStatus = 'MOTION' )) where timer:within( 2 minutes )];

EPL-1 can be noted in my shorthand as [ every ( A-> B -> C) where timer:within ]


EPL-2
SELECT * FROM PATTERN
[ (eventOne = HHBAlarmEvent( macAddress = '000000B357', deviceStatus = 'OPEN' ) 
-> eventTwo = HHBAlarmEvent( macAddress = '0000012467', deviceStatus = 'OPEN' ) 
-> eventThree = HHBAlarmEvent( macAddress = '000007AAAF', deviceStatus = 'MOTION' )) where timer:within( 2 minutes )];

EPL-2 can be noted in my shorthand as [  ( A-> B -> C) where timer:within ]



EPL-3
SELECT * FROM PATTERN
[ ( every eventOne = HHBAlarmEvent( macAddress = '000000B357', deviceStatus = 'OPEN' ) 
-> eventTwo = HHBAlarmEvent( macAddress = '0000012467', deviceStatus = 'OPEN' ) 
-> eventThree = HHBAlarmEvent( macAddress = '000007AAAF', deviceStatus = 'MOTION' )) where timer:within( 2 minutes )];



EPL-3 can be noted in my shorthand as [  (  every A-> B -> C) where timer:within ]



EPL-4
SELECT * FROM PATTERN
[ every eventOne = HHBAlarmEvent( macAddress = '000000B357', deviceStatus = 'OPEN' ) 
-> eventTwo = HHBAlarmEvent( macAddress = '0000012467', deviceStatus = 'OPEN' ) 
-> eventThree = HHBAlarmEvent( macAddress = '000007AAAF', deviceStatus = 'MOTION' ) where timer:within( 2 minutes )];

EPL-4 can be noted in my shorthand as [ every A-> B -> C where timer:within ]


You can see I'm moving the "every" keyword and the parentheses around to see what results I get.


The Test Cases

With the four EPL patterns ready, I modified the code to read the test patterns from a file so I could add real-world timings. Then I created six test cases:



(Recall "d10" is my short hand for delay (wait) 10 seconds, d123 means delay 123 seconds.)

All four EPL patterns will be run against the test cases.  Next, I thought about the results that I wanted.  For example on test case 3, what I'd like to get is a trigger on the event sequence A3 -> B1 -> C1.

So I added a column to indicate what I was hoping to see from Esper:



Now, we could certainly disagree over what's to be expected or desired. Your needs / expectations could be different from mine.  For example, in test case 5 you might want to get notified on A1->B1->C1 or A1->B3->C1 or A3->B3->C3.  What you expect is up to you.  I've just put down what I think I'd want to meet my needs.

The Results - Grouped by EPL

EPL-1

Recall EPL-1 is [ every ( A-> B -> C) where timer:within ]. Let's look at the results:



Green denotes that I got what I was wanting to get.  Red means that I got something that I didn't want. Please, please, please note - red does not mean the results are wrong. Red means that the EPL I used didn't give me the results I was hoping for.

Think about Test Case 1, where I was hoping to get A3->B1->C1 and instead I got triggered on A1->B1->C1.  It's not wrong - it's a perfectly reasonable response from Esper - it's just not what I wanted.  So the chore becomes "what EPL will produce the results I'm after?"

Let me beat this horse a bit more - Esper responded. The EPL worked; my updateListener method was called.  But when I examined the three events that triggered the listener object, sometimes I got the event objects I wanted (green) and sometimes I did not (red).

Let's keep going


EPL-2

Recall EPL-2 removes the "every" keyword: [ ( A-> B -> C) where timer:within ]. The results were:




Again - green means the results match what I expected; red means I got something other than what I wanted; and now yellow means the trigger did not fire - the updateListener method was not called.


The Yellow results on Test Case 5 puzzle me.  Yes, the time delta between A1 and C1 is outside the window. No trigger should fire.  But A3, B3 and all of the C events are within the two minute window and best I can tell, the updateListener did not get called.  That strikes me as odd.


EPL-3

Recall EPL-3 puts the "every" keyword back, but inside the parentheses: [ ( every A-> B -> C) where timer:within ]. The results were:


Interesting.  More cases where no trigger fired.  Again, not what I was expecting.

EPL-4

Finally, EPL-4 removes the parentheses: [ every A-> B -> C where timer:within ]. The results were:






Conclusions

First I need to read more and get a better understanding of the EPL pattern syntax to see if my errors are obvious.  

The Community Responds!  Switch your EPL to use The Match Recognize Syntax!