Monday, February 17, 2020

(#34) What's the Fun in That? (Part Two)  

Keeping the Water Pipes in the RV Warm  


With the over-complicated electrical timer in hand, it is time to do some testing!

Our supplies include: a six-foot section of 3/4" PEX pipe from Home Depot, 1" pipe wrap from Jax Ranch and Home Store, two inexpensive temperature sensors from AliExpress, and water from the tap.

And, of course, the Roof Heat Cable from Lowe's.

The heat cable is not wrapped around the PEX pipe; it's simply laid lengthwise along the pipe and taped (about every 12") to the outside of the PEX.

One temperature sensor is taped next to the heat cable.

The pipe is filled with water.
 


The PEX is wrapped with pipe insulation, and the second temperature sensor is placed about halfway down the inside of the pipe.


And the whole assembly is placed outside on a winter's day - ambient temperature was in the mid-20s (°Fahrenheit).



Using the Raspberry Pi-based electrical relay system built in Part One, the heat cable is then energized and temperatures are measured periodically.


 

Power was applied in a repeating cycle: 30 minutes on, 30 minutes off, 30 minutes on, 30 minutes off -- for three hours.
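(The software that drives the relay gets its own post, but here's a minimal sketch of that duty cycle, assuming the Pi4J library and a made-up pin assignment; my actual wiring and code may differ.)

import com.pi4j.io.gpio.*;

//
// Hypothetical pin assignment: heat-cable relay channel on Pi4J's GPIO_01.
// Many relay boards are active-low; swap high()/low() if yours is.
GpioController gpio = GpioFactory.getInstance();
GpioPinDigitalOutput relay = gpio.provisionDigitalOutputPin(
        RaspiPin.GPIO_01, "HeatCable", PinState.LOW );

final long HALF_CYCLE_MS = 30L * 60 * 1000;   // 30 minutes

for (int cycle = 0; cycle < 3; cycle++) {     // three hours total
    relay.high();                             // heat cable energized
    Thread.sleep( HALF_CYCLE_MS );
    relay.low();                              // heat cable off
    Thread.sleep( HALF_CYCLE_MS );
}
gpio.shutdown();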

Testing shows that the cable starts warming immediately.  And within a minute or two the water is warming up inside the PEX.

Power measurements show that the cable immediately drew 300 watts (about 2.5 amps at 120 volts) and varied very little as long as power was applied.

I repeated this test seven times; there was little variation.


I also ran one test with the pipe empty -- no water -- to see just how hot things could possibly get.  Both temperature sensors reached 150°F quickly and stayed there for 90 minutes. Hot but still below the 180-200°F PEX limit.

Conclusion

My conclusion: the 300-watt Heat Cable could be a solution for keeping the exposed PEX pipes in the motorhome from freezing.

Energizing the heat cable for 30 minutes on, then 30 minutes off, keeps the water inside the pipe well above freezing without getting the pipe too hot.

The upward trend of both heating plots shows an obvious optimization: lengthen the time the cable is off.

Next up: the software that makes this thing work!
 

Monday, February 10, 2020

(#33) What's the Fun in That?  

It's Cold in the RV!  

We've got a problem.  When we camp in the motorhome and temperatures drop well below freezing, the water lines freeze. Some astute readers are immediately offering solutions: stay home, camp in warmer climes, sell the motorhome.

But what's the fun in that? Those are easy, simple, cost-effective and highly reliable solutions. And no fun.

The manufacturer of our motorhome clearly does not want their product used when temperatures dip below freezing. The water lines are exposed underneath, and essential (read: expensive) parts are also exposed for maximum damage potential.

Foolishly, I've crawled underneath and wrapped 99% of the exposed pipes with pipe insulation. At 14° Fahrenheit, I found out that the one percent I missed was significant. We woke up to no water flow.

My fellow northern-hemisphere-equally-foolish-motorhome-owning readers will proffer a reliable, cost-effective solution at this point.  Carry ten bucks of RV antifreeze and spend fifteen minutes doing a quick 'winterize' step before going to bed.  You're right. That would work and be cheap.

But what's the fun in that?

Nope - let's see if we can come up with a way to over-engineer a solution here.  Let's take a cue from the Heated Water (inlet) hoses you can buy and see if we can keep the lines from freezing by keeping them warm.



Heat Tape and Heat Cable

After spending perhaps eight hours, in aggregate, reading blog posts and manufacturer product sheets, I've learned there are at least a dozen variations on 'heat tape'.  Heat tape is an electrically heated wire that you wrap around the exposed pipe and plug into an electrical source. The heat tape, um, heats. It gets hot. The heat keeps the pipe from freezing.

To cut to the chase, there are a handful of challenges with using heat tape. First, no manufacturer recommends their product for the use I'm aiming at. Nope. The motorhome pipes are PEX, not copper, and the tape gets too hot.  Second is the expense. Third is the power consumption.  Those are just the top three.

There's a similar product that, the best I can tell, is called Roof Heat Cable. It's similar in function but is apparently aimed at homeowners whose roofs are susceptible to ice dams.  [ At this point, if you're a native Texan or Floridian -- i.e., someone who knows not of ice dams -- feel free to Google. ]

I had some heat cable. About 60'. Unused since I added ice-dam prevention sheathing when we re-roofed the house.


300 Watts All of the Time

One cold December day, I grabbed the Heat Cable and my Kill-A-Watt meter and plugged it all in. The Heat Cable I bought is rated at 5 watts per foot, so my 60-foot cable should consume 300 watts. The Kill-A-Watt meter confirmed it.

Some more expensive products can be thermostatically controlled. Not mine. When plugged in, it's on all of the time.

I filled a bucket with water, coiled up the cable and immersed the coiled cable in the water. After about 10 minutes, the water was 140° F.  That's a bit warmer than I'd like. PEX can handle it, but I'd prefer something cooler under there.

Jumping ahead, my idea was to pulse the electricity to the heat cable.  To come up with a way to turn it on for -- oh say-- 10 minutes, then off for 20 minutes. Then back on, then back off.  Lather-rinse-repeat all night.

I could probably find an inexpensive, reliable timer to do this. But what's the fun in that?
No, there has to be a way to make this much more complicated, more expensive and less reliable!

[ January 2024 Update: this winter camping trip, I did use an inexpensive timer. I set it for 30 minutes on, 60 minutes off. Worked great! ]




Another Day Another Pi

Yes, you guessed it. My solution: take a Raspberry Pi, a relay board, some wire and some sockets, and craft something wonderfully complicated.

I chose an 8-channel relay board when two channels would have been sufficient.  I won't cover the specifics of wiring the relay board to the Raspberry Pi, as there are dozens of posts that do.


Raspberry Pi and 8 Channel Relay Board

I removed the common tab from each of the four receptacles so I could control all eight sockets independently.


Removing Tab That Ties Two Together
Next came time spent with wire cutters, strippers and a couple of screwdrivers, wiring it all up.




The components were secured to a board and then mounted in an inexpensive plastic tool tote from Walmart.  The main power line was connected and the wiring tested.




At this point

I've successfully created an 8-port, 120V Glorified Timer. The relays are rated at 10A, but because of the wire gauge I used, I will limit the current at each socket to 5A (about 600 watts at 120V). The total load will not exceed 1000 watts.
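For the curious, here's a minimal sketch of provisioning all eight channels for independent control - again assuming the Pi4J library and a hypothetical pin-to-socket mapping:

import com.pi4j.io.gpio.*;

//
// Sketch only: the pin-to-socket mapping below is hypothetical
GpioController gpio = GpioFactory.getInstance();
Pin[] pins = { RaspiPin.GPIO_00, RaspiPin.GPIO_01, RaspiPin.GPIO_02, RaspiPin.GPIO_03,
               RaspiPin.GPIO_04, RaspiPin.GPIO_05, RaspiPin.GPIO_06, RaspiPin.GPIO_07 };
GpioPinDigitalOutput[] sockets = new GpioPinDigitalOutput[pins.length];
for (int i = 0; i < pins.length; i++) {
    sockets[i] = gpio.provisionDigitalOutputPin( pins[i], "Socket-" + (i + 1), PinState.LOW );
}
sockets[0].high();   // energize socket 1 only; the other seven stay off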

Now, on to Part Two - real-world testing!

Monday, March 4, 2019

(#32) InfluxDB - Scratching the Surface


The last post outlined my first steps in using InfluxDB, a Time Series Database, to store my IoT data.  Persistence may not be futile after all.

Here’s what the vendor says:

InfluxData’s […] InfluxDB [will] handle massive amounts of time-stamped information. This time series database provides support for your metrics analysis needs, from DevOps Monitoring, IoT Sensor data, and Real-Time Analytics.

Jumping right in, here's the list of my Current Likes for InfluxDB:
  •     Schema-less
  •     Self Cleaning
  •     Simple


Schemaless

By schemaless I mean there's no fixed, up-front schema: it's easy to add new measurements, tags, and fields at any time.   There's no CREATE TABLE prerequisite and no ALTER TABLE step if things need to change.

Just start writing your data.

Now, there are some key concepts to learn when using InfluxDB: measurements, tags and fields. But the value in being schemaless is that you can just get started. 

Just start ingesting and storing data. 
Then refine and refactor as you learn.  
Just do it!
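To make that concrete, here's a hypothetical write using the Java client covered below (the influxDB and databaseName variables come from those snippets; the measurement, tag and field names are invented). No CREATE TABLE, no ALTER TABLE - the measurement springs into existence on first write:

//
// Hypothetical "greenhouse" measurement - written with no schema setup at all
Point p = Point.measurement( "greenhouse" )
               .time( System.currentTimeMillis(), TimeUnit.MILLISECONDS )
               .tag( "location", "north-bed" )
               .addField( "soilTemp", 41.7 )
               .build();
influxDB.write( databaseName, "autogen", p );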


Self-Cleaning

How often have your relational databases accumulated crud, unknowingly? How often has your DBA or sysadmin let you know that a table is getting big, or a partition is full? How many times have you spun up a project to Archive Data?  Any answer larger than "zero" is too many.



InfluxDB comes with the concept of a Retention Policy – it’ll expire (delete) old data without intervention.


That’s another reason why stashing IoT data in a conventional database is dumb.  IoT data is inherently ephemeral.  No one cares what the outside temperature was at 9:43a, three Thursdays ago! You did care! But now you don’t.  Sensor data is evanescent!

A Retention Policy lets you tell the database how long to hang onto data.  Here’s my Retention Policy for my IoT data:

Query   query = new Query( "CREATE DATABASE SolarData WITH DURATION 30d", "SolarData" );

That’s it; my low-level, inherently transient data evaporates as it ages over 30 days.
See here for everything that InfluxDB can do with retention policies.
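As a further, purely illustrative example (the policy name and settings here are mine, not from my setup), retention policies can also be created or adjusted after the fact:

//
// Illustrative: add a second, longer-lived policy alongside the 30-day default
Query rpQuery = new Query(
        "CREATE RETENTION POLICY \"one_year\" ON \"SolarData\" " +
        "DURATION 52w REPLICATION 1", "SolarData" );
influxDB.query( rpQuery );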

Now, you will care about trends in the data - minimums, maximums and other statistics.  To that end, InfluxDB also gives you Continuous Queries.  We'll cover Continuous Queries in detail in a later post, but a quick preview follows.
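Here's the flavor - a hypothetical Continuous Query (every name here is invented) that rolls raw points up into 30-minute means, automatically and forever:

//
// Hypothetical preview: downsample raw "throughput" points into 30-minute means
Query cqQuery = new Query(
        "CREATE CONTINUOUS QUERY \"cq_wpm_30m\" ON \"SolarData\" BEGIN " +
        "SELECT mean(\"WPM\") INTO \"throughput_30m\" FROM \"throughput\" " +
        "GROUP BY time(30m), \"machineID\" END", "SolarData" );
influxDB.query( cqQuery );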


Simple

Let’s cover off on Simple by reviewing the code I created to pull machine data from the
MQTT broker and put it into InfluxDB using the Java client.

Connect to the InfluxDB Server

One call:

InfluxDB influxDB = InfluxDBFactory.connect( databaseURL, "notused", "notused" );
Pong response = influxDB.ping();
if (response.getVersion().equalsIgnoreCase( "unknown" )) {
   log.error( "Error pinging the InfluxDB server. Make sure it’s up and running!" );
   return;
} else {
   log.info( "Connected to InfluxDB Server. Version: " + response.getVersion() );
}   

Create the database and retention policy.

One statement does both. The great news is that if the database already exists, this step is harmless!

//
//  Create the database; if it already exists, no harm is done
//  Syntax: CREATE DATABASE <database_name> [WITH [DURATION <duration>]
//   [REPLICATION <n>] [SHARD DURATION <duration>] [NAME <retention-policy-name>]]
Query   query = new Query( "CREATE DATABASE " + databaseName + " WITH DURATION 30d", databaseName );
influxDB.query( query );



Now we get into the specifics of my application.

Everything is an Event

In my application, events will come in as JSON formatted messages from the MQTT broker.

I have a Java POJO for each Event type. As a JSON message arrives, it’s inflated into the right Java POJO using Google’s Gson framework.


Here's a snippet of the Alarm Message POJO that backs the Alarm Event:

import org.slf4j.LoggerFactory;
import com.google.gson.Gson;

public class AlarmMessage_POJO extends EventMessage_POJO
{

    private static final org.slf4j.Logger log =
                LoggerFactory.getLogger( AlarmMessage_POJO.class );

    /*
     * Here's what the JSON payload will look like:
     *     {"alarmCode":16,"alarmMessage":"PC LOAD LETTER","topic":"CSEA/MID-02/ALARM/","machineID":"MID-02","dateTime":"2018-05-31T14:38:11.487","inBatchMode":"true"}
     *
     */

    int     alarmCode;
    String  alarmMessage;


    //
    // Inflate the POJO from the JSON payload; returns null on a parse error
    public static AlarmMessage_POJO fromJSON (String jsonString)
    {
        try {
            Gson gson = new Gson();
            return gson.fromJson( jsonString, AlarmMessage_POJO.class );

        } catch (Exception ex) {
            log.error( "Error parsing message into JSON.", ex );
            log.error( "Original JSON [" + jsonString + "]" );
            return null;
        }
    }

    < getters and setters follow … >
}



One more code snippet -- the Java POJO that represents the Machine's Throughput in WidgetsPerMinute:

import org.slf4j.LoggerFactory;
import com.google.gson.Gson;

public class WPM_POJO extends EventMessage_POJO
{
    private static final org.slf4j.Logger log =
                 LoggerFactory.getLogger( WPM_POJO.class );

    /*
     * Here's what the JSON payload will look like:
     * {"numEvents":9,"totalEvents":3894,"intervalSeconds":60,"eventsPerMinute":9.0,"topic":"CSEA/MID-01/WPM_EVENT/","machineID":"MID-01","dateTime":"2018-05-30T09:36:11.468","inBatchMode":"true"}
     */

    int    numEvents;
    int    totalEvents;
    int    intervalSeconds;
    float  eventsPerMinute;

    //
    // Inflate the POJO from the JSON payload; returns null on a parse error
    public static WPM_POJO fromJSON (String jsonString)
    {
        try {
            Gson gson = new Gson();
            return gson.fromJson( jsonString, WPM_POJO.class );
        } catch (Exception ex) {
            log.error( "Error parsing message into JSON.", ex );
            log.error( "Original JSON [" + jsonString + "]" );
            return null;
        }
    }

    < getters and setters follow … >
}
            

Pretty simple. 
I'm doing one POJO per event type.


As each event is received, the JSON payload inflates the right object, which is then added to a BlockingQueue:

@Override
public void messageArrived (String topic, MqttMessage message) throws Exception
{
    byte[]  payload = message.getPayload();
    String  jsonMessage = new String( payload );

    //
    // First inflate just the common fields, so we can tell which event type this is
    EventMessage_POJO  anEvent = EventMessage_POJO.fromJSON( jsonMessage );
    if (anEvent == null)
        return;

    //
    // Then re-inflate as the specific subtype
    if (anEvent.isAlarmMessage())
        anEvent = AlarmMessage_POJO.fromJSON( jsonMessage );
    else if (anEvent.isWPMMessage())
        anEvent = WPM_POJO.fromJSON( jsonMessage );

    if (this.eventQueue != null && anEvent != null) {
        log.info( "Event received - putting (blocking call) on the queue" );
        this.eventQueue.put( anEvent );
    }
}

At this point, we have Machine Events represented as Java POJOs sitting in a BlockingQueue.
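For completeness, the queue itself is just a shared field. Here's a plausible declaration (mine - the original declaration isn't shown in the post), assuming a LinkedBlockingQueue:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Shared by the MQTT callback (producer) and the InfluxDB writer thread (consumer)
BlockingQueue<EventMessage_POJO> eventQueue = new LinkedBlockingQueue<>();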



Popping out to the other Java thread:

while (true) {
    //
    // Suck an event off the queue
    EventMessage_POJO anEvent = eventQueue.take();
    log.info( "event message arrived. Machine: {}  Topic: {}",
               anEvent.getMachineID(), anEvent.getTopic() );

    //
    // Let's try the Synchronous Write approach outlined in the README at
    // https://github.com/influxdata/influxdb-java
    Point   dataPoint = null;
    BatchPoints batchPoints = BatchPoints
                        .database( databaseName )
                        .tag( "async", "true" )
                        .consistency( ConsistencyLevel.ALL )
                        .build();

    if (anEvent.isAlarmMessage()) {
        log.info( "Alarm event arrived - building InfluxDB Point object" );
        dataPoint = Point.measurement( "alarms" )
                         .time( System.currentTimeMillis(), TimeUnit.MILLISECONDS )
                         .tag( "machineID", anEvent.getMachineID() )
                         .addField( "CODE",
                                    ((AlarmMessage_POJO) anEvent).getAlarmCode() )
                         .build();

    } else if (anEvent.isWPMMessage()) {
        log.info( "WPM event arrived - building InfluxDB Point object" );
        dataPoint = Point.measurement( "throughput" )
                         .time( System.currentTimeMillis(), TimeUnit.MILLISECONDS )
                         .tag( "machineID", anEvent.getMachineID() )
                         .addField( "WPM",
                                    ((WPM_POJO) anEvent).getEventsPerMinute() )
                         .build();
    }

    //
    // Quick error check and if good, write data point to InfluxDB
    if (dataPoint != null) {
        batchPoints.point( dataPoint );
        influxDB.write( batchPoints );
        log.info( "Point written to InfluxDB successfully" );
    } else {
        log.error( "Event received but builder failed to build! dataPoint null!" );
    }
}

That’s pretty simple and will get you up and running quickly.

We do need to cover off on measurements, fields and tags. Three essential concepts in InfluxDB.

That's the next post!




Saturday, March 2, 2019

(#31) Persistence is Futile!


We are the IoT Sensors. Lower your shields and surrender your ships. Persistence is futile.


At least that’s what I think about trying to store IoT data in a database. Or, it’s what I thought!


I believed that the data volume and velocity of a large IoT ecosystem would swamp any data store. Persistence is Futile! More so if the target was a relational database.


I have other beefs with relational databases, mostly about their resemblance to a landline telephone in the 21st century.

But particularly in an IoT world, a world awash in sensor data.

Because a universal attribute of sensor data is time:

  • “What is the temperature now?”
  • “What was it yesterday?”
  • “What will it be this afternoon?”
While any database can store and manipulate time data, I’ll argue that few treat time as a first-class citizen.


And then I met InfluxDB.

InfluxDB is a product in the emerging field of Time Series Database (TSDB) technology.
InfluxDB is a data store where time is treated as a foundational element - time is more than just a datatype. A TSDB "…is a software system that is optimized for handling time series data, arrays of numbers indexed by time (a datetime or a datetime range)" (TSDB, Wikipedia).
Asking time-based questions and getting time-based answers from InfluxDB is easy. It's what a TSDB does.
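To make that concrete, here's the kind of question a TSDB answers in one line. (This is a hypothetical InfluxQL query; the measurement and field names are invented.)

SELECT mean("temperature") FROM "sensors" WHERE time > now() - 6h GROUP BY time(15m)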

Let's get to the meat and cover how I've put InfluxDB into production in two scenarios. But first, I have to call out the ease of installation. I am stunned, amazed and grateful (no Oxford comma) at the installation process!

On my Ubuntu 18.04 LTS based system, the installation was trivial. No, that’s an overstatement. It was easier than trivial. It was ‘easier than falling off a log.’ A half dozen commands, ending with:
$ sudo apt-get install influxdb
$ sudo systemctl unmask influxdb.service
$ sudo systemctl start influxdb.service
And InfluxDB is up and running.
Waiting to ingest some data.
Boom!


InfluxDB is one of four components bundled in what the vendor calls their TICK stack. The other components I installed (with equal ease) were Telegraf and Chronograf. I skipped the installation of Kapacitor.
  • InfluxDB is the persistence layer for the timeseries data.
  • Telegraf is their tool for putting data into InfluxDB.
  • Chronograf is their visualization tool.

Let's start with Chronograf since that’s Eye Candy. 

Two commands bring Chronograf online:
$ wget https://dl.influxdata.com/chronograf/releases/chronograf.deb
$ sudo dpkg -i chronograf.deb

Then hop over to http://localhost:8888  (replace localhost with the server address where Chronograf is running.)

Solar Array Dashboard

Here’s my first example of a Chronograf Dashboard. This is a screenshot of data coming from my Solar Charging System setup.

These are real-time charts (i.e. the charts change as the data changes) that are showing a handful of key data items important to a small solar charging system.

I'm far from being an expert Chronograf user; I'm not even an intermediate user. I've spent about 90 minutes with the tool over the last several months.
This dashboard took me 15 minutes to create!


Let’s move down to the next layer in the TICK – ingestion of data. Telegraf is their tool that eases the chore of sending IoT data to InfluxDB.
Installation of Telegraf was, again, stunningly simple:
$ sudo apt-get install telegraf
$ sudo systemctl start telegraf
$ telegraf config > telegraf.conf


The last command creates a default configuration file with, as far as I can tell, all ingestion methods disabled. All I had to do was bring that file into an editor and change a few lines:
[[inputs.mqtt_consumer]]
   servers = ["tcp://192.168.2.1:1883"]
   topics = [
      "LS1024B/1/DATA",
      "sensors/#",
   ]
   data_format = "json"


My whole home IoT ecosystem is already based on JSON and MQTT. All of my devices in the house send their data, formatted as JSON payloads to an MQTT broker.
Thus, for me, four steps. Nothing more.
My Solar Panel System devices are flowing their data into InfluxDB!
Total elapsed time?
Maybe 30 minutes.


Crank it up a notch



Now, let’s switch to a slightly more complicated InfluxDB setup that I put into place at work.
For context, at work, we have a few packaging machines that we’ve tapped into for their event stream. These packaging machines spew events, as they package packages. The machine events are formatted as JSON messages and routed to an MQTT broker.
I've written other applications in Python and Java to subscribe to the events and "do things" (draw business-related conclusions). "Java Application 1" uses Esper, a Complex Event Processing engine, to create business-relevant conclusions based on patterns in the event stream.
Skipping the gory details, "Java Application 1" looks for certain patterns that indicate something important has occurred. When a pattern is recognized, the Esper-based Java program publishes a new event to the MQTT broker.

The points are:
  • Everything is an event
  • All events are represented as JSON messages and
  • All are flowing to an MQTT broker
"Java Application 2" subscribes to these events and puts them into InfluxDB using the Java client. Why not just use Telegraf again? I wanted hands-on experience with the Java client so I could compare the two.
Recap:
  • The packaging machines publish low-level, JSON formatted, events to MQTT
  • Java App 1 uses Esper to detect patterns in those events and publish new events, also in JSON to MQTT
  • Java App 2 subscribes to all of these events and uses the InfluxDB Java client to store the data


Eye candy for openers:
Packaging Machines Status Dashboard

Again, this is a real-time dashboard that instantly conveys the status and business value of the machines. Throughput, machine issues, and concerns that need to be attended to.
Admittedly, I've not spent a lot of time with other visualization tools (Tableau, PowerBI, etc.), but the ease of creating the charts was surprising. I spent about 40 minutes creating that dashboard.
In subsequent posts I’ll go into more detail, show code, show the JSON and configurations.

There's more to InfluxDB than just eye candy!