
The Smart(er) Cat Feeder – Tying it Together With Code

Now that I have a Raspberry Pi that can take pictures and turn electrical sockets on and off, as well as a trained image classifier that knows my cats, it was time to stitch everything together.

The Lola Detector

The following steps were taken to construct a small program that would scan an image on demand and identify Lola (or not):

  1. Install TensorFlow on the Raspberry Pi as a Python library.
  2. Refactor the example image-labeling Python script (label_image.py) to start TensorFlow, load the model, and wait.
  3. Use Flask to create a URL that would kick off the image analysis function, and return a JSON object with the Lola/Maddie label probabilities.
  4. Keep this program running.

The code for the Lola Detector can be found in a Gist.
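In spirit, the service looks something like the sketch below. This is illustrative rather than the actual Gist – the endpoint name, port, and file paths are made up, and the model and labels files are the ones produced by the TensorFlow for Poets tutorial:

# lola_detector.py – an illustrative sketch, not the actual Gist
import tensorflow as tf
from flask import Flask, jsonify

app = Flask(__name__)

# Load the trained model and labels once, at startup.
labels = [line.rstrip() for line in tf.gfile.GFile('retrained_labels.txt')]
with tf.gfile.FastGFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session()
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

@app.route('/check')
def check():
    # Score the most recent picture and return the label probabilities,
    # e.g. {"lola": 0.991, "maddie": 0.014}
    image_data = tf.gfile.FastGFile('rpicam.jpg', 'rb').read()
    predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})[0]
    return jsonify({label: float(score) for label, score in zip(labels, predictions)})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)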

Please note, this is the first Python script I’ve ever written. Feedback (maybe) appreciated.

The Lola Feeder

Rather than use Python for the whole app (like a sane person), I opted for Node to do the rest of the stuff. A big, convoluted Node script does the following:

  1. Takes a picture with raspistill and saves it as rpicam.jpg.
  2. Sends an HTTP request to the Lola Detector service and waits for a response.
  3. Checks the response for a high Lola probability.
  4. If Lola is NOT found, goes back to step 1.
  5. If Lola is found, sends a signal via the RF transmitter to power on the cat feeder.
  6. Sends a tweet with the recently taken picture to a hidden Twitter account.
  7. Sends a message to IFTTT, which sends me a push notification.
  8. Waits 60 seconds and sends another signal to power off the feeder.
  9. Waits 90 minutes and goes back to step 1.
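The real loop is Node, but the cycle is easier to show than tell. Here's a rough Python-flavored sketch of it – file names, the detector URL, and the RF codes are stand-ins (the codes would be whatever RFSniffer reported for your outlets), and the 99.3% threshold is the one mentioned in the overview post:

# feeder_loop.py – an illustrative sketch of the cycle described above
import subprocess
import time
import requests

DETECTOR_URL = 'http://localhost:5000/check'  # the Lola Detector service

def send_rf(code):
    # Spoof the outlet's remote control via the 433Utils codesend utility.
    subprocess.call(['./codesend', str(code), '-l', '192', '-p', '0'])

while True:
    # 1. Take a picture.
    subprocess.call(['raspistill', '-o', 'rpicam.jpg', '-w', '600', '-h', '600'])

    # 2-4. Ask the detector for probabilities; no Lola means try again.
    scores = requests.get(DETECTOR_URL).json()
    if scores.get('lola', 0) < 0.993:
        continue

    # 5. Lola found – power on the feeder's outlet.
    send_rf(21811)        # example 'on' code captured with RFSniffer
    # 6-7. Tweeting the picture and pinging IFTTT would go here.
    time.sleep(60)        # 8. Let the feeder run for a minute...
    send_rf(21813)        # ...then send the example 'off' code.
    time.sleep(90 * 60)   # 9. No more food for 90 minutes.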

Additionally, I have a small lamp attached to another RF power socket, and have a “cron job” that turns the lamp on for a few hours in the evening, and a few hours in the early morning.

Here’s the code for the RF thing. I also have a white noise machine, another lamp, and some Christmas lights hooked up to other RF outlets. Unfortunately, the Xmas lights outlet stopped working :(

To run and monitor both scripts, I use PM2. It’s awesome. You should use it, too.

The Smart(er) Cat Feeder – Training TensorFlow

How does a computer know the difference between two cats? An image classifier created from a trained neural network, of course! Seriously though, I'd always figured the realm of machine learning was out of my reach, but some of the tools are becoming more accessible to laypeople. This video was quite reassuring:

Learning About Machine Learning

With a confident attitude I went out searching for some ideas on how to create my own classifier. Luckily I didn't have to look far. Google's most basic introductory tutorial, TensorFlow for Poets, happened to fit my problem like a glove. It takes you from nothing to a custom image classifier in less than an hour. It also leaves you with all the tools you need to keep using it with your own images… which is what I did.

Cat Training

The basics of creating the classifier went like this…

I needed pictures of my cats. Lots of them. Like hundreds. So I fired up the picam on the Raspberry Pi to record for a couple minutes while ‘nudging’ each cat in front of the camera in various positions and with different lighting. For this I used the raspivid program that comes with Raspbian. The following command records ten seconds of video:

raspivid -o video.h264 -t 10000

For image extraction, I used ffmpeg and the following command to output an image for every second of video:

ffmpeg -i video.h264 -vf fps=1 out%d.png

The ffmpeg docs have lots of useful snippets. Oh, and ffmpeg can be installed as a command line utility via Homebrew on Mac (brew install ffmpeg).

 

The images were all in PNG format, and I needed JPEGs for TensorFlow. Luckily, OS X has a built-in batch conversion tool right in the Preview app. See OSXDaily for details. After the conversion, the images were sorted into two folders – one for Lola, the other for Maddie.
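In hindsight, ffmpeg can probably skip the PNG step entirely – its image muxer infers the format from the output extension, so asking for .jpg files should produce JPEGs directly:

ffmpeg -i video.h264 -vf fps=1 out%d.jpg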

For the training, I simply repeated the steps in the TensorFlow for Poets tutorial, but used ‘lola’ and ‘maddie’ pictures instead of roses and daisies. I used 4000 iterations for extra-thorough training. The basic command is below – instead of flower photos, I pointed it at my cat photos.

# In Docker
python tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=/tf_files/bottlenecks \
--model_dir=/tf_files/inception \
--output_graph=/tf_files/retrained_graph.pb \
--output_labels=/tf_files/retrained_labels.txt \
--image_dir /tf_files/flower_photos

The main output of this training is the model file (retrained_graph.pb), and the accompanying labels file (retrained_labels.txt). These are the files used in the program that will examine new pictures of my cats and determine which one is which (if any cats are present at all).

Finally, I could take the label_image.py script from Step 5 of the tutorial, and use that to analyze new photos of my cats and tell me the confidence of Lola or Maddie being present in the image. All this runs from inside the original Docker image like so:

curl -L https://goo.gl/tx3dqg > $HOME/tf_files/label_image.py
docker run -it -v $HOME/tf_files:/tf_files  gcr.io/tensorflow/tensorflow:latest-devel 
python /tf_files/label_image.py /tf_files/new_lola_pic.jpg

Notice in the last line I used a picture of Lola instead of a daisy. The output was something like:

lola (score = 0.99138)
maddie (score = 0.35342)

Even though Maddie was not in the picture, the confidence was pretty high. I’m guessing because Maddie is a nearly solid black cat, any dark mass in the photo may appear to the model as Maddie. Luckily, if Maddie actually is in the photo, the confidence is much higher. Here’s a score with Maddie, but no Lola:

lola (score = 0.01483)
maddie (score = 0.99683)

Good Job, computer!

The Smart(er) Cat Feeder – The Raspberry Pi

This is part 2 of a series describing modifications to an automatic cat feeder used to selectively feed cats. The overview can be found here:
The Smart(er) Cat Feeder Starring TensorFlow and Raspberry Pi

The Raspberry Pi 3

Image courtesy raspberrypi.org

A Raspberry Pi 3 has just enough horsepower to run pre-trained TensorFlow apps, and there's a bunch of example code out there for getting up and running on the Pi. There's also a burgeoning IoT and hacking community, which makes it a great hub for controlling internet-connected stuff.

A basic Pi setup needs a bit of hardware:

  • An SD card. I recommend 32GB – 64GB to have plenty of room for TensorFlow models, pictures, and video.
  • A power supply. There's no power cord included, so I got a 10ft. micro USB cable and 5V adapter.
  • An RPi case. Plenty of these out there. I got a cheap plastic one for a couple bucks on eBay.

And the basic software setup:

  • Raspbian & Pixel Desktop. I went with the new desktop GUI rather than headless to try out Pixel, and to do development and testing right on the Pi itself without another computer.
  • Add a hidden network to the Wifi list:
    sudo iwlist wlan0 scan essid [yourSSID]
  • Less cruft. Raspbian comes with too much stuff. Easy to remove, though.
  • Node.js 7
  • VNC. In case I did want to log in remotely, I had to fiddle with the video settings in /boot/config.txt to get a decent-sized window. Using hdmi_group=2 and hdmi_mode=27 did the trick.
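For reference, those video settings are just two plain lines in /boot/config.txt:

hdmi_group=2
hdmi_mode=27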

The Raspberry Pi Camera


I read in a few places that the version 1 (5MP) camera module is better with auto focus than the v2 (8MP) module, so I got a v1 from eBay for about $15. I'm not disappointed. I ended up using 600×600 pictures in the project anyway, so extra megapixels would have been wasted, and all the features (focus, white balance, filters, rotation, etc…) work great.

There are some clones of the camera module from China for a few dollars less, but why risk it? I did, however, order a protective camera covering from China for $1. I’m still waiting on it as of this writing, though.

Remote Control Sockets and RF Transmitter


Here’s where the real fun begins. To turn the feeder (and accompanying lamp) on and off, I used remote-controlled outlets, but spoofed the remote control frequencies with the Raspberry Pi and a radio frequency (RF) transmitter attachment. In order to get the RF codes to activate the outlets, an RF receiver attachment is necessary as well. Luckily, the receiver and transmitter are sold in pairs, and are really cheap.

I followed a couple of great guides for inspiration and to set up the proper tooling.

The guides above go a few extra steps and set up a web server with PHP, but I skipped that in favor of using Node (more on that in the next article). The basic steps for the setup are as follows:

  1. Install WiringPi and RFSniffer
  2. Plug the receiver into the Raspberry Pi. Wiring Diagram
  3. Start RFSniffer.
  4. Push all the buttons on the remote control, and write down all the codes. It will look something like
    Received 21811
    Received pulse 192
    Received 21813
    Received pulse 192
    
  5. Plug the transmitter into the Raspberry Pi.
  6. Test the codes with codesend. Be sure to substitute in your own code and pulse values read from RFSniffer.
    ./codesend 21811 -l 192 -p 0
    

When buying outlets, the popular choice is ETekCity. Amazon was sold out of these when I was looking, so I got a different brand. One of those outlets broke a day later, so I'd recommend sticking with ETekCity.

Also, the first RF transmitter I got was busted. I noticed that the circuit board was slightly different from those pictured in the guides I used. Instead of ‘DATA’ printed on the transmitter, it said ‘ADAT’. I returned the set, ordered a receiver/transmitter with ‘DATA’, and everything worked fine.

BAD! Avoid this one!

GOOD! Buy this kind!

Good to Go!

With the Raspberry Pi set up with a working camera and the ability to turn electronic devices on and off, the stage was set. The rest was just a simple matter of programming…

The Smart(er) Machine-Learned IoT Cat Feeder for Lola the Cat – Overview

tl;dr – I modified my cat feeder to only feed one of my two cats using a Raspberry Pi webcam and image analysis with TensorFlow.

Meet Lola

Lola Resting

My cat of seven years – healthy, fuzzy, and an all-around good cat. However, she is a very picky eater. She takes small bites of food sporadically throughout the day from a food dish just for her on top of a tall dresser.

Meet Madison (Maddie for short)

Hi Maddie

Obese cat that was recently put on a strict diet of canned food twice a day (no carbs!). Due to her obesity, she’s been unable to leap onto the dresser to eat Lola’s food. Until now. Perhaps she’s regaining some athletic ability with the new diet, or her appetite has overwhelmed her fear, but a couple weeks ago she finally made the leap to the bounty of Lola’s food dish. Not good.

The Problem

Lola likes her dry food (a mix of Science Diet and Orijen), and only eats a tiny bit at a time whenever she wants. I’ve made several attempts at getting her on a schedule, but this results in incessant yowling at all hours along with a stubborn refusal to eat during the allotted mealtime. All was well when Lola was the only cat with the leaping ability to reach the skyward food dish. Now there’s no place for cat food to hide that Maddie won’t find.

I needed a way to keep Maddie on her diet, but allow Lola to peck at her food when she decides it’s mealtime. If tiny portions of food could be presented to Lola, she would eat all of it leaving none for Maddie. But how to get Lola (and not Maddie) tiny portions of food when I’m away at work or sleeping?

The Feeder

CSF Super Feeder

I bought an automated cat feeder a few years ago to dispense food to the cats whilst on vacation. It's sturdy, but very primitive compared to today's IoT designs. It works by plugging into a wall timer: it dispenses food for a short duration when AC current is supplied, then resets when the power is cut. The amount of food dispensed is determined by twisting a small potentiometer with a tiny screwdriver (seriously!).

If the Feeder Only Had A Brain

I have a cursory fascination with machine learning, and sometimes read about what's what in the ML world. TensorFlow has been a hot topic of late, and they have a few dead-simple tutorials for laypeople like myself. The TensorFlow for Poets tutorial shows how to retrain an image-labelling model to recognize objects of your choosing… like perhaps your own pets.

I thought maybe, just maybe, I could fiddle with the TensorFlow tutorial’s code and cobble together a system that allows the feeder to ‘recognize’ Lola and dispense a tiny meal for her, but refuse service to Maddie. Given the simple nature of the feeder (plug in = food, unplug = reset), some sort of smart-plug for the feeder would work just fine.

The Bright Idea

Grinch Smirk

I have a couple Raspberry Pis lying around (what self-respecting tech nerd doesn’t?), and decided to use one for the guts of the operation. The full solution looked something like this…

Set up the RPi + camera to watch for cats milling about near the feeder. Periodically snap a photo, and analyze said photo with TensorFlow to find Lola. If Lola is identified in the image, flip the switch on a smart plug or relay connected to the feeder with the RPi. Then shut off the power to the feeder and sleep for an hour or two before allowing another feeding. We don't want it constantly spitting out food while Lola is eating.

The Working Contraption


After a few days of tinkering, trial and error, ordering bits and bobs from the internet, and convincing my wife that I’m not crazy, I finally got a working version! As I suspected from the outset, it’s very much cobbled together and has a few quirks, but it does what I set out for it to do – feed one cat, but not the other.

(Quick reminder, I do still feed Maddie, but only twice a day with incredibly expensive cat food that I swear is better than what I eat some days)

The parts list for the working prototype is as follows…

  • The SuperFeeder brand cat feeder
  • A Raspberry Pi 3
  • 32GB MicroSD card for RPi (with latest Raspbian)
  • 5V power adapter for RPi
  • Raspberry Pi Camera Module v1 (5MP)
  • Remote-controlled outlets and RF transmitter/receiver pair (ETekCity)
  • Thin copper wire (for an antenna)
  • A box and/or stand for the RPi

The software involved includes…

  • Raspbian OS
  • TensorFlow
  • Python (and Flask)
  • Node.js
  • ffmpeg
  • IFTTT/Twitter/Twilio account (optional)

The “How it Works”

The RPi is set up near the feeder in a small box. The camera is pointing to the area in front of the feeder. After turning on the feeder, a Python script bootstraps TensorFlow, which loads the trained model into memory and gets ready to analyze an image. It waits until it is triggered by a GET request set up with Flask. So TensorFlow twiddles its thumbs, waiting patiently for some work to do.

Concurrently, a Node script fires up to do the ‘detection and feeder’ cycle. The script goes a little something like this…

A picture is taken with the camera. After the photo is saved to disk, the Node app calls the python service which analyzes the image and returns some JSON data. The data includes the probabilities of Lola and Maddie being in the image. If the “Lola probability” is greater than 99.3%, the Node script sends an RF signal to turn on the outlet with the connected feeder plug. The feeder gets power and dispenses a tiny bit of food. After a few seconds, the Node script sends an RF signal to turn off the feeder, then sends a message to IFTTT to alert me that Lola got food. It also copies the last image taken into another folder and timestamps the filename. The cycle then times out for 90 minutes, and then resumes taking pictures.

It all works surprisingly well, except at night when the camera sees only blackness. Luckily, the RF controlled outlets come in packs of five, so I hooked up a lamp to another one and have the Node script also turn the light on for a couple hours in the evening, and early in the morning (Lola really likes to eat at 5:30am, and she lets us know it).

The IFTTT notification and copied image are nice-to-haves that let me keep an eye on things until I get a better interface put together. It also lets me check out any false positives that might occur, and give me better ideas for retraining the TensorFlow model.

On the TODO List

  • Definitely a sturdier box and camera mount. Lola has already threatened to knock the whole thing off the desk (as cats do), and barely a tap will knock the camera out of place.
  • A PIR motion detector to trigger the camera instead of running the camera non-stop. Not sure how useful it would be, but I have a PIR sensor, so why not?
  • A database to record data. Collect feeding times and pictures to get a better idea of Lola’s eating habits.
  • A web interface. It would be nice to adjust the timeout between feedings, manually trigger a feeding, control the lamp, and generally see what’s going on via the web.
  • A Maddie scarecrow. Sometimes Lola doesn't quite eat all the food, and Maddie jumps in behind her to finish the job. Might be worthwhile to play a loud noise or something if Maddie is detected (but not Lola) soon after a feeding occurs.
  • 100% Python. I used Node because I know Node, but there’s no reason this couldn’t be one Python script instead of two separate programs. Someday I’ll fulfill my dream of learning Python, and may refactor this project accordingly.

Stay Tuned

More posts will follow on the technical details of the implementation. TTFN!

Meteor on Windows – My Experience (So Far…)

I’ve had several conversations online and off with people using Windows, and the consensus is that the (vocal) Meteor-on-Windows community is small, and everyone has their own particular environment.  Many common issues arise, though, so I want to share some of my experiences should anyone else in Windowsland come across them.  It’s a little haphazard, and there might be better ways of doing things, but here it is.  Please comment with any tips or advice.

Meteor Development on Windows 7

I use Windows 7 at my job. I’ve never used Windows 8 or 10, so I don’t know how well they will work, but they should.  The official Windows installer from Meteor has worked just fine for me.

MongoDB

The only thing I change about the default Meteor setup is installing a persistent version of MongoDB (as a Windows service) so I can browse, dump, and restore data when Meteor is not running. To do this yourself, create an environment variable called MONGO_URL and set it to the URL of your running mongo instance.
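For example, using the same PowerShell call that appears later in this post (the URL and database name are placeholders for wherever your Mongo service is listening):

[Environment]::SetEnvironmentVariable("MONGO_URL", "mongodb://localhost:27017/meteor", "Machine")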

I’m currently running Mongo 3.0.8 locally, and have not yet tried 3.2.

JetBrains

For my IDE, I use JetBrains products (WebStorm and IntelliJ IDEA).  The Meteor support is top-notch.  Project setup is a breeze, code highlighting is great, clicking on template items will bring you to your helper functions, etc…  The main benefit, though, is the debugger.  You can live-debug, trace, and place breakpoints on both client and server code – all in one handy interface. WebStorm is super cheap, and 100% worth it.

Building a Meteor App on Windows 7 (64-bit)

Running meteor build is where the pain begins. There are a few things that need to be done before successfully extracting a working bundle from Meteor in Windows.

And when I refer to meteor build, or bundle, I am referring to the output of the command:

meteor build --directory c:\mymeteorapp

More Environment Vars

You need to set ROOT_URL to http://localhost (or whatever you want) and PORT to a valid port number if you want something other than 3000. I use port 3030 so the bundled app doesn’t conflict with the development version running on port 3000.

If you are using Meteor settings, the entire JSON block needs to be stringified and pasted into the METEOR_SETTINGS environment variable.

The following Powershell 3.0 snippet will do this for you (assuming you run these commands from a directory with a file named settings.json):

$settings = Get-Content settings.json -Raw
$settings = $settings -replace "`n","" -replace "`r","" -replace " ",""
[Environment]::SetEnvironmentVariable("METEOR_SETTINGS", $settings, "Machine")

Please note, I am no Powershell expert (or even a novice).  Use at your own risk.

Build Dependencies

Visual Studio.  In order to build the fibers module (and possibly other node extensions), you'll need some build tools.  I currently have Visual Studio 2013 AND 2015 (free Express/Community editions) installed locally.  I'm pretty sure you only need 2015, but remember to check the C++ build tools options when installing.  You don't actually have to open Visual Studio, just have it installed.

Python. Also, Python 2.7 is a requirement. So install Python 2.7.

Node.js. You probably have NodeJS installed, but Meteor expects 0.10 (32-bit).  I’m using 0.12 (64-bit), which has gone OK so far aside from the problems listed below.  I have no idea about 4.2, or 5+.  If you want to keep your precious bleeding-edge versions of Node, I highly recommend using this: https://github.com/coreybutler/nvm-windows.

Situational Build Dependencies

NPM3. If you have any Meteor packages that include a very deeply nested set of node_modules dependencies, then you are going to run into the 256-character limit for file directory paths in Windows (still a thing after all these years!).  Go ahead and install npm3 (and make sure its install directory is included in your PATH), and we'll use it later. To install npm3, use npm!

npm -g install npm@"3.3"

or check the GitHub wiki.

64-bit bcrypt.  If you are using meteor-accounts, then the bcrypt module is required. Because the Meteor dev environment uses a 32-bit version of node, a 32-bit bcrypt module is included with the build. To get a 64-bit version of bcrypt, you need to install Win64OpenSSL. Then go to an empty directory and install bcrypt with npm like:

npm install bcrypt --openssl-root="C:\some\other\path\to\openssl"

then dig around in the node_modules folder until you find bcrypt_lib.node. Hang on to it for later. This GitHub issue has some nice background info.

Meteor Build

Take a deep breath and run

meteor build --directory "C:\yourbuilddir"

This will do a bunch of things, and hopefully spit out a Node application in C:\yourbuilddir.  You're not out of the woods yet.  Go into

\bundle\programs\server

and run

npm install

More things will happen.  Hopefully without error (and only a handful of warnings).

If you are using bcrypt, enter the build directory and find: bundle\programs\server\npm\npm-bcrypt\node_modules\bcrypt\build\Release\ and overwrite the bcrypt_lib.node file there with the one you installed earlier.

To run the app, go into the \bundle directory and run

node main.js

from CMD. If you don’t get any ugly errors, try navigating to http://localhost:3000 (or whatever is specified in your env vars) and see if your app works.

Fixing the 256 character file-path limitation

Your Meteor app should run in place, even if you have some insanely long file paths.  However, copying, deleting, and pushing to a remote repo will cause problems.  The basic fix for this is to go into the offending package, delete the node_modules folder, and rebuild it with NPM3 so it uses a flat folder structure.  There are two ways (that I’ve found) to fix this problem…

If you are developing alone on a single PC, you can simply go to your global packages directory (e.g. C:\Users\<username>\AppData\Local\.meteor\packages) and find the offending package with a long dependency chain.  Remove the node_modules folder, and run npm install to rebuild it with a flattened dependency tree.

This becomes problematic when updating modules or developing with a team.  Everyone will have to remember to do this every time the module is updated.  An alternative is to rebuild node_modules folders after meteor build has run.  Then it can be an automated part of your build script.

After running meteor build, go into C:\<yourbuilddir>\bundle\programs\server\npm\<package>\node_modules\<module> and delete the node_modules folder.  Then run npm install from the same folder.  NPM3 should take over and install a flattened dependency tree.
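In CMD, that boils down to something like the following (the paths are placeholders for your actual build directory, package, and module):

cd C:\<yourbuilddir>\bundle\programs\server\npm\<package>\node_modules\<module>
rmdir /s /q node_modules
npm install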

Running the build

Go into C:\<yourbuilddir>\bundle\ and run

node main.js

You should see a blinking cursor. This is good.  Open a browser and go to http://localhost:3030 (or whatever you specified in your environment vars) and hopefully see your app running.

Coming soon… Deploying this to Windows Server 2012 behind IIS. 

A Phaser 2.0 Game: Welcome Back Alex


As Christmas 2014 approached I was faced with the yearly conundrum of what to gift to my various loved ones. I think I did an adequate job selecting items for most people, but I wanted to get my brother something good since he’s been out of the country for two years teaching English in Korea.

I saw this as a good opportunity to finally try out the Phaser framework that I’ve been following for quite some time, but never really used. Phaser is a game-making library written in JavaScript that provides a ton of great utilities for making 2D games that work on the web.

Given that Christmas was my deadline, and I started in mid-November, I had to be pretty modest in my expectations. Learning a new framework, and coding everything by hand (as opposed to using graphical tools like Construct 2 or Unity), further exacerbated the time constraints. Luckily, with the help of several online examples and some darn good official documentation, I was able to squeak out something that kind-of passes as a “game” and provided a few hearty minutes of entertainment on Christmas day.

The game itself follows my brother, Alex, on his journey home from Korea. In level one, Alex must protect the school-children from the invading Zerglings. In real life, Alex spent a few months bumming around the South Pacific backpacking through jungles and across beaches. So in level two, the objective is to cross the beach, avoiding obstacles, and find the lovely lady. Finally, Alex reaches home (Wisconsin) and decides to take our parents' new dog, Sophie, for a walk – in the middle of the street! Avoid the cars and win the game!

The source code is available on Github, and I can't promise that it's great. The modules are haphazardly written and there's definitely some copy-pasta going on in a few places, but I tried to take as many cues from the official docs as possible in regards to best practices. Follow the links below to play the game, or view the code.

Play the game!

Grab the source code!

Nodevember 2014

In November of 2014 I was lucky enough to speak at the inaugural Nodevember conference in Nashville, TN. The conference focused on JavaScript and Node.js development, and I gave a 45-minute presentation on Socket.IO. It was the first time I'd given a talk outside of Memphis, and the largest audience I've ever spoken to. I was pretty nervous, but I had plenty of time to prepare, and the conference was very well run and provided a comfortable speaking environment.

The conference organizers did an amazing job of pulling in attendees and speakers from the local community and around the country alike. I had a great time meeting new people and sharing ideas with such a diverse crowd of JavaScript enthusiasts. I definitely will be attending next year.

The slides from my talk are online (above) and a video of the talk is available on YouTube (below). The first few minutes of the talk were cut off and some of the audio is missing. It’s unfortunate, as the intro to the talk is, in my opinion, the section I’m most proud of, but alas. Many more established conferences with much higher budgets don’t even bother to record all the talks, let alone post them online for free. Enjoy!

TechCamp Memphis 2014


A first-person view of my TechCamp 2014 experience. For a more comprehensive look at the day, check out the tag on Twitter and the official TechCamp website.

Breakfast

This was my first year as a TechCamp volunteer, so I arrived bright and early on a frigid (for Memphis) morning to help set up. Lucky for me, breakfast was catered, and I got first dibs on the cinnamon rolls.

Key Takeaways:

  • Cinnamon Rolls are awesome
  • I still despise cold weather

The Keynote

Brian started off the day with a touching dedication to the late Dave Barger – originator of TechCamp and champion of the Memphis tech community – and a moment of silence in his honor. We all hope to continue raising the tech tide with the spirit of Dave at our side.

Following Brian was a keynote address by Chago Santiago – former VP of IT at AutoZone. Unbeknownst to most of the crowd, Chago spent a good portion of his career as an Air Force officer at NORAD protecting us from Soviet aggressors, rubbing elbows with Margaret Thatcher, and observing unidentified flying objects from outer space (which he will neither confirm nor deny).

Mr. Santiago had some very compelling anecdotes from his career and always made a point to relate them to his key points of building a successful career in tech:

  • Always keep learning
  • Take advantage of opportunity
  • Don’t be afraid to take risks

A few surprises from his talk stuck with me. He made an important distinction between programming and software engineering, in that programming without a process is similar to creating a work of art – it takes a lot of personal creativity, but the result is often highly personal and cannot be deconstructed and changed for the future. A work of art is static, whereas an engineered solution is part of a process that allows for constant change.

Also, he mentioned that Memphis is fertile ground for tech talent, and that he had no trouble hiring capable employees at AutoZone. He gave a clear message to ‘stay put’ if you are looking for opportunities in tech.

Unfortunately the morning time slot was only 45 minutes total, with 30 dedicated to Chago’s talk. It seemed that just as he was hitting his stride, it was time to go. Hopefully this won’t be the last we hear from him.

Key Takeaways:

  • Dave Barger is sorely missed.
  • Wildly successful careers in tech are possible in Memphis.
  • Attitude is just as important as ability in shaping a career in technology. Unpleasant challenges should be viewed as opportunities for change and innovation.
  • Programming is only a small part of software engineering.
  • 30 minutes is not adequate for a keynote address.

Dev 101

My morning was spent in the Developer 101 track acting as a mentor and facilitator. The session was intended to allow attendees to complete a set of self-directed tutorials prepared by Gerald Lovel with the guidance of several mentors. The tutorials are all available on Gerald's website at develop.aaltsys.info. Gerald and I were able to round up a set of Linux machines to provide a clean slate for students to utilize for the lessons as well.

The actual attendance was pretty low, however, and at one point there were more mentors than students. It turned out OK, as the attendees that did show up all got one-on-one attention.

I spent the majority of the time speaking with a young fellow looking to get started in a career in web development. After running through one of the prepared lessons, we chatted a bit about self-learning and the job market, and poked around with a little bit of code.

Despite the low turnout to the Dev 101 session, I still believe there is a demand for tech/dev related tutorials, workshops and educational events. Hopefully the newly formed Memphis Technology Foundation can shine a light on local programs that already exist, and help kickstart new initiatives for providing tech training in Memphis.

Key Takeaways:

  • Promote your event in advance to get people to show up.
  • People eager to learn make good students.
  • Self directed learning is hard, but finding (affordable) formal tech education is also a challenge.

Lunch

Three years ago at TechCamp I went through the buffet line and faced a not-uncommon problem for introverts around the world – take the safe route and sit alone, or swallow my anxiety and sit with strangers. I don’t actually remember what I did then, but this year I had a completely opposite conundrum…

Over the past two years, I’ve attended user groups, social events, tech events, and even started a new meetup group. In other words, I dove head first into the #memtech community and was repaid in kind with a great group of friends and acquaintances.

So this year after I worked my way through the BBQ buffet line I was greeted with a room full of familiar faces, and my challenge was to decide which of these great folks I should sit with. It was nice.

Key Takeaways:

  • The Memphis Tech Community is welcoming and friendly.
  • I have a real problem saying no to a third helping of pulled pork.

Brad’s Marketing Talk

The first session I was able to attend was given by Brad, entitled “My Startup Failed and It’s All Marketing’s Fault.” It was a poignant account of the rise and fall (mostly fall) of his startup company Work For Pie, and a cautionary tale for new and existing entrepreneurs to pay more attention to marketing.

Tales of personal suffering are always captivating, but aside from that, Brad has a knack for public speaking and definitely continued his tradition of excellent presentations. It takes guts to admit failure, and a special kind of person to not only learn from his own mistakes, but help others learn from them as well.

Key Takeaways:

  • Engineering as Marketing is a thing. Widgets, APIs and micro-sites are effective marketing tools.
  • Many (most?) hugely successful startups achieved massive growth through strategic partnerships. And nobody talks about it.
  • Making mistakes and failing at stuff sucks. But you learn and move on.

Daniel’s Funhouse

The second session of my afternoon was Daniel's Stupid Programming Tricks workshop. Daniel did his best to live-code and demo some fun hardware/toys, but as anyone who has ever presented at a tech conference knows: live demos and code will never, ever work, no matter how many times they were successful at home.

Here’s the best part, though. As Daniel was presenting, members of the audience wouldn’t hesitate to troubleshoot and shout out suggestions. Not only that, a few people got up and jumped into the driver's seat to fix some Sphero code, or volunteer an API key, or help hook up some wires to apples and bananas, or present the latest greatest IRC bot script.

It was less of a presentation/workshop, and more like a bunch of pals hanging out in someone's garage tinkering around with toys and code. I'm not sure if this was Daniel's intent, but it was a lot of fun and I commend him for taking a risk on a more ‘interactive’ format for his session.

Key Takeaways:

  • Using fruit as a keyboard is generally a bad idea.
  • I need to spend less time working, and more time in IRC.
  • Twilio makes it way too easy to make prank calls.
  • I’m not the only one who has no idea what to do with Sphero.

Nathanael Talks About SVG

The third and final presentation I attended was given by one of my co-workers, Nathanael. His presentation on using SVGs on the web was jam-packed with useful information and practical tips on using vectors in a web project. And for someone relatively young and new to conference speaking, his delivery and content were pretty spot-on.

The only problem was hardly anyone was around to hear it :(

There were about three attendees in the audience that weren't Nathanael's co-workers. From what I gathered, most of the other sessions were just as sparsely populated. A good portion of the 80+ conference-goers for the day had decided they'd had enough, which was a shame. Luckily Nathanael is a good sport, and we can probably provide a bigger, better audience at a Web Workers meetup sometime in the near future.

Key Takeaways:

  • You absolutely should be using SVG on your website.
  • Vector animation is neat, and no longer requires Flash.
  • We need to figure out a way to keep people from leaving TechCamp early.

The End

By the time the closing remarks rolled around, the crowd was mostly organizers, volunteers, and #memtech regulars. We congratulated ourselves on a job well done, and helped ourselves to leftover T-Shirts.

Overall I think the day was a success. A few hiccups are natural to an all-volunteer event. The mission of preserving the momentum started by Dave Barger and inducting a new generation of TechCamp organizers was accomplished. Until next year!

HACKmemphis 2014


This year I attended HACKmemphis for the first time, and it did not disappoint. Several of my coworkers were on the planning committee, leading the charge. They did an amazing job putting on a weekend event for the Memphis tech community.

The hackathon kicked off Friday night, and I had two goals in mind:

  1. Make some progress on a ‘day-job’ related project.
  2. Polish up my upcoming talk for Nodevember.

Neither of these things happened.

The draw of the 3D printers, Raspberry Pis, various doodads and thingamajigs, and the energy of the crowd in general was more than enough to pull me away from any ‘real’ work. During the brainstorming session, I came up with an idea that would allow me to play with as many things as possible in the shortest amount of time…

The Project

The result of my weekend hackery was an internet connected ‘meter’ that would indicate the intensity of Tweets with a given hashtag – a.k.a. the Tweet-O-Meter.

For the demo, I had the needle move forward (clockwise) every time someone tweeted ‘HACKmemphis’. The needle would recede every few minutes if a tweet didn’t appear.

tweetthing1

Above is an early prototype that barely functioned Saturday evening. The software worked, and the meter indicator was a high quality mix of cardboard, paper and tape.

Below is the “finished” product with a freshly printed needle and a tablet-based dial (that also displayed the most recent tweet).

tweetthing2

During Sunday morning’s show and tell session, the HACKmemphis Tweet-O-Meter demo went off without a hitch. Mission complete!

The Nuts & Bolts

Hands down, the best aspect of this (or any) hackathon is the collaboration. Even though I largely conceived of and built the Tweet-O-Meter myself, I couldn't have pulled it off without input and assistance from the other attendees.

The Raspberry Pi and servo were graciously donated by the HACKmemphis sponsor SparkFun, and the representatives from GitHub lent a helping hand getting it wired up.

The software running the show was all open source. NodeJS provided the plumbing, and the Cylon.js library helped get the parts moving.

One attendee whipped up a custom 3D model for the meter in a frighteningly short amount of time. The Mid South Makers brought a phalanx of 3D printers and were gracious enough to print all the necessary bits and pieces for the Tweet-O-Meter, including:

  • A custom stand / servo enclosure
  • A custom needle (inconspicuously shaped like a certain web agency logo)
  • A Raspberry Pi case


Also, for much of the weekend, I had a partner in crime (Jonathan L) building a mirror project with an Arduino instead of an RPi. Although we diverged quite a bit in our implementation, it was still very much a collaborative effort. There was plenty of back-and-forth and sharing of ideas.

Our code is up on GitHub for the curious.

Fellowship of the Nerds

Aside from smashing my fingers on keyboards, staring at glowing screens, and poking at Raspberry Pi pins, I did plenty of wandering around, eating, drinking, and carrying on with other attendees. There were plenty of familiar faces, but many new friendly folks to chat with as well. Thursday night’s GitHub sponsored drink-up was also super fun.


All-in-all, a great experience. The organizers did an amazing job, and I’ll be looking forward to next year.

JavaScript Powered Stuff

CODE | SLIDES

What Did I Just Watch in That Video?

That video was the culmination of me experimenting with a Spark Core device, the Johnny Five library, a Sphero toy, and the Cylon.js library (along with some other JavaScripty stuff).

Here’s what is really going on in the video…

My computer is running a Node.JS application of my own creation called owLED (available on GitHub).

Connected to my computer is a Spark Core device (basically a wireless Arduino). Attached to the Spark Core are 2 LED lights – one red, one greenish-white – and a pushbutton, all connected with some wires via a breadboard. See the amateurish image below for something that resembles what I put together.

owLED_bb

The owLED Node.JS application does a few things…

1. Serves a page with a picture of an owl at http://localhost:3000
2. Creates a Socket.IO connection between the browser and the Node app
3. Loads a Johnny-Five module that blinks the LED lights when the button is pushed, then emits an event when the blinking is complete (along with the on/off status of each LED).
4. Loads a Cylon.js module to connect to Sphero. The module exposes a function to change Sphero’s color, and roll Sphero ‘forward’ a short distance.

With all these pieces working in concert, we have a (crappy) little game! When I push the button on the Spark Core, the LED lights blink randomly for a couple seconds then stop. The LEDs can be either on or off when the blinking sequence ends.

Meanwhile, in the browser, players try to guess if the LEDs will be on or off when the blinking ends. They do this by turning the Owl eyes ‘on’ or ‘off’. If the owl eyes match the LEDs, a point is scored. If a point is scored (by any player) Sphero rolls forward!


Why Did You Do This?

I went to JSConf US this year (2014) and got a Spark Core device in my swag bag. We also spent a whole day playing around with NodeBots (I did the NodeRockets track)! In the spirit of community and learning and what-not, I decided to demo some of the cool stuff I learned about to the local Memphis Tech community at a Meetup event. The OwLED Guessing Game was what I came up with. It demo’d some cool JS libraries, and took advantage of what hardware I had available.

The presentation was on July 17th, 2014 and kinda flopped. Despite tons of preparation and ensuring that everything would work right, it didn’t. I only had a 15 minute speaking slot, and the Arduino I was using refused to blink during the live demo. It worked just fine an hour earlier, and of course, still works fine now, but alas. Murphy’s law was in full effect.

I Want to Try, But Don’t Have a Spark Core or Sphero…

No worries. The OWL portion can work on its own with the ‘owled-fake.js’ module swapped in for the ‘owled.js’ module. The Sphero code is on a separate branch from master in the GitHub repo. Take a look at the instructions in the README.

Also, there is alternate code in the ‘owled.js’ module for a regular Arduino Uno. A hastily drawn diagram is below…

ArduinoOWLED_bb

Anything Else?

Here’s all the things I used in a big list!

  • Node.JS – Acts as the ‘hub’
  • Express – Serves the web page
  • Socket.IO – Allows events to be ‘emitted’ between the browser and the server in real-time.
  • AngularJS – A nice front-end framework to build the client-side functionality.
  • Redis – Used to keep track of all the blinking outcomes (click the ☰ icon in the browser).
  • Spark Core – The original hardware device used.
  • VoodooSpark Firmware – The firmware used on the Spark Core.
  • Johnny Five – The Node.JS lib to interact with Arduinos.
  • Spark-IO Node Package – Spark Core adapter for Johnny Five.
  • Sphero – A remote controlled, programmable, wireless sphere.
  • Cylon.js – A pretty badass project for controlling hardware with JavaScript.
  • Cylon-Sphero – A Cylon adapter to control Sphero.
  • Fritzing – Used to draw the Arduino diagrams.
  • NodeBots – For inspiration.
