
A Sample CodeIgniter Application with Login And Session

Code Igniter App Screenshot

CODE | DEMO

UPDATE: This is the first in a three-part series on CodeIgniter, Redis, and Socket.IO integration. Please be sure to read the other posts as well.
Part 1: You are here.
Part 2: Use Redis instead of MySQL for CodeIgniter Session Data
Part 3: Live Updates in CodeIgniter with Socket.IO and Redis

I was assigned to work on a PHP project a few weeks back that utilized the CodeIgniter framework. I’ve used MVC frameworks in the past in other languages, and because CodeIgniter does not deviate too far from common MVC patterns, it was pretty easy to pick up. To get myself up to speed, I put together a sample project based on what I learned from the documentation and a few tutorials. It’s a small microblogging app with very basic user auth and CRUD functionality. The basics of it go like this…

  • User login screen with basic password authentication.
  • Each user is assigned a ‘team’ number. Users can only view posts from their teammates.
  • An admin can view all posts from all users.
  • Each user has a ‘tagline’.
  • The tagline can be edited by clicking on it and typing something new (modern browsers only, as it uses ‘contenteditable’).
  • Hovering over a user’s avatar reveals the username and tagline for that user.
  • Admins can create new user accounts.
  • A user’s own messages appear below their profile. This is currently limited to 5 posts.
  • Careful, there is not much form validation going on at the moment.

Building it gave me a good feel for the basic mechanics of CodeIgniter and allowed me to brush up on some PHP. The real motivation behind this project, however, is to eventually work in a Socket.IO implementation to allow real time updates for the user. I’ve been eyeing NodeJS for quite some time now, but never really had cause to use it. Fortunately, the project I was working on needed a more robust and scalable ‘real time’ framework to replace a rudimentary long-polling system. Socket.IO would have been a pretty good solution, in my opinion. Unfortunately, the project was cancelled before I could get started. But since I’ve already got the ball rolling on this sample application, I figure I might as well finish, and learn a few things in case a situation like this arises again. You never know…

The Setup

The front end of the application uses Twitter Bootstrap for styling and layout, jQuery for client-side interactivity, PHP and CodeIgniter for most of the functionality, and MySQL for data storage. This is a fairly common toolset that runs on most L/W/M/AMP-style environment stacks. The code available on GitHub has everything you need to run the app yourself, provided you have a web server, MySQL, and PHP 5.3 or greater.

You’ll need to create a database (preferably called ‘cisock’), and then import cisock.sql, located in the root of the ‘part_one’ folder in the repository. You can do this with the following command from the command line (be sure to change ‘yourusername’ and ‘/path/to/’ to match your local setup):

mysql -u yourusername -p -h localhost cisock < /path/to/cisock.sql

Once you’ve gotten the code checked out into your web root and the SQL imported, you’ll need to do some slight configuration before getting started. In ‘part_one/application/config/config.php’ you will need to change line 17 to reflect your local URL. If you simply cloned the project directly into your ‘localhost’ web root, then no changes will likely be needed. A similar fix is necessary in ‘part_one/assets/js/main.js’ line 6. The context root of your application goes here. Again, if you cloned to your web root, the default value should be fine. Ideally, the context root should only have to be configured in one place, but it is what it is for now.
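
I won’t reproduce main.js here, but the line in question is presumably just a value holding the app’s context root. As a purely hypothetical sketch (the actual variable name on line 6 of the repo may differ), it amounts to something like this:

// part_one/assets/js/main.js (hypothetical sketch – check line 6 of your copy for the real name)
var contextRoot = '/part_one/'; // the path where the app lives under your web root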

Secondly, the settings in ‘part_one/application/config/database.php’ must be set to reflect your local database configuration. The following entries are specific to your local environment:

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'yourdatabaseusernamehere';
$db['default']['password'] = 'yourdatabasepasswordhere';
$db['default']['database'] = 'cisock'; //or use the database name you chose, if it is different

Once that is done, you should be able to navigate your browser to the /part_one/ directory and see the login screen.

To recap:
  1. Clone the repo
  2. Set up the database and import the tables from cisock.sql
  3. Edit database.php and also config.php and main.js if necessary

The Code

I tried to follow suggested CodeIgniter conventions as closely as possible. As such, the file and directory structure is pretty much as it is out of the box. I altered /application/config/routes.php to set the login controller as the entry point for the application like so:

$route['default_controller'] = "login";

The index method in /application/controllers/login.php checks to see if the user is logged in, and if not, redirects to the login screen by calling the show_login function. The code immediately below is login/index, which runs when the application’s URL is loaded in the browser.

function index() {
    if( $this->session->userdata('isLoggedIn') ) {
        redirect('/main/show_main');
    } else {
        $this->show_login(false);
    }
}

Because the ‘isLoggedIn’ session variable is not yet set (and therefore falsy), the show_login() function is called. The ‘false’ argument indicates that an error message should not be shown.

function show_login( $show_error = false ) {
    $data['error'] = $show_error;

    $this->load->helper('form');
    $this->load->view('login',$data);
}

That last line there: $this->load->view('login',$data); is what opens up the login view ( /application/views/login.php ). When the user types in credentials and clicks the ‘sign-in’ button, the login_user function is called through a normal form POST, as indicated by this line of code in /application/views/login.php: <?php echo form_open('login/login_user'); ?>. The form_open function is part of CodeIgniter’s Form Helper, which generates the opening form tag for you.

The login/login_user function gives us our first taste of a CodeIgniter model. It loads up an instance of a user model and calls the validate_user method on that model, passing it the email and password typed in by the user. Take a look at the code below for the entire sequence.

  function login_user() {
      // Create an instance of the user model
      $this->load->model('user_m');

      // Grab the email and password from the form POST
      $email = $this->input->post('email');
      $pass  = $this->input->post('password');

      //Ensure values exist for email and pass, and validate the user's credentials
      if( $email && $pass && $this->user_m->validate_user($email,$pass)) {
          // If the user is valid, redirect to the main view
          redirect('/main/show_main');
      } else {
          // Otherwise show the login screen with an error message.
          $this->show_login(true);
      }
  }

In the ‘user’ model, a couple of interesting things happen. First, a query is built using CodeIgniter’s ActiveRecord implementation. The email and password entered by the user are compared against the user table in the database to see if the credentials exist. If so, the corresponding record is retrieved, and the data from that record is used to set session variables via the model’s set_session function, which wraps the set_userdata method of CodeIgniter’s Session class. All the code for this is in /application/models/user_m.php and can be seen below.

var $details;

function validate_user( $email, $password ) {
    // Build a query to retrieve the user's details
    // based on the received username and password
    $this->db->from('user');
    $this->db->where('email',$email );
    $this->db->where( 'password', sha1($password) );
    $login = $this->db->get()->result();

    // The results of the query are stored in $login.
    // If a value exists, then the user account exists and is validated
    if ( is_array($login) && count($login) == 1 ) {
        // Set the users details into the $details property of this class
        $this->details = $login[0];
        // Call set_session to set the user's session vars via CodeIgniter
        $this->set_session();
        return true;
    }

    return false;
}

function set_session() {
    // session->set_userdata is a CodeIgniter function that
    // stores data in a cookie in the user's browser.  Some of the values are built in
    // to CodeIgniter, others are added (like the IP address).  See CodeIgniter's documentation for details.
    $this->session->set_userdata( array(
            'id'=>$this->details->id,
            'name'=> $this->details->firstName . ' ' . $this->details->lastName,
            'email'=>$this->details->email,
            'avatar'=>$this->details->avatar,
            'tagline'=>$this->details->tagline,
            'isAdmin'=>$this->details->isAdmin,
            'teamId'=>$this->details->teamId,
            'isLoggedIn'=>true
        )
    );
}

So now that the user is authenticated and their session info is set in a cookie, CodeIgniter will take an extra step and store the user’s IP address, session ID, user agent string and last activity timestamp in the database. So when the user logs out, closes the browser, or is idle for too long, the session will expire and be cleared from the database. If someone with a cookie containing the same session ID then tries to connect to the application, it will be invalid because it won’t match any of the sessions stored in the database. Check out the Session class documentation for a more thorough explanation.

So now that the user is logged in, the ‘main’ controller can do its thing. The show_main function runs just before loading the main view, and does a number of things to prepare for displaying the view to the user. The user’s details are used to change certain parts of the view, such as which posts to display (team only, or everyone) and the admin controls.

The show_main function grabs some data from the user’s session, retrieves all of the user’s posted messages, counts the total number of messages posted by the user, and retrieves the messages from other users. All of this info is placed into the $data object and passed to the ‘main’ view (/application/views/main.php). Much of the heavy lifting is taken care of by ActiveRecord calls in the Post model (/application/models/post_m.php).

That’s about it, as far as logging in goes. Now that everything is set up using conventional CodeIgniter practices, I can begin the process of converting the server side session data to be stored in Redis, rather than in a MySQL table…

UPDATE:
See Part 2: Use Redis instead of MySQL for CodeIgniter Session Data
and Part 3: Live Updates in CodeIgniter with Socket.IO and Redis

My First jQuery Plugin: VintageTxt

VintageTxt Default

DEMO | SOURCE

Ever hearken back to the good ol’ days of the Apple II, with its monochrome screen and visible scanlines? No? Me neither, really. I don’t even know what hearken means. But I do know that vintage looking stuff is cool, and people love it when all that is old is new again. So here is a jQuery plugin (new) that injects a green monochromatic, scanlined, fake computer screen (old) right onto any webpage. You can even make it type out a bunch of text, one character at a time – like that stupid computer from WarGames. No annoying voice-over, though. And of course there is an input prompt, so you can bang away at your keyboard while pretending to break through 8 levels of top secret encryption to re-route the protocols through the mainframe and unleash your most advanced viruses.

Why make this? Fun, naturally. Plus I’ve been wondering lately what it would be like to make a jQuery plugin. I know I’m a little late to the jQuery party, and all the cool kids have moved on to bigger and better things, but the ability to whip up a jQuery plugin seems like one of those skills a self-respecting web developer should acquire at some point. Two helpful guides stocked the pot, and I threw in a bit of my own spicy sauce as I went along. I’m fairly satisfied with the end result, and the knowledge gained. To play around with a fully functional demo, click the link or image above. But to have some real fun, go grab the source from GitHub and VintageTxt your face off!

Construct 2 HTML5 Mobile Game: Turkey Trot of Doom!!

So it’s a week before Thanksgiving and I get an email from the hostess of the dinner I plan on attending. Included was a list of dishes assigned for guests to bring. The usual suspects were present: stuffing, cranberry sauce, pie, lots of pie…

But when I finally got to my name, a ‘holiday themed video game’ was requested.

“Ha!” I thought, funny joke, can’t be done. But then I thought some more. Perhaps it could be done. I’ve always wanted to try making one of those newfangled mobile HTML5 games that all the kids are talking about, and here was the perfect motivation. To make things even more interesting, I decided to try out Construct 2 – a drag-n-drop game maker that exports to HTML5. I haven’t used anything of the sort since Klik ‘n Play back in 1995. Construct 2 is much, much cooler.

The interface is mostly intuitive, but reading the manual is definitely necessary to do anything meaningful. After a few hours of fiddling with the sample projects and reading bits and pieces from the manual and a few tutorials I was confident enough to at least begin my own project. After I got started, many grand ideas popped into my head, but there just wasn’t time. I had an hour here, and an hour there to work, and Turkey Day was quickly approaching. In the end, I managed to get one major feature from my wishlist working – accelerometer controls. When playing the game on a mobile browser, the player moves by tilting the device. It works pretty well on a modern iPhone (4, 4s, 5) and iPads. Doesn’t work so great on Android, but you can still get the gist of it.

Rather than go into too much detail on how the game was assembled within Construct 2, I’ll just post the game file for download. There’s nothing terribly complicated going on, and the way the event sheets are organized, it reads almost like a book. You can download the file here.

To play the game, simply visit http://turkey.thebogstras.com. If you are using a mobile device, make sure it is in landscape orientation with the home button on the right. It takes a while to get used to the tilt controls, and you will probably die very quickly. If playing on a desktop browser, click the mouse where you want the character to move.

Remember, this game was never intended to be fun. Its sole purpose was to be a game, and have some Thanksgivingy stuff in it. Fun was never a requirement.

Credits for artwork:
Boss Turkey – Puppet Nightmares
Turkey Leg – Kenj
Mini Turkey – MikariStar
Evil Turkeys – DMN666

AngularJS SignIt! – Custom directives and form controls

Note: This is a companion post to Example CRUD App – Starring AngularJS, Backbone, Parse, StackMob and Yeoman. If you haven’t read that yet, please do so, otherwise this might not make much sense.

The most prominent feature, by far, of AngularJS SignIt! is the signature pad. It’s a fixed-size canvas that can be drawn upon with a mouse pointer or finger. The code for this wonderful widget is provided as a jQuery plugin by Thomas Bradley. There are a few more features for the signature pad than I use in this app, and I encourage you to check out the full documentation if you get a chance. It’s pretty great.

In order to implement the signature pad into my form, it must be a required field, and its data must be included with the other fields when the form is submitted. The sig pad works by creating objects of x/y coordinates that correspond to marks on the canvas. When drawing on the pad, all that data gets set into a hidden form field. The data from that field is what needs to get into the Angular scope, and get validated by Angular’s form validation routine. Oh, and Angular somehow needs to fire the custom validation function built into the signature pad.
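
For reference, the signature the plugin produces is just an array of stroke-segment objects serialized as JSON. The sketch below shows the rough shape; the key names are from my reading of the plugin and may differ slightly, so treat them as an assumption and check the plugin docs.

// Roughly what the sig pad writes into its hidden field, and what eventually
// lands in Angular scope as user.signature: one object per mouse/touch movement.
var signatureData = [
  { lx: 20, ly: 34, mx: 20, my: 34 },  // lx/ly: line-to point, mx/my: move-to point
  { lx: 21, ly: 33, mx: 20, my: 34 }
];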

Luckily, this is the type of thing directives were built for. Custom directives let you create custom HTML tags that get parsed and rendered by Angular. So to start out, I created a custom directive called sigpad that is used by placing a <sigpad></sigpad> element in my HTML. The sigpad tag then gets replaced by the directive’s template HTML. The template for the HTML5 signature pad is just the default snippet taken directly from the signature pad documentation. See below:

<div class="control sigPad">
  <div class="sig sigWrapper">
    class="pad" width="436" height="120" ng-mouseup="updateModel()"></canvas>
  </div>
</div>

The logic for the directive is defined in a separate Angular module (see directives.js for the full code). Take a look at the code below, and be sure to review Angular’s documentation on directives. The Angular docs give a great conceptual overview, but the examples are a bit lacking. I reviewed the docs many, many times, and still needed some outside help to get this working correctly (thanks P.B. Darwin!).

My biggest stumbling block was not using ngModelController, and instead trying to manipulate scope data directly. When working with a form, Angular provides all sorts of awesome code to work with data and do some validation. Basically, it all went down like this…

Insert the sigpad directive into the form with

<sigpad ng-model='user.signature' clearBtn=".clearButton" name="signature" required></sigpad>

Now Angular knows to replace the sigpad element with the template defined earlier. After this happens, Angular runs the linking function which contains all the logic for the directive. The little snippet shown below is in the linking function. It uses jQuery to select the template element (passed into the linking function as ‘element’) and runs the signaturePad function from the signature pad API. This is what creates the actual, drawable canvas.

var sigPadAPI = $(element).signaturePad({
                             drawOnly:true,
                             lineColour: '#FFF'
                           });

The ng-model='user.signature' bit is key. This is how data is shared between the signature pad and Angular. Also, go back up and look at the template. You will see ng-mouseup="updateModel()" as an attribute of the canvas element. This tells Angular to run the updateModel() function when your mouse click ends on the signature pad. The updateModel() function is defined in the linking function of the sigpad directive.

When the updateModel() function is executed, it will wait for a split second so the signature pad can finish writing data to its hidden form field, then it will assign all that data to the Angular model value of the directive. Sounds confusing, and it is. The signature pad is off doing its own thing, completely oblivious to the fact that it is in an AngularJS app. It is Angular’s responsibility to grab data from the sigpad to get that data into its own scope. That is what $setViewValue is for. It hands the signature data over to Angular, so when the form is submitted, Angular has it available in scope.

Below is the entire directive for the drawable signature pad. You can see that it relies heavily on the signature pad API, but only after certain events handled by Angular have occurred.

.directive('sigpad', function($timeout){
  return {
    templateUrl: 'views/sigPad.html',   // Use a template in an external file
    restrict: 'E',                      // Must use element to invoke directive
    scope : true,                       // Create a new scope for the directive
    require: 'ngModel',                 // Require the ngModel controller for the linking function
    link: function (scope,element,attr,ctrl) {

      // Attach the Signature Pad plugin to the template and keep a reference to the signature pad as 'sigPadAPI'
      var sigPadAPI = $(element).signaturePad({
                                  drawOnly:true,
                                  lineColour: '#FFF'
                                });
     
      // Clear the canvas when the 'clear' button is clicked
      $(attr.clearbtn).on('click',function (e) {
        sigPadAPI.clearCanvas();
      });
     
      $(element).find('.pad').on('touchend',function (obj) {
        scope.updateModel();
      });

      // when the mouse is lifted from the canvas, set the signature pad data as the model value
      scope.updateModel = function() {
        $timeout(function() {
          ctrl.$setViewValue(sigPadAPI.getSignature());
        });
      };      
     
      // Render the signature data when the model has data. Otherwise clear the canvas.
      ctrl.$render = function() {
        if ( ctrl.$viewValue ) {
          sigPadAPI.regenerate(ctrl.$viewValue);
        } else {
          // This occurs when signatureData is set to null in the main controller
          sigPadAPI.clearCanvas();
        }
      };
     
      // Validate signature pad.
      // See http://docs.angularjs.org/guide/forms for more detail on how this works.
      ctrl.$parsers.unshift(function(viewValue) {
        if ( sigPadAPI.validateForm() ) {
          ctrl.$setValidity('sigpad', true);
          return viewValue;
        } else {
          ctrl.$setValidity('sigpad', false);
          return undefined;
        }
      });      
    }
  };
})

And what about the tiny signatures that show up in the signatories list? Also a custom directive. This one is smaller, but still tricky. The signature is displayed as an image on-screen, but a canvas element is still required to generate the signature from raw data before it can be converted to an image.

The directive is implemented with <regensigpad sigdata="{{signed.get('signature')}}"></regensigpad>. The ‘signed’ value is a single signature in the signature collection pulled from the back-end when the user picks a petition. The signature data from ‘signed’ is passed into the directive scope using scope: {sigdata:'@'}.

When a list of signatures is retrieved, each signature record (including first & last name, email, and signature data) goes into a table row using ngRepeat. The regensigpad directive is executed for each row. The linking function will create a canvas element and make a displayOnly signature pad from it. The signature drawing is regenerated from the data, and then the canvas is converted to PNG format.

This PNG data is then used in the pic scope value, which is bound to the ng-src of an img tag. This img tag is the directive’s template, and will be inserted into the page. The full code for this directive is below.

.directive('regensigpad',function() {
  return {
    template: '<img ng-src="{{pic}}" />',  // The directive renders as an image bound to the generated PNG data
    restrict: 'E',
    scope: {sigdata:'@'},
    link: function (scope,element,attr,ctrl) {
      // When the sigdata attribute changes...
      attr.$observe('sigdata',function (val) {
        // ... create a blank canvas template and attach the signature pad plugin
        var sigPadAPI = $('<div class="sig sigWrapper"><canvas class="pad" width="436" height="120"></canvas></div>')
                          .signaturePad({
                            displayOnly: true
                          });
        // regenerate the signature onto the canvas
        sigPadAPI.regenerate(val);
        // convert the canvas to a PNG (Newer versions of Chrome, FF, and Safari only.)
        scope.pic = sigPadAPI.getSignatureImage();
      });
    }
  };
});

But that’s not all! You might have noticed that the select box holding the names of each petition looks kinda fancy, and lets you type to filter the list. This fancy form control is the select2 widget, which is based on the Chosen library.

I didn’t have to write my own directive for it though. The Angular-UI project has already done the honors. Angular-UI is an open-source companion suite for AngularJS. It provides a whole pile of custom directives, and even a few extra filters. Many of the directives are wrappers for other widgets and open source projects, like CodeMirror, Google Maps, Twitter Bootstrap modal windows, and many more. It’s definitely worth looking into for any AngularJS project.
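
For what it’s worth, wiring in the select2 wrapper amounted to little more than listing Angular-UI as a module dependency and putting its directive attribute on the select element. The snippet below is a sketch from memory rather than code from this repo, so the module name (‘ui’) and attribute name (ui-select2) should be double-checked against whatever Angular-UI version you end up using.

// Register Angular-UI alongside the app's own modules (the module name 'ui' is an
// assumption based on the Angular-UI builds available around the time of writing).
var ngSignItApp = angular.module('ngSignItApp', ['ui', 'DataServices']);

// The petition select box in the view then just needs the directive attribute, e.g.:
//   <select ui-select2 ng-model="select2" ng-options="p.id as p.get('name') for p in petitions"></select>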

AngularJS SignIt! – Interchangeable Parse, StackMob and Backbone Services

Note: This is a companion post to Example CRUD App – Starring AngularJS, Backbone, Parse, StackMob and Yeoman. If you haven’t read that yet, please do so, otherwise this might not make much sense.

The AngularJS SignIt! application basically has three different interactions with a web service – fetch petitions, save a signature, and fetch a list of signatures based on the selected petition. That’s it – a get, a save, and a query. Initially, I was only using Parse.com to store data, so it was possible to include Parse specific objects and methods in my controller to save and get data.

But then I remembered I have a StackMob account just sitting around doing nothing, and thought I should put it to good use. So now I have two (slightly) different options for storing my signatures. Rather than jumbling up my controller with code specific to StackMob and Parse, I created a module to abstract the Parse and StackMob APIs into their own services. These services can then hide any code specific to Parse or StackMob behind a common interface used by the controller.

With the back-end(s) abstracted, all the controller needs to worry about is calling saveSignature, getPetitions, and getSignatures. Below is a severely truncated version of the Main Controller that shows the three methods in use. Notice there is no mention of Parse or StackMob.

var MainCtrl = ngSignItApp.controller('MainCtrl', function($scope,DataService) {

  // GET A LIST OF SIGNATURES FOR A PETITION
  $scope.getSignatures = function getSignatures (petitionId) {
    DataService.getSignatures(petitionId,function (results) {
      $scope.$apply(function() {
        $scope.signatureList = results;
      });
    });
  };

  // SAVE A SIGNATURE
  $scope.saveSignature = function saveSignature() {  
    DataService.saveSignature($scope.user, function() { //user is an object with firstName, lastName, email and signature attributes.
      $scope.getSignatures($scope.select2); //select2 is the value from the petition dropdown
    });  
  };

  // GET ALL PETITIONS
  DataService.getPetitions(function (results) {
    $scope.$apply(function() {
      $scope.petitionCollection = results;
      $scope.petitions = results.models;
    });
  });

});

If you look closely, you’ll see that each service method is prefixed with DataService. This is the injectable that provides either the StackMob service or the Parse service to the controller. Each of those services has its own implementation of getSignatures, saveSignature, and getPetitions. Take a look:

angular.module('DataServices', [])
// PARSE SERVICE
.factory('ParseService', function(){
    Parse.initialize("", "");
    var Signature = Parse.Object.extend("signature");
    var SignatureCollection = Parse.Collection.extend({ model: Signature });
    var Petition = Parse.Object.extend("petition");
    var PetitionCollection = Parse.Collection.extend({ model: Petition });

    var ParseService = {

      // GET ALL PETITIONS
      getPetitions : function getPetitions(callback) {
        var petitions = new PetitionCollection();
        petitions.fetch({
          success: function (results) {
              callback(petitions);
          }
        });
      },

      // SAVE A SIGNATURE
      saveSignature : function saveSignature(data, callback){
        var sig = new Signature();
        sig.save( data, {
                  success: function (obj) {callback(obj);}
        });
      },

      // GET A LIST OF SIGNATURES FOR A PETITION
      getSignatures : function getSignatures(petitionId, callback) {
        var query = new Parse.Query(Signature);
        query.equalTo("petitionId", petitionId);
        query.find({
          success: function (results) {
            callback(results);
          }
        });
      }
   
    };

    return ParseService;
})
// STACKMOB SERVICE
.factory('StackMobService', function(){
    // Init the StackMob API. This information is provided by the StackMob app dashboard
    StackMob.init({
      appName: "ngsignit",
      clientSubdomain: "",
      publicKey: "",
      apiVersion: 0
    });

    var Signature = StackMob.Model.extend( {schemaName:"signature"} );
    var SignatureCollection = StackMob.Collection.extend( { model: Signature } );
    var Petition = StackMob.Model.extend( {schemaName:"petition"} );
    var PetitionCollection = StackMob.Collection.extend( { model: Petition } );

    var StackMobService = {
     
      getPetitions : function getPetitions(callback) {
        var petitions = new PetitionCollection();
        var q = new StackMob.Collection.Query();
        petitions.query(q, {
          success: function (results) {
              callback(petitions.add(results));
          },
          error: function ( results,error) {
              alert("Collection Error: " + error.message);
          }
        });        
      },    

      saveSignature : function saveSignature(data, callback){
        var sigToSave = new Signature();
        sigToSave.set({
          firstname: data.firstName,
          lastname: data.lastName,
          petitionid: data.petitionId,
          email: data.email,
          signature: JSON.stringify(data.signature) //Also, StackMob does not allow arrays of objects, so we need to stringify the signature data and save it to a 'String' data field.
        });

        // Then save, as usual.
        sigToSave.save({},{
          success: function(result) {
            callback(result);
          },
          error: function(obj, error) {
            alert("Error: " + error.message);
          }
        });
      },

      getSignatures : function getSignatures(petitionId, callback) {
        var signatures = new SignatureCollection();
        var q = new StackMob.Collection.Query();
        var signatureArray = [];

        q.equals('petitionid',petitionId);

        signatures.query(q,{
          success: function(collection) {
            collection.each(function(item) {
              item.set({
                signature: JSON.parse(item.get('signature')),
                firstName: item.get('firstname'),
                lastName: item.get('lastname')
              });
              signatureArray.push(item);
            });
            callback(signatureArray);
          }
        });
      }
   
    };
    // The factory function returns StackMobService, which is injected into controllers.
    return StackMobService;
})

This is an abridged version of the DataServices module. To see the full code, as well as many more comments explaining it, head over to GitHub. The main point to observe here is that each service has slightly different code for getSignatures, getPetitions, and saveSignature. Also, each service has its own initialization code for its respective back-end. The controller couldn’t care less, though, because as long as the service methods accept and provide data in the right format, it’s happy.

But how does the controller know which service to use? Well, if you paid attention to the controller code, you’ll see that ‘DataService’ is injected, which is not defined yet. In the full code, there is a service defined all the way at the bottom of the file. It looks like this:

.factory('DataService', function (ParseService,StackMobService,BackboneService,$location) {
  var serviceToUse = BackboneService;
  if ( $location.absUrl().indexOf("stackmob") > 0 || $location.absUrl().indexOf("4567") > 0 ) serviceToUse = StackMobService;
  if ( $location.path() === '/parse' ) serviceToUse = ParseService;

  return serviceToUse;
});

All the other services (ParseService, StackMobService, and BackboneService) are injected into this service. In case you are wondering, BackboneService is yet another back-end service that can be used in place of the others – see the full code for details. The code above simply examines the URL and decides which service gets injected via DataService. If ‘/parse’ is the URL path (e.g. www.example.com/app/#/parse), then ParseService is returned. StackMob requires that HTML apps be hosted on its servers, so the code just checks the domain name for ‘stackmob’ and returns the StackMob service. If neither of these conditions occurs, then the BackboneService is returned, and nothing is persisted beyond the current page.
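
The real BackboneService lives in the full source on GitHub, but as a rough sketch (names guessed from the rest of this post, not copied from the repo), an in-memory implementation of the same three-method interface could look something like this, chained onto the DataServices module just like the other factories:

// A minimal in-memory 'back end' exposing the same interface as the Parse and
// StackMob services. Nothing is persisted; refresh the page and it's all gone.
.factory('BackboneService', function () {
    var signatures = new Backbone.Collection();
    var petitions  = new Backbone.Collection([
        // Hypothetical seed data so the petition dropdown isn't empty.
        { id: 'demo', name: 'Sample Petition' }
    ]);

    return {
      getPetitions : function (callback) {
        callback(petitions);
      },
      saveSignature : function (data, callback) {
        callback(signatures.add(data));
      },
      getSignatures : function (petitionId, callback) {
        callback(signatures.where({ petitionId: petitionId }));
      }
    };
})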

In retrospect, I think what I’ve got here is the beginnings of an OO-like interface – where a set of functions are defined to form a contract with the client ensuring their existence. And the implementations of that interface are a set of adapters (or is it proxies?). If I had to do this over, I would use a proper inheritance pattern with DataService as the abstract parent, and the other services as the implementations or subclasses. One of these days I’ll get around to refactoring. One day…
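
To make that last thought concrete, here is the idea from the paragraph above translated into code – a sketch only, not anything that exists in the repo: a base object defines the contract, and each back-end overrides the methods that differ.

// Hypothetical refactor: DataService as an abstract parent, back ends as subclasses.
function AbstractDataService() {}
AbstractDataService.prototype.getPetitions  = function (callback)             { throw new Error('not implemented'); };
AbstractDataService.prototype.saveSignature = function (data, callback)       { throw new Error('not implemented'); };
AbstractDataService.prototype.getSignatures = function (petitionId, callback) { throw new Error('not implemented'); };

function ParseDataService() {}
ParseDataService.prototype = Object.create(AbstractDataService.prototype);
ParseDataService.prototype.getPetitions = function (callback) {
    // Parse-specific fetch goes here, exactly as in the ParseService factory above.
};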

Full Source: https://github.com/ericterpstra/ngSignIt

Example CRUD App – Starring AngularJS, Backbone, Parse, StackMob and Yeoman

AngularJS SignIt!

After finishing my first experimental project in AngularJS, I wanted to try saving data, rather than just retrieving it. Angular has some great mechanisms for creating forms, so I put together a small application to test out more of what Angular has to offer. Oh, and I also wanted to build this using Yeoman. More on that in a minute.

The app I ended up with is a petition signer. A user chooses a petition from the select-box, enters a first name, last name and email into the form, and then signs the signature pad either by drawing with the mouse, or using a finger on a touchscreen. Hitting the Save button will capture the information, and add it to the list of signatories for the selected petition. Pretty simple, no?

The really fun part comes when the data is saved. Depending on the url, the application will choose a place to save the data. If ‘parse’ shows up in the url path, then Parse.com is used as the back-end. If the application is hosted on StackMob’s server, then data is stored with StackMob.com. If neither of these things occur, then data is just saved to a temporary Backbone collection and cleared out when you refresh the page.

Try it out!

Please note that a modern browser is needed. This will not work on IE8 or lower. I have only tested it in the latest Chrome, Firefox 15, and Safari on iOS 5. YMMV with other browsers. If I find the time, I’ll add in the fixes for IE later on.

Get the Code: https://github.com/ericterpstra/ngSignIt

A few more things…

Aside from AngularJS, Parse, and StackMob, I threw in plenty of other nifty libraries and snippets to get this thing working and looking (kinda) nice.

  • The select2 component, wrapped in a directive provided by Angular-UI. Angular-UI is a great add-on library for AngularJS that provides some neat widgets and additional functionality to directives.
  • HTML5 SignaturePad jQuery plugin from Thomas Bradley.
  • A responsive stylesheet (try it on mobile!)
  • A font-face generated from fontsquirrel.com using Estrya’s Handwriting font.
  • Stack of paper CSS3 snippet from CSS-Tricks.com

And, of course, Yeoman was involved. There is tons of functionality built into Yeoman for handling unit tests, live compilation of SASS and CoffeeScript, dependency management, and other stuff. I basically just used it for its AngularJS scaffolding, and to download some libraries via Bower. It went something like this:

yeoman init angular ngSignIt
cd ngSignIt
yeoman install jquery
yeoman install json2
... etc ...

Then I went about building my application as usual. Upon completion, I ran

yeoman build

to package everything that is needed into the ‘dist’ folder, and plopped that on my web server. Done.

// TODO

In upcoming blog posts, I’ll go over some of the more interesting bits of code – namely the services module and custom directives. There are a fair number of comments in the code now, but I think a higher-level overview would add some value. Stay tuned.

UPDATE!

Here are the links to the follow-up articles:
AngularJS SignIt! – Custom directives and form controls
AngularJS SignIt! – Interchangeable Parse, StackMob and Backbone Services

The $50 JavaScript Dev. Server – Node .8, Yeoman, and Cloud9 on ARM

MK802 II

I recently bought a Mini PC. It’s one of those ‘Android on a stick’ devices from China that cost anywhere from $50 to $90. This one in particular is an MK802 II. It’s got an Allwinner A10 System on a Chip (ARM Cortex-A8, Mali400 graphics), 1GB DDR3, and 4GB of onboard storage with Android 4.0 pre-installed. It came with some USB adapters and a nice box, all for $55 shipped. The best part is, you can slip in a micro SD card with an ARM compatible Linux image, and it will boot!

Linaro 12.07

Miniand.com has several SD card images ready to download for this particular device, and I settled on a custom build of Linaro. It’s an Ubuntu 12.04 derivative with support for ARM devices. The custom build on Miniand has Mali 400 gfx drivers, and apparently a bunch of ARM related kernel optimizations and drivers for the MK802.

It was super easy to install – just download, extract, and write the image to the card with a couple of dd commands.
File: linaro-alip-armhf-t4.7z
Commands:

dd if=/dev/zero of=/dev/sdz bs=1M count=16 conv=fsync
dd if=linaro-alip-armhf-t2.img of=/dev/sdz bs=16k conv=fsync

NodeJS 0.8.11 and Yeoman

With the MK802’s primary function being a web development server, my first task after getting the OS up and running was installing Yeoman. Linaro includes apt-get, so the installation process was mostly identical to what I have outlined in this earlier post. There were a couple key differences, though, that nearly derailed the installation.

Because most default build configs are set up for x86 systems, I had to take an alternate route with Git, NodeJS, and PhantomJS on the MK802’s ARM architecture. For Git and Phantom, I went with the pre-compiled packages available in the apt repository. I won’t get bleeding edge versions, but from what I can tell so far, the repo versions work just fine.

Except for Node. The apt repository installs NodeJS 0.6, and does not make 0.8 available. Problem is, Yeoman requires 0.8, and the default NodeJS 0.8.11 build fails on the MK802. Luckily, there’s a pretty solid community of very-tiny-PC enthusiasts, and I was able to find a solution. All that’s needed is adding a couple lines to a file before building.

Open up deps/v8/build/common.gypi in the Node source tree and change:

{
  'variables': {
    'use_system_v8%': 0,
    'msvs_use_common_release': 0,
    ...

to this:

{
  'variables': {
    'armv7%':'1',
    'arm_neon%':'1',
    'use_system_v8%': 0,
    'msvs_use_common_release': 0,
    ....

Compiling NodeJS took about an hour on the MK802. I think there were some additional arguments for the make command, but unfortunately I didn’t write those down. They probably weren’t that important anyway. Sorry :P

Cloud9

Here’s where things got really hairy. Cloud9 just did not want to cooperate. In retrospect, it really only came down to one (maybe two) issues, but working through the entire installation took the better part of my Saturday.

It all started with the ‘sm’ module from npm. Apparently ‘sm’ is the SourceMint package manager used by Cloud9 to keep its own node packages separate from those of npm. Installing sm via npm went OK, but downloading the Cloud9 source with sm failed. I had to use git instead.

Download and install Cloud9:

git clone https://github.com/ajaxorg/cloud9.git cloud9
cd cloud9
sm install

A ton of scripts and package installations executed, and I eventually ended up with an error like this:

Checking for program g++ or c++          : /usr/bin/g++
Checking for program cpp                 : /usr/bin/cpp
Checking for program ar                  : /usr/bin/ar
Checking for program ranlib              : /usr/bin/ranlib
Checking for g++                         : ok  
Checking for node path                   : not found
Checking for node prefix                 : ok /usr/local
'configure' finished successfully (0.970s)
Waf: Entering directory `/home/root/o3/build'
[1/3] cxx: hosts/node-o3/sh_node.cc -> build/default/hosts/node-o3/sh_node_1.o
16:01:40 runner system command -> ['/usr/bin/g++', '-g', '-O3', '-msse2', '-ffast-math', '-fPIC', '-DPIC', '-D_LARGEFILE_SOURCE', '-D_FILE_OFFSET_BITS=64', '-D_GNU_SOURCE', '-DEV_MULTIPLICITY=0', '-Idefault/include', '-I../include', '-Idefault/hosts', '-I../hosts', '-Idefault/modules', '-I../modules', '-Idefault/deps', '-I../deps', '-I/usr/local/include/node', '../hosts/node-o3/sh_node.cc', '-c', '-o', 'default/hosts/node-o3/sh_node_1.o']
cc1plus: error: unrecognized command line option "-msse2"

Luckily, I discovered a related issue reported on GitHub. The libxml submodule is the offending culprit. The module depends on Ajax.org’s o3 project, which cannot compile on ARM as-is. The problem is the ‘-msse2’ compiler flag, which is x86-specific and not available on ARMv7 CPUs. Fortunately, someone has figured out how to modify o3 and libxml so they will build properly.

I checked out a customized fork of libxml into cloud9/node_modules/ and ran the build script (build.sh). Then I re-ran sm install for Cloud9 and everything went off without a hitch.

Thanks to Ian Corbitt’s blog and libxml fork. Below are the useful commands that got things going for me (start from the cloud9 directory):

cd node_modules
git clone --recursive git://github.com/iancorbitt/node-libxml.git libxml
cd libxml
./build.sh

After the installation was complete, I could start Cloud9 with ./bin/cloud9.sh -w /var/www/myProject. Then firing up a web browser and going to localhost:3131 brought me to the IDE.

Access Cloud9 Remotely

Running a web browser on the MK802 is painfully slow, and I don’t always have physical access to it anyway. I would much rather use my laptop or desktop to run the Cloud9 client. The first time I attempted to open Cloud9 from a remote computer, I was very disappointed. All I got was a 503 error with no information. Turns out Ubuntu does not allow connections willy-nilly on random ports.

Enter Apache and reverse proxying. I never knew about this before, but I’m glad I ran across it. With a simple VirtualHost file entry, Apache can accept requests for specified domains & ports, and forward them elsewhere. So in my remote browser, I can call the IP address or domain alias of the Cloud9 server over port 80, and Apache will forward that request to localhost:3131. Great!

All that’s needed is a couple of Apache mods enabled:

sudo a2enmod proxy_http
sudo a2enmod proxy
sudo service apache2 restart

Then create an empty file in /etc/apache2/sites-enabled, fill it with the information below, and restart Apache:


<VirtualHost *:80>
    ServerName c9.mk802
    ProxyPreserveHost On
    ProxyPass / http://localhost:3131/
    ProxyPassReverse / http://localhost:3131/
</VirtualHost>

Then in the /etc/hosts file of my remote computer (where I will be using Cloud9), I add the entry: 192.168.1.xxx c9.mk802 (where xxx is the last octet of my MK802’s IP address).

Opening up Chrome and tapping http://c9.mk802 into the address bar brought up the Cloud9 interface running on my tiny PC in another room. Excellent!

Except I couldn’t type anything or create any new files. Bummer :(

Cloud9 had started in read-only mode. After a bunch of trial and error, I finally found some advice stating that a local installation of Cloud9 is only fully accessible from the host specified in cloud9/configs/default.js.

Open default.js and change all instances of “localhost” to “0.0.0.0”. Restart Cloud9, and all should be good and merry. Just make sure that Cloud9 is safely tucked away, as this really is not very secure.

It was pretty satisfying being able to create a fully functional development environment with two commands:

cd /var/www/myProject && yeoman init angular
~/Applications/cloud9/bin/cloud9.bin -w /var/www/myProject/

Now whether standing at my beefy desktop, or lounging with my li’l notebook, I can access the same codebase with the same IDE. Pretty nifty!

Sieve of Eratosthenes Visualized in Real Time with Python

Spoiler Alert! This article reveals the answer to Project Euler problem #10.


Try it!

A few months ago I worked through the first 10 problems of Project Euler. I did OK up to problem #10. The question involved summing all the prime numbers below two million. There were earlier questions that involved finding primes, and I cobbled together some JavaScript with the help of underscore.js that got the job done – albeit very slowly. My homespun solution was just too durn slow for problem #10. I turned to the internet, and it provided me a blazing fast solution. See below.

var ten = function sieve(max) {
    var D = [],
        primes = [],
        sum = 0;
    for (var q = 2; q < max; q++) {
        if (D[q]) {
            for (var i = 0; i < D[q].length; i++) {
                var p = D[q][i];
                if (D[p + q])
                    D[p + q].push(p);
                else
                    D[p + q] = [p];
            }
            delete D[q];
        } else {
            sum += q;
            primes.push(q);
            if (q * q < max)
                D[q * q] = [q];
        }
    }
    return sum;
}(2000001);

So yeah, I cheated. At the time, I had no idea how this code worked, I just knew that it did. Since then, I’ve occasionally revisited this code to take another shot at figuring out what this mysterious D array is doing with its Ps and Qs. However, trace after trace and many breakpoints later I was still scratching my head.

But just yesterday I stumbled across Python Tutor. This site is simply amazing. These geniuses have developed an interactive online code editor that spits out pictures based on what your code is doing. You even get little playhead controls to step through each line! This wizardry finally allowed me to scratch the itch that’s been bugging me for a while.

I returned to the site where I ‘borrowed’ the JavaScript algorithm for the Sieve of E., and found the Python equivalent. The syntax is a bit different, but it’s essentially doing the same thing. The code looks like this:

def eratosthenes(maxnum):
   D = {}  # map composite integers to primes witnessing their compositeness
   q = 2   # first integer to test for primality
   while q <= maxnum:
       if q not in D:
           yield q       # not marked composite, must be prime
           D[q*q] = [q]  # first multiple of q not already marked
       else:
           for p in D[q]:  # move each witness to its next multiple
               D.setdefault(p+q,[]).append(p)
           del D[q]        # no longer need D[q], free memory
       q += 1

for p in eratosthenes(19): print p

The comments were helpful in determining that it was doing the same thing as the JS algorithm, but when I plugged this code into Python Tutor, WHAMMO! It hit me like a ton of bricks. D is a dictionary!!! D is where non-prime numbers are stored and created from prime factors.

So we iterate over all the numbers from 2 up to the limit, with q as the iterator index. For each value of q, check the dictionary (D). If q does not exist as a key in D, then the number is prime. For each prime number, find its square and store it in the dictionary for later. So when q = 2, the first entry in D is 4. Then when q = 3, stick 9 in the dictionary. Also, for each dictionary entry created by squaring a prime, attach that prime as a child value.

When q is not prime, examine its dictionary entry. Take each child prime, add it to q, and create a new entry in the dictionary at that sum. So when q = 4, its child value is 2. Take 2 + 4 to get 6. Create a dictionary entry for 6, and store the original prime factor (2) as its child. Then delete the previous dictionary entry (4). Keep this up until q is larger than your target value (2,000,001).

This part took me the longest to figure out, and it makes a lot more sense by skipping the deletion part. In the Python Tutor example, comment out the line del D[q]. When running the visualizer, the dictionary keeps growing and it is much easier to see how each entry is created, and why each entry is composite rather than prime. For each square value, its root is added over and over again until q reaches its limit. Deleting dictionary entries is not absolutely necessary with low values of q. It’s a memory saver that keeps D from getting out of hand (and crashing Chrome when trying this in JavaScript).
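
If you’d rather watch D grow in plain Node than in Python Tutor, a quick instrumented sketch like the one below (my own toy version, not code from the original article) logs the dictionary at every step. Comment out the delete line to see every composite pile up, just like commenting out del D[q] in the Python version.

// Tiny instrumented sieve: prints q, whether it was prime, and the current
// state of D, so you can watch composite entries get built from their witnesses.
function sieveTrace(max) {
    var D = {}, primes = [];
    for (var q = 2; q < max; q++) {
        if (D[q]) {
            D[q].forEach(function (p) {
                (D[p + q] = D[p + q] || []).push(p);  // move each witness p to its next multiple
            });
            delete D[q];  // comment this out to keep every composite entry around
        } else {
            primes.push(q);
            if (q * q < max) D[q * q] = [q];  // first composite witnessed by q
        }
        console.log('q =', q, primes.indexOf(q) > -1 ? '(prime)' : '(composite)', 'D =', JSON.stringify(D));
    }
    return primes;
}

console.log(sieveTrace(20));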

I know I did a sub-par job of explaining this, but seriously, that’s what Python Tutor is for. Try it, you’ll like it.

Install Yeoman and all its dependencies in Ubuntu Linux

Yeoman is awesome, but holy jeez are there a lot of requirements to get it fired up for the first time. Listed below are all the commands I typed in to get every last Yeoman dependency installed. I did this on 64-bit Ubuntu. For 32-bit, you’ll need to download a different version of PhantomJS, but otherwise I’m pretty sure everything else works the same.

CURL

sudo apt-get update
sudo apt-get install curl

GIT

reference: http://evgeny-goldin.com/blog/3-ways-install-git-linux-ubuntu/

sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev
wget --no-check-certificate https://github.com/git/git/tarball/v1.7.12.2
tar -xvf v1.7.12.2
cd git-git-04043f4/
make prefix=/usr/local all
sudo make prefix=/usr/local install
git --version
rm -rRf git-git-04043f4
rm v1.7.12.2

NodeJS

reference: https://github.com/joyent/node/wiki/Installation

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs npm

RVM + Ruby 1.9.3

reference: http://ryanbigg.com/2010/12/ubuntu-ruby-rvm-rails-and-you/
You might want to grab a coffee. This is a long one.

sudo apt-get install build-essential
curl -L get.rvm.io | bash -s stable
echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"' >> ~/.bashrc
. ~/.bashrc
sudo apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion
rvm install 1.9.3
rvm use 1.9.3
rvm --default use 1.9.3-p194
ruby -v

Compass

reference: http://compass-style.org/install/

gem update --system
gem install compass

PhantomJS

Using apt-get will get you v. 1.4.0. The method below gets the latest version. For 32-bit, just remove ‘_64’ from each command.
reference: http://devblog.hedtek.com/2012/04/phantomjs-on-ubuntu.html

cd ~/
wget http://phantomjs.googlecode.com/files/phantomjs-1.7.0-linux-x86_64.tar.bz2
cd /usr/local/share
sudo tar xvf ~/phantomjs-1.7.0-linux-x86_64.tar.bz2
sudo ln -s /usr/local/share/phantomjs-1.7.0-linux-x86_64/ /usr/local/share/phantomjs
sudo ln -s /usr/local/share/phantomjs/bin/phantomjs /usr/local/bin/phantomjs
phantomjs --version
rm ~/phantomjs-1.7.0-linux-x86_64.tar.bz2

JPEGTRAN / OptiPNG

sudo apt-get install libjpeg-turbo-progs
sudo apt-get install optipng

YEOMAN!!!

(finally!)

sudo npm install -g yo grunt-cli bower

And we’re done.

Angular Cats! An AngularJS Adventure Anthology

Over the past couple weeks I’ve been teaching myself about the AngularJS javascript framework. It’s gone pretty well so far, and I think I have a pretty decent handle on the basics. I’ve been able to use the tools provided to put together a small application that uses the Petfinder.com API as a backend. It is a remake of an old Flex application I built in 2009 for the House of Mews pet shelter in Memphis, TN. The application allows the user to browse pictures and information on all the adoptable cats currently at the shelter. See the application in context at www.houseofmews.com (and please excuse the background music). You can also view the app by itself here.

In addition to remaking the old app, I took the remodeling a few steps further in order to try out a few more AngularJS features. The newer version of the application stands on its own and has a few added features – most notably, deep linking.

While working on this, I created a series of blog posts to chronicle my progress (and show off). There are five posts in total, with the first three focused on the ‘remake’ app, and the last two posts detailing parts of the ‘remodeled’ app. If you are interested in learning AngularJS for yourself, definitely pay attention to the latter posts. The first three posts are chock-full of newbie mistakes and bad practices. They are a great example of how not to build an Angular application.

All of the code is available on GitHub. I’ve already tweaked the code a bit, so it may not match 1:1 with the code pasted into the blog posts, but everything is commented and should be relatively easy to follow. Anyway, thanks for paying attention! Enjoy!

Parts 1 – 3: House of Mews redux (source: ngCatsHOM)

Parts 4 & 5: Angular Cats! (source: ngCats)
