Monday, September 21, 2015

Learning ECMA Script 6: Arrow Functions

If you are familiar with a language other than JavaScript, there is a good chance that language already supports something similar to => functions.  In ES6 they are called arrow functions (or fat arrow functions), and nine times out of ten they simply replace anonymous functions.  There is one major case where they differ, and I will get to that in my last point.




The first way to use an arrow function is the same as anywhere you would use function (param1, param2, ...) { ... }.  Instead of that you would write the following:

   
(param1, param2, ...) => { ... }; 

It is pretty straightforward, as shown in the example below:

var closureVariable = 'foo';
var test1 = () => {
    var innerVariable = 'bar';
    // look ma!  Closure still works.
    return closureVariable + innerVariable;
};

document.getElementById('test1').innerHTML = 'Test 1: ' + test1();


As you can see, no magic there.  It is simply replacing function (param...) with (param...) =>.  Most of the places you have anonymous functions are callbacks and promises.  While using them felt super weird at first, I now find my code easier to read (especially jasmine unit tests) and I rarely write the word function anymore.
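For example, chaining promises with arrow functions reads a little cleaner than writing the function keyword everywhere.  A quick sketch (getUser and getOrders are made-up helpers, just for illustration):

// hypothetical promise-returning helpers, just for illustration
const getUser = id => Promise.resolve({ id: id, name: 'Ada' });
const getOrders = userId => Promise.resolve([{ id: 1 }, { id: 2 }]);

getUser(42)
    .then(user => getOrders(user.id))
    .then(orders => {
        console.log('Found ' + orders.length + ' orders');
    })
    .catch(err => console.error(err));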

Now if you have a function that is one line and returns something, you can make the arrow function a one-liner.  Imagine you were using the lodash function map, which will "map" your current array into another one; you just need to supply the function that transforms each item.  Say you just wanted a single property out of each item, like name.  Before, you would need something like this:

    
_.map(list, function(item){ return item.name;});


Now you can write this:

    
_.map(list, item => item.name);


item.name is implicitly returned since it is not surrounded with curly braces.



So you might ask: that's fine, but what if I want to return an object?  I need curly braces for that!  The answer is: yes, you do!  To do this you wrap your object in parentheses, and the implicit return above still works.

So you would write the following:

    
_.map(list, (item) => ({ id: item.id, name: item.name}));





The one major difference between anonymous functions and arrow functions is how this is defined inside the function: arrow functions have a lexical this.

What does this mean?  The easiest way would be to show you the difference in these two examples:

Suppose I have the following object.
  
var solarSystem = {
        planets: ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune', 'Pluto'],
        star: 'Sun'
    };


Now I want to add some function to that object that does something to the array, and references some other property on the object.

    solarSystem.performOperation = function(){
        this.message = '';
        this.planets.forEach(function(p, i) {
            this.message += p + ' is planet number ' + (i + 1) + ' from the ' + this.star + '.  ';
        });
    };


What happens if you run this code? You will get the following:  

solarSystem.message === ''; //true

Why? This is because in the forEach function 'this' is defined as the window object. So the window object will have a message property on it that contains the following:  

window.message = 'undefinedMercury is planet number 1 from the undefined. Venus is planet number 2 from the undefined. Earth is planet number 3 from the undefined. Mars is planet number 4 from the undefined. Jupiter is planet number 5 from the undefined. Saturn is planet number 6 from the undefined. Uranus is planet number 7 from the undefined. Neptune is planet number 8 from the undefined. Pluto is planet number 9 from the undefined. ' 

 
However, if you write the same code using an arrow function, as in the fiddle below, you will get the desired result.
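In case the fiddle does not load, here is a sketch of the arrow-function version; the only change is the callback passed to forEach:

solarSystem.performOperation = function(){
    this.message = '';
    this.planets.forEach((p, i) => {
        // 'this' here is the same 'this' as in performOperation, i.e. solarSystem
        this.message += p + ' is planet number ' + (i + 1) + ' from the ' + this.star + '.  ';
    });
};

solarSystem.performOperation();
// solarSystem.message now starts with 'Mercury is planet number 1 from the Sun.'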


Cheers!

Part 1 - Math & Numbers
Part 2 - New Parameter Features 

Tuesday, July 28, 2015

Learning ECMA Script 6: New Parameter Features

ES6 has a few new features when it comes to working with parameters.  To get these examples working, you most likely will have to use Firefox.


Default Parameters

Default parameters are a new feature being added in ES6, and they are really simple to use.  When defining a function and specifying its parameters, just add an equal sign and the value you want to default it to and BAM! DONE!  So you would write your function like this: 
function defaultParam(foo = "bar"){};   
Now if the value passed for foo is undefined then it will default to "bar".  However, if it is null then it will be passed in as null!  Note that even if you explicitly send in undefined, it WILL use the default value.  You can also call a function to get the default value for a parameter, but be aware that the function will NOT be called unless undefined is passed in for that parameter.  
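A quick sketch of those rules (getDefault is just a made-up helper):

function getDefault() {
    console.log('computing default');
    return 'bar';
}

function defaultParam(foo = getDefault()) {
    return foo;
}

defaultParam();          // 'bar'  (getDefault() is called)
defaultParam(undefined); // 'bar'  (explicitly passing undefined still triggers the default)
defaultParam(null);      // null   (null is passed through; getDefault() is never called)
defaultParam('baz');     // 'baz'  (getDefault() is never called)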

Rest Operator


The rest operator "..." is used when you define a function and works similarly to 'varargs' in Java or 'params' in C#.  If placed before the last parameter in a function definition, it turns that parameter into an array and places all arguments that aren't accounted for into it.  You would write your function like this:
function restParam(foo, ...bar ){};  // bar is now an array
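A small sketch of how the extra arguments get gathered:

function restParam(foo, ...bar) {
    return foo + ' got ' + bar.length + ' extra arguments: ' + bar.join(', ');
}

restParam('first');                // "first got 0 extra arguments: "
restParam('first', 'a', 'b', 'c'); // "first got 3 extra arguments: a, b, c"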



Spread Operator

The spread operator "..." has the same syntax as the rest operator, but it is used differently.  While the rest operator brings parameters together, the spread operator blasts arrays apart.  So the spread operator can be used to "spread" an array into individual arguments for a function call, or to "spread" an array out for use in another array definition.  It would be used like this:
function spreadParam(foo, bar){}; spreadParam(...["foo","bar"]);
One thing to note is that it works on strings as well.
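A quick sketch of both uses, plus the string behavior:

function spreadParam(foo, bar) {
    return foo + bar;
}

spreadParam(...['foo', 'bar']);              // 'foobar' - the array becomes two arguments
var inner = ['Venus', 'Earth'];
var planets = ['Mercury', ...inner, 'Mars']; // spread inside an array literal
var letters = [...'hi'];                     // ['h', 'i'] - strings spread into characters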




Cheers!

Friday, July 10, 2015

Learning ECMA Script 6: Math and Number Features

At work we are currently preparing to set up our environment using tools such as Babel to allow us to work with the newest JavaScript features without worrying about browser support.  So I am going to start taking a look at some of these new features and hopefully start incorporating them into my daily repertoire.

In this post I am going to take a look at the Number and Math features which are pretty easy, and mostly already implemented by modern browsers.


While math can be scary, using these new features doesn't have to be.  We will start with the Number features and then move into the Math ones.

Numbers:

1. Support for octal and binary notation.  You can now specify numbers as octal (0o###) or binary (0b###).  So 8 (0o10) + 2 (0b10) + 10 + 16 (0x10) = 36.



2. Support for octal and binary strings.  Since octal and binary formats are now supported, you can pass strings in those formats into the Number() function and it will convert them into integers.  In the fiddle below the variables are converted to numbers using the Number function and we obtain the same result.
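A quick sketch of both the literals and the string conversions, in case the fiddle doesn't load:

var fromOctal = 0o10;   // 8
var fromBinary = 0b10;  // 2
var fromHex = 0x10;     // 16
fromOctal + fromBinary + 10 + fromHex;  // 36

Number('0o10');  // 8  - octal string
Number('0b10');  // 2  - binary string
Number('0x10');  // 16 - hex string (this one already worked before ES6)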


3. Number.EPSILON is a new constant that represents the difference between 1 and the smallest representable number greater than 1 (note that it is not the smallest positive number JavaScript can represent; that is Number.MIN_VALUE).  Its value is roughly 2.220446049250313E-16.  From what I can tell it is most useful for testing equality of decimals.  So instead of doing .3 - .2 - .1 === 0 (false) you can do Math.abs(.3 - .2 - .1) <= Number.EPSILON
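For example:

0.3 - 0.2 - 0.1 === 0;                        // false, thanks to floating point error
Math.abs(0.3 - 0.2 - 0.1) <= Number.EPSILON;  // true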


4. Number.isInteger is a new function that lets you know if a number being passed in is an integer.  If it is not an integer or not a number then it will return false.


5. Number.isSafeInteger is similar to the function above, but also checks to see if the number is "safe".  Due to the way that numbers are represented in JS, there are maximum and minimum integers that are "safe", and any integers beyond those are potentially off.  What do I mean by "off"?  Essentially, two or more distinct integers beyond the maximum or minimum can end up represented by the same JavaScript number.  Check out the third line in the fiddle below for what I am talking about.
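A quick sketch of both functions and the "off" behavior:

Number.isInteger(42);      // true
Number.isInteger(42.5);    // false
Number.isInteger('42');    // false - strings are not coerced

Number.isSafeInteger(Number.MAX_SAFE_INTEGER);      // true  (9007199254740991)
Number.isSafeInteger(Number.MAX_SAFE_INTEGER + 1);  // false

// the "off" behavior: two different integers end up as the same value
Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2;  // true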


6. Math.sign will return -1, 1, 0 (or -0....), or NaN based on the value passed in. 

7. Math.trunc will turn the number into an integer by dropping the fractional part, which essentially rounds down for positive numbers and up for negative numbers (i.e. it truncates toward zero).
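For example:

Math.sign(-5);    // -1
Math.sign(5);     // 1
Math.sign(0);     // 0
Math.sign('foo'); // NaN

Math.trunc(4.7);  // 4  (truncates toward zero)
Math.trunc(-4.7); // -4 (truncates toward zero)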



8. There are new hyperbolic functions such as Math.sinh, Math.cosh, Math.tanh, Math.asinh, Math.acosh, and Math.atanh.  There is also a new Math.hypot function that calculates the square root of the sum of the squares of its arguments. 

9.  There are also some new exponential and root functions available.  Math.cbrt returns the cube root of the number passed in.  Math.expm1 and Math.log1p are similar to Math.exp and Math.log (they compute e^x - 1 and ln(1 + x) respectively), but from what I can tell by reading online they are more accurate for values very close to zero. 
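A few quick examples:

Math.hypot(3, 4);   // 5 - the square root of (3*3 + 4*4)
Math.cbrt(27);      // 3
Math.expm1(1e-10);  // more accurate than Math.exp(1e-10) - 1 for tiny values
Math.log1p(1e-10);  // more accurate than Math.log(1 + 1e-10) for tiny values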

I believe that covers most if not all of the major Number and Math changes in ECMAScript 6.  If I forgot something let me know and I will make sure to update the post.

Cheers!

Thursday, June 25, 2015

Creating ES6 JSfiddles with Babel

One of the things I've wanted to start learning is what is new in ES6.  There are a lot of new features in ES6 that are pretty awesome.  Some, such as promises, can even be used today via other libraries or polyfills.  However, other features are harder to implement that way.  

The good news is there are libraries out there that will compile your ES6 script into backwards-compatible JS.  The one I am looking into is Babel (https://babeljs.io).  Babel would typically be set up in your development/build process like any other compiler, but this one should theoretically fade away and leave you with plain ES6 scripts once browsers catch up.  

To get started playing around with ES6 scripts, Babel provides a live editor here: https://babeljs.io/repl/, which is nice.  However, that doesn't let you save anything, show it off in a blog, etc.  So I went to see if JSFiddle has any out-of-the-box support for Babel, and found it doesn't.  I knew that Babel has a client-side compiler (which obviously shouldn't be used for production), so I tried to find it listed on a CDN.  I couldn't find it, so I decided to see if I could reference it via Google Drive.  After some time investigating I found this site that showed me how to do it: http://www.komku.org/2013/08/how-to-host-javascript-or-css-files-on-google-drive.html.  You are only allowed to reference files with a .js extension, so I had to use method 1, and I generated the following url: https://www.googledrive.com/host/0B4QQSpfhQZN5bXdTVTI0VFpjZW8/browser.js

    
Once you have it listed as an external resource on the JSFiddle, you can create <script> blocks with the type text/babel in your HTML area.  Once Babel loads, it will automatically compile those blocks and you are good to go!

Cheers!

Thursday, June 11, 2015

Project Euler

A couple of years back I started on my first Project Euler problem, and just a bit ago I finished my 50th.   If you aren't familiar with Project Euler, it is essentially a bunch of math problems that you solve programmatically; you can read more about it on the site.


I really enjoy working on these challenges.  The problems typically require good problem solving, efficient algorithms, and good use of data structures.  So these types of problems are great for learning a new language, practicing an old one, preparing for an interview, or just honing your coding skills.  I have become fairly decent in Ruby by working on these (and a few other projects) to the point where I can pick it up after a few months of not using it and not miss a beat.

I might switch up languages soon.  I have always thought about trying out a true functional language, just for fun.  Regardless,  I plan on solving problems every so often to keep my skills sharp.  You can see all of my solutions (good and bad) on github here.   Also if you try it out and like it, you can add friends on project Euler to track each other's progress.  Add me with the following key 517763_1aef854359e6e315b4f1e358b8544e78 and I will add you back.

Cheers!




Friday, May 8, 2015

Using HTML5 Canvas to Create a Transparent Image.

I have recently been working on learning some three.js and trying to understand 3D concepts in general.  One of my favorite things is looking at 3D renderings of planets and other cosmic things.  So I found this post that talks about how the author created the planets.  He uses the textures from planetpixelemporium for his planets.  Beyond the simple color textures there are textures that define the bump map for rocky planets, specular textures that define the reflectiveness of each area, textures for clouds, and even textures for rings.

While I was going through his blogpost and his code, trying to make sure I understood everything he was saying, I came across this:
We build canvasCloud and use it as texture. It is based on the jpg images you see above: one for the color and the other for the transparency. We do that because jpg doesn’t handle an alpha channel. So you need to make the code to build the texture based on those images. 

I was super confused, and based on his comments I apparently wasn't the only one.  So armed with just his code and Google, I set out to see what the deal is with multiple JPEGs and transparency.  I found all kinds of interesting reasons why people would write code like this instead of using a PNG or another format that supports transparency: it was done for this Android app, for a game in JS, and there is even an interesting trick that XORs two images to get a transparent one.

Armed with those reasons, I wanted to visualize what this actually looks like as we go through the data and merge the two JPEGs into an image with alpha.   I originally wanted to do this all in JSFiddle, where I normally put my examples, but unfortunately JSFiddle doesn't host images, I didn't want to pay one of the other online sandboxes to host them, and you can't read the image data of a canvas drawn from a different domain.  So I put this on a website I have been playing around with for a bit, so you too can see it in action.  The source is on github here.

To get this to work we will use 3 canvases (these don't have to be in the DOM to work).  One for the source image, one for the alpha (transparency) image which is all greyscale, and one for the destination.  We then load the two images we want by creating Image elements for each of the images, setting the src to the appropriate image, and then handling the onload event of each image.  The code looks like this:

    function loadImages(sourceImageUrl, alphaImageUrl){
        sourceImg = new Image();
        sourceImg.addEventListener("load", function(){
            sourceCanvas.height = sourceImg.height;
            sourceCanvas.width = sourceImg.width;
            sourceContext.drawImage(sourceImg, 0,0);            
            alphaImg = new Image();
            alphaImg.addEventListener('load', function(){
                alphaCanvas.height = alphaImg.height;
                alphaCanvas.width = alphaImg.width;
                alphaContext.drawImage(alphaImg, 0, 0);                
                startOverlay();
            });
            alphaImg.src = alphaImageUrl;
        });
        sourceImg.src = sourceImageUrl;
    }


Once that is done and we have the images loaded on the two canvases we will need to loop through the image data and copy over the RGB components from the source canvas and compute the alpha based on the brightness of the greyscale with white (255) being completely transparent, and black (0) being completely opaque (you can make this up however you want realistically).  Here is the code for that:

function startOverlay(){
        destinationContext.clearRect(0, 0, destinationCanvas.width, destinationCanvas.height);
        
        var sourceData = sourceContext.getImageData(0, 0, sourceCanvas.width, sourceCanvas.height),
            alphaData = alphaContext.getImageData(0,0, alphaCanvas.width, alphaCanvas.height),
            destinationData = destinationContext.getImageData(0,0, destinationCanvas.width, destinationCanvas.height);

        var mergeProperties = $scope.mergeProperties;
        mergeProperties.x = 0;
        mergeProperties.y = 0;
        mergeProperties.offset = 0;
        
        function mergeImages(){

            for(var i = 0; i < mergeProperties.speed; i++){

                destinationData.data[mergeProperties.offset+0]  = sourceData.data[mergeProperties.offset+0];
                destinationData.data[mergeProperties.offset+1]  = sourceData.data[mergeProperties.offset+1];
                destinationData.data[mergeProperties.offset+2]  = sourceData.data[mergeProperties.offset+2];
                destinationData.data[mergeProperties.offset+3]  = 255 -  alphaData.data[mergeProperties.offset+0];

                mergeProperties.x++;
                mergeProperties.offset+=4;
                if (mergeProperties.x >= sourceImg.width){
                    mergeProperties.x = 0;
                    mergeProperties.y++;
                }
                if (mergeProperties.y >= sourceImg.height){
                    destinationContext.putImageData(destinationData,0,0);    
                    return;
                }
            }

            destinationContext.putImageData(destinationData,0,0);    
            $timeout(mergeImages, 10);

        }
        
        mergeImages();
    }

The $timeout code is there just to slow it down for my demo. For actual production you would not have the timeout or the speed setting; you would just loop until you hit the image height, as sketched below.
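Stripped of the demo throttling, the merge could be a single pass like this sketch (it assumes the same sourceData, alphaData, destinationData, and destinationContext variables as in startOverlay above):

function mergeImagesAtOnce(){
    for (var offset = 0; offset < destinationData.data.length; offset += 4){
        destinationData.data[offset]     = sourceData.data[offset];      // R
        destinationData.data[offset + 1] = sourceData.data[offset + 1];  // G
        destinationData.data[offset + 2] = sourceData.data[offset + 2];  // B
        destinationData.data[offset + 3] = 255 - alphaData.data[offset]; // alpha from the greyscale
    }
    destinationContext.putImageData(destinationData, 0, 0);
}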

So take a look at it; it is a pretty cool visualization.
Cheers!

Tuesday, April 7, 2015

Angular JS with MVC templates

I was recently working on an ASP.NET MVC project and was thinking about using angular on it.  There weren't going to be a ton of dynamic things that required angular, but I still thought it would be a good fit for the project.  I knew that the typical way to use angular + ASP.NET MVC is pretty much to use MVC as a REST service (WebAPI), but I wanted a hybrid approach where I could use Razor templating in places that had static data and angular templating in places I wanted to be more dynamic.


The solution that we used allowed us to do both, which is what I wanted.  We would use MVC views as templates instead of just plain HTML ones.  Angular would reference those templates in its routing, and each template could contain server-side information when it came back.


So to get started I created a basic MVC project and opened up NuGet (the .NET package manager) to remove the jQuery packages and add the AngularJS Route package.

Next I modified App_Start/BundleConfig.cs to bundle the new files.  I removed the jQuery ones and added the angular.js ones.

 
  public static void RegisterBundles(BundleCollection bundles){

      // angular.js libraries
      bundles.Add(new ScriptBundle("~/bundles/angular").Include(
            "~/Scripts/angular.js",
            "~/Scripts/angular-route.js"
            ));
      //Application code
      bundles.Add(new ScriptBundle("~/bundles/app").Include(
            "~/Scripts/app.js",
            "~/Scripts/controllers/*.js",
            "~/Scripts/services/*.js"
             ));

      // Use the development version of Modernizr to develop with and learn from. Then, when you're
      // ready for production, use the build tool at http://modernizr.com to pick only the tests you need.
      bundles.Add(new ScriptBundle("~/bundles/modernizr").Include(
                    "~/Scripts/modernizr-*"));

      bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));
}

Next let's update the Views/Shared/_Layout.cshtml to use the new bundles.  We will also add the ng-app  and ng-view directives here as well.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <title>@ViewBag.Title</title>
    @Styles.Render("~/Content/css")
    @Scripts.Render("~/bundles/modernizr")
</head>
<body ng-app="mvcApp">
    @RenderBody()

    <div id="content" ng-view></div>
    
    @Scripts.Render("~/bundles/angular")
    @Scripts.Render("~/bundles/app")
</body>
</html>

For this app I am going to have a home page with 3 links to a view, passing in the id in the following format: '<controller>/<action>/<id>'. 

Notice in my page I have the ng-view directive, but I also have @RenderBody() which MVC uses to render the pages. What I am going to do is create a default page that just renders as an empty page.  This is going to be hit on the initial load, but that is it.  After that the MVC views will be used as angular templates.

So let's first add the mvc pages & services.

For my first view and controller I will name it Home just to keep things simple.  I am going to implement an Index action and just return the view.

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

And here is the corresponding view:

@{
    ViewBag.Title = "View1";
    Layout = null;
}

<h2>Home View</h2>

<p>{{hello}}</p>

<nav>
    <ul>
        <li><a href="#dynamic/1">Dynamic Page with Id 1</a></li>
        <li><a href="#dynamic/2">Dynamic Page with Id 2</a></li>
        <li><a href="#dynamic/3">Dynamic Page with Id 3</a></li>
    </ul>
</nav>

For all my links I will be going through the angular routing.  Likewise I can use angular in my views. Here is the dynamic view which is a bit more interesting.

The dynamic view accepts an id as a parameter and returns data based on it.  First we create the MVC controller, which returns a model object with the view.  Then we will set up the view to use both data retrieved from an angular service call and the MVC template.  Finally we will create a Web API service for our angular code to call.

The controller:
public class DynamicObject
{
   public string Name { get; set; }
}

public class DynamicController : Controller
{
    public ActionResult Index(string id)
    {
        return View(new DynamicObject { Name = "Dynamic Object Name " + id });
    }
}

The view:
@model MVCApp.Controllers.DynamicObject

@{
    Layout = null;
}

<h2>Dynamic View</h2>

<h3>
    @Model.Name
</h3>

<p>{{hello}}</p>

<ul>
    <li ng-repeat="item in dynamicList">{{item}}</li>
</ul>

The Service:
public class DynamicController : ApiController
{
   private Dictionary<string, string[]> Data;
   public DynamicController()
   {

       Data = new Dictionary<string, string[]>();
       Data.Add("1", new string[] { "Mercury", "Venus", "Earth" });
       Data.Add("2", new string[] { "Mars", "Jupiter", "Saturn" });
       Data.Add("3", new string[] { "Uranus", "Neptune", "Pluto" });
    }

    [AcceptVerbs("GET")]
    public string[] Index(string id)
    {
        return Data[id];
    }

}

Notice that in the view we are using the Name value from the model, and using ng-repeat to iterate through all of the items in the dynamicList on our angular $scope object.

Now that we have our server code all set up (you should be able to actually hit each of these endpoints/views), we can get started on the angular side.

First we will create our app.js file in the Scripts folder to define our app and routing.

'use strict';
angular
  .module('mvcApp', ['ngRoute']).config(['$routeProvider',  function ($routeProvider) {

      $routeProvider.when('/', {
          templateUrl: 'home',
          controller:'mainCtrl'
      })
      .when('/dynamic/:id', {
          templateUrl: function(params){
              return 'dynamic/index/' + params.id;
          },
          controller: 'dynamicCtrl'
      });
}]);


One interesting thing to note here is that instead of returning a url for the dynamic page, I return a function which is then used to get the URL for the view.  I do this so I can pass the Id from the angular.js routing to the MVC routing.  An alternative could be query string passing, but I wanted to stick with the nicer looking urls.

Next we will define our two controllers: one for the dynamic view and one for the home view.  The home view one is pretty simple, so I will just sketch it quickly before moving on to the more interesting dynamic one.
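A minimal sketch of the home controller might look like this (the greeting string is just my guess; it only feeds the {{hello}} binding in the home view):

angular.module('mvcApp').controller('mainCtrl', ['$scope', function ($scope) {
    // give the home view's {{hello}} binding something to display
    $scope.hello = 'Hello World from main.js';
}]);

Now for the dynamic controller: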

angular.module('mvcApp').controller('dynamicCtrl', ['$scope',
'$routeParams', 'dynamicService', 

function ($scope, $routeParams, dynamicService) {
    $scope.hello = "Hello World from dynamic.js";
    dynamicService.getDynamicList($routeParams.id).then(function (data) {
        $scope.dynamicList = data.data;
    });
}]);

So in this controller we are setting data directly on the scope property hello and putting data from a service call onto the dynamicList property of the scope.  We use the id from the $routeParams to pass onto the service.

So let's define the dynamicService.js file that will call out to the MVC WebApi controller that we created earlier.

angular.module('mvcApp').factory('dynamicService',['$http',  function ($http) {
    function getDynamicList(id) {
        return $http.get('api/dynamic/' + id);
    }

    return {
        getDynamicList: getDynamicList
    };

}]);


Now we have a service that calls the REST endpoint that we created earlier.  So now if we run the app we should get what we were looking for: MVC views with an angular.js front-end!  The source code is on github here if you want to check it out and run it yourself or use it as a reference.

Cheers!




Friday, March 13, 2015

Developing without the backend: Working with 3rd Party Scripts

In my last post, Developing without the backend: Simple Web Socket development, I talked about how to work offline with a simple Web Socket server so you can write your front-end in peace.  That is cool, but what if I have to deal with a 3rd party JS library that requires access to the Internet, login info for their site, actually running on their site, etc.?  Essentially anything that requires you to run outside of your development environment.

So how do you deal with these 3rd party JavaScript API's using angular and work without their backend?  What if you need the cloud and you are offline?  What do you do?
Angular has the built-in capability to add to, remove, or completely replace the functionality of a provider when necessary by using the $provide.decorator function.

Angular has 2 major phases: a run phase where it actually executes your code, and a config phase where it wires up your application.  The decorator function comes in handy during the config phase, which runs before the run phase.  In this phase you can access the $provide service, as well as the providers for your services, factories, etc.  So first you want to create a service wrapper for the 3rd party library that you are using.  Whether you just return the 3rd party object or create wrappers for all of its functions, I'll leave up to you; this will work for either approach.  Next, you create a config block for your module (don't worry, you can have multiple config calls) that uses the $provide.decorator function to overwrite or completely replace the service we just created.  For more information take a look at these posts on angular dependency injection and extending the $q method.  They are excellent reads.

I created a sample on jsFiddle to show you how it will work.  In this example I will use the underscore library in my application and then use the function above to overwrite one of the functions.  Outside of this tutorial, the next step would be to update your grunt (or whatever build) tasks to remove the mock config file from your project during the build process so you would actually use the real library.

To start I will create a simple service to wrap the underscore library, so I can write better unit tests and develop disconnected.

factory('underscoreService', function(){
    return _;
})

No magic there.  Next I will refer to that service in a controller object and call a couple of methods on it.

controller('ctrl' ,['$scope', 'underscoreService', function ($scope, underscoreService){
    $scope.list = ['Mercury', 'Venus', 'Earth', 'Mars', 
                   'Jupiter', 'Saturn', 'Uranus', 
                   'Neptune', 'Pluto'];
    
    $scope.sampleList = underscoreService.sample($scope.list, 3);   
    $scope.shuffleList = underscoreService.shuffle($scope.list);

}])

No magic here either.  I have a list and create 2 additional lists with method calls from underscore.  Now I want to swap out the logic of the service before it ever gets used.  This is where $provide.decorator comes in: I will overwrite the logic for shuffle with a function that just returns the array passed in.

config(['$provide', function($provide){
    // override the function. you can uncomment it if you just want it to shuffle
    $provide.decorator('underscoreService', function($delegate){
        $delegate.shuffle = function(list){return list};
        return $delegate;
    });
}])

In the decorator function you receive a parameter called "$delegate", which is the return value from the service, factory, etc., before it gets injected anywhere.  So you can either overwrite individual methods there, or you can return your own object instead!
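If you went the route of returning your own object, the same config could hand back a full mock instead of patching $delegate.  A sketch (the canned implementations here are made up):

config(['$provide', function($provide){
    $provide.decorator('underscoreService', function($delegate){
        // ignore $delegate entirely and return a predictable fake
        return {
            sample: function(list, n){ return list.slice(0, n); },
            shuffle: function(list){ return list; }
        };
    });
}])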

Here is the end result:


Just remember to update your build process to remove the config file!
Cheers!!

Monday, March 9, 2015

Developing without the backend. Simple Web Socket development.


While I was working at Motorola, the JavaScript code I wrote did a lot of communication with embedded devices via Web Sockets.  Because the embedded side's work was done in parallel to mine, I typically didn't have access to the latest version, let alone a physical device.  So I needed to be able to test, demo, and seamlessly integrate my work when we were ready.  One of the major lessons I took from this is that heavy front-end development should be agnostic to the back end.

This is where I love working with a tool like Node.js.  You can develop using simple tools without having a backend/database connection, test all of your scenarios, and still give a good demo.  Likewise you can mock out your data so that it exercises all of your edge cases and failure conditions easily.  The best part is that it is self-contained and can be passed from developer to developer.

So what I want to show is how to create a super simple Node.js web socket test server that is easily portable to the actual production back-end.  I started my project using the yeoman Angular generator, although it isn't necessary for what I am going to show.  What is necessary is Node.js, grunt, grunt-contrib-connect (version 0.8 at least), and a web socket node package (I used ws).   


If you just want to get the code and work with it then you can download it here.

1. Setup

Starting with the angular generator, and making sure I added the 2 packages referenced above, I modified my project structure just a bit: I added a factories folder in my app/scripts directory to separate the simple web socket factory from my data access services, and a server folder in my root project directory.  In the server folder we are going to put all of our test server code.  

2. Front-end development

In the factories folder I created a simple angular factory that pretty much just returns a WebSocket object.  The reason I do this mostly has to do with unit testing: it creates one simple place to grab a web socket instance whenever it is necessary, and most importantly it gives me an easy place to use jasmine spies to mock the WebSocket object in any unit tests that I write. 
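The factory itself can be tiny.  A sketch of the idea (the URL here is a placeholder; the port happens to match the test server further down):

angular.module('blogApp')
    .factory('socketFactory', function () {
        return {
            // one place to create sockets, and one easy seam for jasmine spies in tests
            createSocket: function () {
                return new WebSocket('ws://localhost:55800');
            }
        };
    });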

From there I create a service that will get an instance of the web socket and basically abstracts the communication to and from the WebSocket away from the rest of the app.  To the app I expose a way to connect, disconnect, and individual functions for each message type I want to send.
angular.module('blogApp')
    .factory('socketService', ['socketFactory', '$rootScope', function (socketFactory, $rootScope) {

    var _socket = null,
        _transactionCounter = 0,
        _messageHandlers = {},
        Types = {
            Connect: "CONNECT",
            Disconnect: "DISCONNECT",
            SetServerTimer: "SET_TIMER",
            BroadcastMessage: "BROADCAST_MESSAGE",
            AsyncTimer: "ASYNC_TIMER",
            GenericResponse: "GENERIC_RESPONSE"
        };

    function isConnected(){
      return _socket !== null && _socket.readyState === WebSocket.OPEN;
    }

    function disconnect(callback){
        if (isConnected()){
            _socket.close();
            if (callback){
                callback();
            }
        }
    }

    function setServerTimer(timeout, times, callback){
        sendMessage({type: Types.SetServerTimer, data: {timeout: timeout, times: times} }, callback);
    }

    function sendBroadcastMessage(user, message){
        sendMessage({type: Types.BroadcastMessage, data: {user: user, message: message}});
    }

    function connect(callback){
        if (!isConnected()){
            _socket = socketFactory.createSocket();

            _socket.onopen = function onOpen(){
                if (isConnected()){
                    sendMessage({type: Types.Connect, data: "Hello World"}, callback);
                }
            };

            _socket.onerror = function onError(){
                console.log("Error");
            };

            _socket.onmessage = onMessageReceived;

            _socket.onclose = function onClose(){
                //goodnight socket
                _socket = null;
                $rootScope.$broadcast(Types.Disconnect);
            };
        }
    }

    function sendMessage(message, callback){
        if (!isConnected()){
            return false;
        }

        message.transactionId = _transactionCounter++;
        if (callback != null){
            _messageHandlers[message.transactionId] = callback;
        }

        _socket.send(angular.toJson(message));

        return message.transactionId;
    }

    function onMessageReceived(evt){
        var message =  angular.fromJson(evt.data);

        $rootScope.$apply(function(){
            if (message.transactionId != null && _messageHandlers[message.transactionId] != null){
                _messageHandlers[message.transactionId](message);
                _messageHandlers[message.transactionId] = null;
            }

            $rootScope.$broadcast(message.type, message);
        });
        
    }

    // Public API here
    return {
      isConnected: isConnected,
      disconnect: disconnect,
      connect: connect,
      setServerTimer: setServerTimer,
      sendBroadcastMessage: sendBroadcastMessage,
      Types: Types
    };
}]);

Each message to and from the service is stringified and parsed as JSON, but you can use pretty much any format; we used protobuf at Motorola, which has an open source implementation.  Each message also has a transaction id and the ability to register a callback for it, so if you have a traditional send & receive message flow this will work for you.  The server sends the appropriate response back with the same transaction id, so the client knows which callback to use.  


You may notice that the service also has a reference to $rootScope.  The reason is that when data comes back, the response handling is wrapped in an $apply call.  The $apply call tells angular to check its watches and update the UI as appropriate.  Most angular events or services such as ng-click or $http do this for you already, but there isn't anything that does it for web sockets.

We also want a way for any $scope in the system to listen for a specific message type, in case there is an asynchronous message or server-initiated event (such as a message from a different logged-on user).  To do this I am utilizing the angular $broadcast method, which will notify any listeners of the new data and let them handle it as they need to.  

Finally I am going to add a simple controller that has a reference to this service and some buttons to perform the appropriate actions (I won't paste that all here).  
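A rough sketch of what that controller could look like (the controller and scope member names are my own; the service calls match the socketService above):

angular.module('blogApp').controller('SocketCtrl', ['$scope', 'socketService',
    function ($scope, socketService) {

        $scope.connect = function () {
            socketService.connect(function () {
                $scope.connected = socketService.isConnected();
            });
        };

        $scope.disconnect = function () {
            socketService.disconnect(function () {
                $scope.connected = false;
            });
        };

        $scope.sendChat = function () {
            socketService.sendBroadcastMessage($scope.user, $scope.chatMessage);
        };

        // asynchronous message initiated by the server, no request needed from us
        $scope.$on(socketService.Types.AsyncTimer, function (event, message) {
            $scope.lastTimerTick = message.data;
        });
    }]);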

 3. Test Server

So now that I have this front-end code I need something to actually hit.  I recommend writing unit tests for most of your code, and the socketFactory will help you with that, but it doesn’t help you see an interactive version of the code.  To get that we will need an actual server.


In the server folder that I created earlier I am now creating a socketServer.js module.  I want it to handle the socket connecting/disconnecting and to mock out the data being sent and any responses necessary. 

Using ws I will create a socket server on the SOCKET_PORT defined below, and add handlers for the connection, close, and message events.  
var ws = require('ws');
var SOCKET_PORT= 55800,
    Types = {
        Connect: "CONNECT",
        Disconnect: "DISCONNECT",
        SetServerTimer: "SET_TIMER",
        BroadcastMessage: "BROADCAST_MESSAGE",
        AsyncTimer: "ASYNC_TIMER",
        GenericResponse: "GENERIC_RESPONSE"
    };

var sockets = [];

exports.startServer = function(server, connect, options){

    // open a websocket with said server. 
    var commandSocket = new ws.Server({port:SOCKET_PORT});

    commandSocket.on("connection", function(socket){

        var _socket = socket;
        sockets.push(socket);
        console.log("Socket Connected");

        _socket.on("close", function(){
            console.log("Socket Closed");

            //remove from the list of sockets
            for(var i = 0; i < sockets.length; i++){
                if (_socket == sockets[i]){
                    sockets.splice(i, 1);
                    break;
                }
            }

            _socket = null;
        });

        _socket.on("message", function(data, flags){
            var message = JSON.parse(data);
            console.log("Socket message recieved: " + message.type);

            switch(message.type){
                case Types.Connect: 
                    handleConnectMessage(message);
                    break;
                case Types.SetServerTimer:
                    handleSetServerTimer(message);
                    break;
                case Types.BroadcastMessage:
                    handleBroadcastMessage(message);
                    break;
            }

        });
    });
};

In the connection event I receive an instance of the socket object, so I hold onto it and use closure to allow everything else to have access to it.  Likewise, I add it to a running list of the open connections I have going so I can send messages of a certain type to all active sockets.  


In the close event I will set the socket to null, and remove it from the active socket array.  Nothing crazy here.


The message event is where we do the bulk of the work.  You can make this part as complicated or as simple as you want, and I recommend keeping it really simple for a few reasons.  First, you typically don't really care about the data being sent back or any actual saving of the data; your front end cares that the message got sent correctly and that any responses do happen.  Second, you want to be able to change it easily: the more you add to it, the more complicated it gets, and you should be able to change the parameters or the message sequences easily or it defeats the purpose of a quick and dirty server.  Finally, your end product is most likely just the front end, so you don't want to spend more time than you need to when you just want to test the scenarios or demo it to your client or PM.  
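As an example of "simple enough", a handler that just echoes a canned response back on the same transaction id could look like the sketch below (it lives inside the connection callback so it can see _socket via closure; the response payload is made up):

        function handleConnectMessage(message){
            _socket.send(JSON.stringify({
                type: Types.GenericResponse,
                transactionId: message.transactionId, // same id so the client fires the right callback
                data: "Hello back"
            }));
        }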

Finally we need to actually start our mock server.  That is where the specific version of grunt-contrib-connect comes in.  In the options for the connect server there is a callback, onCreateServer, that you can hook into when the server starts.  That callback passes you the server along with other information.  In our case we just want to call our socketServer.js module and pass in the server.

    connect: {
      options: {
        port: 9000,
        // Change this to '0.0.0.0' to access the server from outside.
        hostname: 'localhost',
        livereload: 35729,
        onCreateServer: function(server, connect, options){
            var socketServer = require('./server/socketServer.js');

            socketServer.startServer(server, connect, options);
        }
      },

Before I end this post, I want to reiterate that the point of this is not to recreate all of the logic of the server, but to make one simple enough that it is easy to change and just big enough to work.

The code is available on Github. Feel free to download it and start playing. Just npm install, bower install, and grunt server and you are good to go.

Monday, January 19, 2015

The Power of Apply and Call

A lot of people that use JavaScript don't understand (or even want to understand) the apply and call functions that are built into every function.

On top of being a bit confusing, a lot of the time you can get around them by adding an additional parameter or some extra code here and there, so you just ignore them.

I just ran across a situation that shows a great use case for apply and hopefully will help someone understand how to use it.

The scenario is that I have two arrays: one array of N points starting from 0, and one array of M points starting at some positive integer X.  When my JS code handles the second array, I want to insert it into and overwrite positions X through X + M of the first array.  The brute force way would just be to loop through them all, copying as you go.  I thought about it for a bit and gave myself a quick reminder of what functions exist for a JS array.

I thought about concat, but that only appends to the end.  I thought about splice, but that only accepts a list of parameters, not an array.  Then I remembered that JS already has that covered with apply.  Using apply solves my issue of having an array instead of a list of parameters.  The code for this is pretty simple.  Given 2 arrays and a position, it is this:
Array.prototype.splice.apply(array1, 
         [position, array2.length].concat(array2));

You might notice the [position, array2.length].concat code in there and wonder what that is.  Well, the first 2 parameters to splice are the starting position and the number of elements to remove.  The remaining parameters are the values to insert.  So using concat to join the two together builds the full argument list that apply needs.
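For instance, with some made-up arrays (a quick sketch):

var array1 = [0, 1, 2, 3, 4, 5, 6];
var array2 = ['a', 'b', 'c'];
var position = 2;

Array.prototype.splice.apply(array1,
         [position, array2.length].concat(array2));

// array1 is now [0, 1, 'a', 'b', 'c', 5, 6]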

Here is a live example:



Friday, January 2, 2015

Using jQuery Plug-ins With Angular (jScrollPane)

I needed to allow a particular part of a page I was working on to scroll independently of the page.  Typically you would just use overflow-y: auto and call it a day.  However, the default renderings of these scroll bars stick out like sore thumbs in the middle of a page.



So I quickly googled to see if there were any awesome angular.js directives that allowed you to style scrollbars.  Much like with the flyout, confirm dialog, and dropdown, I didn't find much online.  I did find a bunch of posts on jQuery controls with nice scrolling, like jScrollPane.  Instead of creating one from scratch this time, I was more curious about the proper way of integrating jQuery plugins with angular.js.

I found this video that shows an integration with the chosen jQuery plugin, which incidentally was something I was looking at when I decided to create my own dropdown, and I decided to watch it.  The video is pretty good, but regardless of whether you watch it or not, I'll show you how to integrate a jQuery plugin with angular.js.

So after viewing that example I figured it would be pretty easy.  It should be just adding the JS files and creating a directive that applies the plugin to the element in the link function.  It pretty much boils down to the following directive.
 
.directive('NAME', function(){
    
    function link(scope, element){
        // you can use the angular.element instead
        $(element).PLUGINFUNCTION();

    }    
    
    return {
        link:link,
        restrict : "A"
    };
});

Just replace the 'NAME' with a proper name for the directive, and 'PLUGINFUNCTION' with the actual function to execute the plugin.  I have it restricted as "A" so you can just add the attribute to whatever element you want it to be on.  So to use the jScrollPane plugin you can just use this.
 
.directive('scrollPane', function(){
    
    function link(scope, element){
        // you can use the angular.element instead
        $(element).jScrollPane();

    }    
    
    return {
        link:link,
        restrict : "A"
    };
});

I tried it out in a fiddle real quick and it worked fine.  However, when I tried to use it in my application it wasn't working.  After thinking about it for a bit and reading some additional documentation for jScrollPane, I realized the issue: like most JS-heavy applications, the content was being driven by data from services (or in my case, loaded asynchronously via websockets), so the plugin was being initialized before the content actually existed.

So I found the function to reinitialize the plugin, but I needed to figure out how to tell the directive to call it.  I ended up deciding to use built-in angular functionality to do so: scope.$broadcast and scope.$on seemed to be exactly what I was looking for.  Whenever whatever was driving the content changed, it would send a message down the scope using $broadcast, and $on would listen for the message.  That made me feel good about the separation of concerns: the scrollPane directive just listens for an event from somewhere up the parent scope, and whatever is driving the data or re-sizing of the page drives the sending of the message.  I ended up just using a generic "refresh" message name for it.  Here is the end result for the directive:

 
.directive('scrollPane', function(){
    function link(scope, element, attr){
        
        var $element = $(element),
            api;
        
        // element.jScrollPane();
        //In real world Angular would replace jQuery
        $element.jScrollPane();
        api = $element.data('jsp');
        
        scope.$on('refresh', function onRefresh(event, args){
            api.reinitialise();
        });
        
    }
    
    return {
        restrict: 'A',
        link: link
    };
});
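On the other side, whatever loads or resizes the content just broadcasts that same event when it is done.  A sketch (contentService is a placeholder for whatever is actually driving the data):

.controller('contentCtrl', ['$scope', 'contentService', function($scope, contentService){
    contentService.load().then(function(data){
        $scope.items = data;
        // let any scrollPane directives below this scope know the content changed
        $scope.$broadcast('refresh');
    });
}]);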

Here is the end result: