Steven Edouard

Developer Advocate. Tech Enthusiast. Life Enthusiast.


Azure Mobile & MongoDB? It's like Peanut Butter & Jelly!

Azure Mobile Services Git Deploy is a new feature of Mobile Services where you can start writing your scripts in the text editor of your choice and deploy by pushing your local repo to Azure.

 

Although this is cool, the really cool thing is that, unlike before, you can add ANY Node.js npm package to your repo. And you know what's really, really cool? You can use MongoDB as an alternative to Azure Mobile SQL tables!

 

 

So let's get started:

 

1) Sign into your Azure Portal and add a new mobile service:

 

 

2) Although we'll be using Mongo, every Mobile Service has an associated SQL database. So just use an existing one or create a new one for free.

 

 

3) After your mobile service has been created, head over to the API tab. We need to make a custom API for clients to talk to our MongoDB:

 

 

Let's call this API 'todoitem'. For the purpose of this demo, we'll use 'Anybody with the Application Key' security permission. For actual mobile clients I would recommend using 'Only Authenticated Users' to keep your database secure.

 

We will implement the GET and POST APIs.

 

 

4) Before mucking with our scripts, let's set up source control so we can get the right stuff needed to connect to Mongo. Head back to the Mobile Service dashboard:

    

 

 

    Notice the 'Set up source control' link on the dashboard. Go ahead and click it.

5) The portal will automagically take you to the Configure tab, but first we need to set up our source control credentials. Azure generates a default login, but I found it confusing as to what it actually is.

    Go back to the Dashboard tab and click 'Reset your source control credentials'. Enter a user name and password for your Git account. (Note: you can't use your Microsoft Account credentials.)

 

 

 

 

6) Hop back to your Configure tab and grab the Git URL:

 

 

7) Now to clone the repo! Use your favorite Git client and run the following commands in your shell:

 

>git clone <your git repo link>
>Username for '<your git repo link>': <your git username from step 5>
>Password for '<your git repo link>': <your git password from step 5>


8) Navigate to the <your service name>/service folder in your local repo. You should see a folder layout like this:
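Roughly, it should look like the sketch below (this is the default layout Azure generates; your exact folders may vary):

<your service name>/
    service/
        api/
            todoitem.js       (your custom API script)
            todoitem.json     (permissions config for the API)
        scheduler/
        shared/
        table/
        package.json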

 

As you can see, Azure set up a nicely organized repo for you, with your new 'todoitem' API and a JSON config file for it.

For this demo, we'll use the mongoose Node.js driver. Install Mongoose by navigating to <your service name>/service and running the following command:

>npm install mongoose


If all goes well, your output should end with the package installation similar to this:

 

 

9) As of this writing, you can't run Azure Mobile Services locally in the emulator, so adding the npm package doesn't do much besides mirror what your setup looks like in the cloud. The important thing here is to open package.json and ensure that mongoose is listed as one of your dependencies:
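The dependencies section of package.json should end up with an entry along these lines (the version shown here is just an example):

"dependencies": {
    "mongoose": "~3.8.0"
}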

 

(I found that npm didn't do this automatically in my Azure repo.)

 

10) Now that we've got our database driver, how exactly do we get our Mongo database? Luckily, to make things easy, we have Azure add-ons, where we can find MongoLab. (Obviously you don't have to use MongoLab's MongoDB, but for this demo that's what I'll use.)

To add MongoLab jump back to your portal, click the plus on the bottom left and select 'Store':

 

 

Pick a sandbox account (which is free) and give it a name. (Make sure to use the same region as your mobile service for best performance!):

 

 

Complete your 'Purchase' and you'll see that you have a MongoLab MongoDB in your Add-ons section:

 

 

11) Select Manage which will open up your MongoLab Portal. Select 'Add Collection':

 

 

Call it todoitems:

 

 

Great! Now we've got a place to stash our todoitems for our app!

12) Head back to the portal and copy the connection string from your add-on's page by clicking 'Connection info':
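The connection string is a standard MongoDB URI; it will look something along the lines of:

mongodb://<user>:<password>@<host>.mongolab.com:<port>/<database>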

 

 

13) Jump over to your mobile service's Configure page and scroll down to app settings. This is a place where we can privately keep sensitive info, like the connection string we copied from MongoLab, without placing it directly in our code. Add a setting called 'MongoConnectionString' and paste the connection string from the previous step as the value.

 

 

 

 

This makes it easy to share your code on GitHub or with other collaborators. Also, if you prefer to use a configuration file, check out my post on using JSON config files instead.
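Once it's saved, the setting is exposed to your scripts through the process.env object (we'll use it in the next step):

// Reads the 'MongoConnectionString' app setting we just added in the portal
var connectionString = process.env.MongoConnectionString;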

 

14) Now, let's write the POST /api/todoitem API. Go to the API you created in Step 3 and insert the following:
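Here's a minimal sketch of what the script can look like (error handling is kept to a bare minimum, and the schema fields simply mirror the JSON we'll POST in Step 15):

var mongoose = require('mongoose');

// Compile the model once, in the global scope of the script, NOT inside the
// request handler, so it is only compiled a single time.
var todoItemSchema = new mongoose.Schema({
    category: String,
    description: String,
    user: {
        id: String,
        name: String
    }
});

var TodoItem = mongoose.model('TodoItem', todoItemSchema);

// Open the connection using the app setting we added in Step 13
mongoose.connect(process.env.MongoConnectionString);

exports.post = function (request, response) {
    // Consider validating request.body here for defensive purposes
    var item = new TodoItem(request.body);
    item.save(function (err, saved) {
        if (err) {
            return response.send(500, { error: err.message });
        }
        // Return the mongo-generated id of the created item
        response.send(201, { id: saved._id });
    });
};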

 

 

The scope of this post isn't necessarily to show you how to use Mongoose, but do note that you should not compile your Mongoose model inside the API handler; define it in the global scope so that it is only compiled once. The model defined by todoItemSchema associates Todo items with Users. We use the process.env object to get to the connection string we placed in Step 13, open the connection to Mongo, and place the JSON object directly in the database. I should caution that it's a good idea to validate the JSON you receive, for defensive purposes.

 

After you're done writing the API implementation, go ahead and commit and push to your repo:

>git commit -am "Add todoitem API and mongoose dependency"
>git push origin master



Now here's the cool part! This will trigger the installation of the npm packages in the remote repo and the deployment to the mobile service. You should see your git console output the installation of mongoose and its dependencies and that the deployment was successful:

 

 

15) Now let's test our POST API! I'll use the Advanced REST Client Chrome extension since it's super handy for debugging REST APIs. Make the following request:

POST https://<yourservicename>.azure-mobile.net/api/todoitem

HEADERS:

X-ZUMO-APPLICATION : <YOUR APPLICATION KEY>

BODY:

 

{"category":"MustDos", "description":"Make more apps!", "user": { "id":"Facebook:2432423", "name": "Steven Edouard" }}

 

Success! The API returned the mongo-generated id of the created object. We've written to our MongoDB from our Azure mobile app! Now, can we get the Mongo object back and party on that?

 

16) Get back to your text editor and add the implementation for the GET API to the same script:
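Again, a minimal sketch; it reuses the TodoItem model already defined at the top of the same script:

exports.get = function (request, response) {
    // Look the item up by the itemId query string argument
    TodoItem.findById(request.query.itemId, function (err, item) {
        if (err || !item) {
            return response.send(404, { error: 'item not found' });
        }
        response.send(200, item);
    });
};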

 

 

 

This script returns the TodoItem whose id matches the itemId URL query argument.

 Let's test it out! Do the following call:

 GET https://<yourservicename>.azure-mobile.net/api/todoitem?itemId=<itemId returned from step 15>

 HEADERS: X-ZUMO-APPLICATION: <YOUR APP KEY>

 

Huzzah! We got our mongo object back!

 

Now you can integrate MongoDB into your clients using authenticated APIs. Why is this so awesome? Because as a developer, YOU now have more choices in how you store your data, with the ease of using Azure Mobile Services. Depending on your application, scale, and costs, this gives you a great alternative to Mobile Services SQL Tables.

 

You can find the finished code for this service at: https://github.com/sedouard/AzureMongo

 

Happy Coding!

 

 

 

Using JSON Configuration files for Azure Mobile Services

If you log into your Azure Mobile Services (AMS) portal, you'll notice that you can now deploy your service via Git. This is an awesome feature that makes life way easier. Scott Guthrie has a really good post on it.

 

I'm used to building full-fledged cloud services (with worker roles, web roles, etc.). With AMS you get a bit more platform, and not as much knowledge about the infrastructure, compared to other PaaS services like web and worker roles. That's great, but it became a real pain when I was trying to use configuration files with my mobile services.

Most people like to keep settings like access keys, connection strings, account settings, etc. in one place. And if you're using Node.js for your mobile services (which I'm really becoming a fan of, by the way!), you probably want some sort of JSON configuration like this:

 

{"MongodbConnectionString":"mongodb://MongoLab-4q:X7TH5Lab-4q",
 "Storage.AccountKey":"fy6fTMAFrAPNH",
 "Storage.AccountName":"mystorageaccount",
 "Storage.PhotoContainerName":"maincontainer"}



Personally I like to use nconf to read my configuration file because it's really straightforward. The problem is that Azure Mobile doesn't provide you with very much information about the environment in which your code is running. That's the point of PaaS... isn't it? :-)

Well, anyway, if you follow the how-to on the nconf npm page you'll run into a problem. Say you have a custom API like so:

var nconf = require('nconf');

// relative path - this is the broken version
nconf.argv().env().file({ file: '../shared/config.jsn' });

exports.get = function(request, response) {
    // Use "request.service" to access features of your mobile service, e.g.:
    //   var tables = request.service.tables;
    //   var push = request.service.push;
    var accountName = nconf.get('Storage.AccountName');
    console.log('Connecting to blob service account: ' + accountName);
};


If you place your config.jsn in the shared folder, under <ProjectRoot>/shared/config.jsn, you'll be surprised when you look at your log output:

'Connecting to blob service account: undefined'

:-O!!

So what's going on here? It turns out (after a good amount of testing what worked and what didn't) that there's something funky going on with the current working directory. So instead, if you want to get to the configuration file, use __dirname + '/path/to/configfile.jsn'.

Note: As unintuitive as it sounds, don't name your config file with the extension '.json' or Azure will confuse it with a route configuration. It will break your APIs with mysterious 500 errors.

So the working version of the example above (assuming you placed your configuration file in the shared folder) is:

var nconf = require('nconf');

// __dirname makes the config file path absolute at run-time
nconf.argv().env().file({ file: __dirname + '/../shared/config.jsn' });

exports.get = function(request, response) {
    // Use "request.service" to access features of your mobile service, e.g.:
    //   var tables = request.service.tables;
    //   var push = request.service.push;
    var accountName = nconf.get('Storage.AccountName');
    console.log('Connecting to blob service account: ' + accountName);
};




After this, everything seems to work just fine. So to sum it up, to use JSON config files in your Azure Mobile Service:

1) DON'T name the config file with the extension .json (you'll get unexplained 500s on calls to your APIs).
2) DO use the __dirname special Node.js property to make the file path absolute at run-time.


Hope this saves you some time!



'Til next time!

3 Simple things to save big on your Azure bill

If you're anything like me, your first naïve attempt at a small indie cloud app burned money faster than a bachelor party in Vegas. Here are a few tips that can save you big on your cloud apps:

 

  • If you already have a redundant data solution, store your data on-premises

You may already be at a small company with a data backup solution and the associated infrastructure. One thing you may notice is that cloud data isn't super cheap: 1 TB of locally redundant data may cost you upwards of $70/mo, and even more for geo-redundant storage. If you already have your own storage infrastructure in place, it may make more sense to use Azure cloud storage as a temporary buffer, not a final storage place, and then pull the data down on-premises.

 

Otherwise, just be very mindful about what you store in the cloud and understand whether you need to be able to access that data readily, from anywhere.

 

  • Pay close attention to your VM sizes for Worker/Web Roles

The most expensive thing in the cloud is compute hours. When you create a cloud service in Visual Studio, adding a worker/web role defaults to the Small VM size. What it doesn't tell you is that this Small VM will set you back about 70 bucks a month.

 

 

Try an Extra Small VM on for size, see if it fits your service's needs, and move up as appropriate.

 

Did you know that a Small VM (2 GHz, 1.75 GB memory) costs about $70/mo vs. an Extra Small at about $30/mo?

 

  • Combine worker roles into a single role instance

 

You shouldn't think of an Azure worker role as a single task in your app. Remember, each role instance => 1 VM, and VMs aren't cheap! By default, the starting code for a worker role is really set up to do only one task forever:

 

 

 

Don't do this!

 

Having a single task per worker leads to either a monstrously large and unorganized task, or a separate worker (and VM) for each simple task.

 

Instead, do this:

 

 

Sub-divide the worker role instance. Create an interface, call it IWorkerRole, with a few methods to start, stop and run a sub-worker.

 

 

Now, implement this interface for however many (sub)workers you want like below:

 

 

In my example I have ProcessingWorker1, ProcessingWorker2 and ProcessingWorker3. You can imagine these doing things such as caching, listening to a service bus, or cleaning up a database.

 

*Note: Be sure to catch all exceptions (and report them to an error trace log) at the base of every OnRun() in each processing worker. If you don't, any one processing worker can take down the rest, halting all work until the entire instance restarts.

 

 Get the full template starter project here: https://github.com/sedouard/MultiWorkerRole

Using Service Bus with Cloud Storage to Transfer Files Anywhere

One thing I notice is the lack of awareness of cloud tools that can save your team a lot of time in infrastructure development for very little $$$.

There's all this hype about the cloud and how it's the way of the future, and this is true. But the cloud isn't a binary thing: you can use parts of it, all of it, or use it just for certain applications.

Recently I had a small problem to solve. I needed to get data from customer and partner servers living in a data center, and those files had to get from their servers to our server for diagnostics analysis. The data also needed to be sent securely.

Problem is, there was no (really) easy way to get their files from their server to mine. A few options were considered, from an FTP server (which is horribly insecure) to SkyDrive and similar web APIs. These options, although feasible, were clunky and lacked the simplicity I wanted.

Then I tried using Azure Blob Storage and Service Bus. Azure Blob Storage is a place where you can store any binary file data you want, and Service Bus lets you send tiny messages between any computers connected to the internet (usually some serializable info in XML or JSON). These services have excellent .NET client libraries via NuGet (and for other languages, too!), which takes a lot of work off your hands.

Fast forward about 4 hours of work and I have a sample solution to share. It uses 2 console apps:

 

Uploader.exe (of which there may be N instances running around the world.)

 

Downloader.exe. (In my situation I have just 1 running on my server)

 

I use these apps in batch automation to periodically send data files to my server.

 

https://github.com/sedouard/UploadAnywhere

Create your service bus queue and storage account. Once you do that you can use the console apps in this solution by changing the app configs:

-Downloader

  -App.config

and

-Uploader

  -App.config

<!-- Storage account connection to service blob storage -->
<add key="Uploader.Storage.ConnectionString" value="[YourStorageAccountConnectionString]" />
<!-- Service bus info to use to notify of file upload to downloader -->
<add key="Uploader.ServiceBus.ConnectionString" value="[YourQueueConnectionString]" />
<add key="Uploader.UploadQueueName" value="[queuename]" />

After changing the configs to your connection strings and queue name, you can use the uploader like so:

Uploader.exe [containername] [localfilename] [cloudfilename]

It's important to note that your container name must not have punctuation or capital letters, although the app will auto-lowercase the name for you. The cloud file name should be something like folder1/folder2/myfile.txt.

What will happen is that Uploader first uploads the file to a location in cloud storage and then sends a JSON message to the download queue (specified by Uploader.UploadQueueName) pointing to that file. When you run Downloader.exe, check its App.config to confirm where you want downloaded files to land; the root directory is specified by Downloader.DownloadRootDir:

<add key="Downloader.DownloadRootDir" value="C:\temp\Downloads" />




The cool thing here is that when you run Downloader.exe, it will just hang around and wait for messages from the download queue. The message is a simple .NET object serialized via good ol' JSON.NET:


public class FileUploadedMessage
{
    public string FileName { get; set; }
    public DateTime UploadTime { get; set; }
    public string ContainerName { get; set; }
}

 

In Uploader.exe this is sent to the service bus like so:

static void SendFileUploadedMessage(string fileName, string containerName)
{
    FileUploadedMessage message = new FileUploadedMessage()
    {
        FileName = fileName,
        UploadTime = System.DateTime.UtcNow,
        ContainerName = containerName
    };

    //Send the message up to the queue to tell the downloader to pull this file.
    //not async - this will block, but not a big deal.
    s_QueueClient.Send(new BrokeredMessage(JsonConvert.SerializeObject(message)));
}



The message obviously needs to be sent after you place the file in blob storage. Afterwards, the Downloader.exe app can read the message, pull the data off blob storage, and save it to a corresponding file:

var message = s_QueueClient.Receive();

//no new messages - go back and wait again for more
if (message == null)
{
    continue;
}

FileUploadedMessage uploadedMessage = JsonConvert.DeserializeObject<FileUploadedMessage>(message.GetBody<string>());

Console.WriteLine("Got uploaded file notification for " + uploadedMessage.FileName);

// Retrieve a reference to a container.
CloudBlobContainer container = blobClient.GetContainerReference(uploadedMessage.ContainerName.ToLower());

//Get the cloud blob which represents the uploaded file
var blockBlob = container.GetBlockBlobReference(uploadedMessage.FileName);

//build the local file path based on the download root folder specified in the .config file
var localFilePath = Path.Combine(s_DownloadRootFolder, uploadedMessage.ContainerName.ToLower());
var filePath = uploadedMessage.FileName.Replace("/", "\\");
localFilePath = Path.Combine(localFilePath, filePath);

if (!Directory.Exists(Path.GetDirectoryName(localFilePath)))
{
    Directory.CreateDirectory(Path.GetDirectoryName(localFilePath));
}

//Replace the file on disk if the cloud uploaded file is in the same mapped location in the downloads directory
if (File.Exists(localFilePath))
{
    File.Delete(localFilePath);
}

//Get the file
using (Stream cloudStoredBits = blockBlob.OpenRead())
using (FileStream fs = new FileStream(localFilePath, FileMode.CreateNew, FileAccess.ReadWrite))
{
    Console.WriteLine("Downloading Cloud file [" + uploadedMessage.ContainerName + "]" + uploadedMessage.FileName
        + " to " + localFilePath);
    cloudStoredBits.CopyTo(fs);
}

//Delete it from blob storage. Cloud storage isn't cheap :-)
Console.WriteLine("Deleting Cloud file [" + uploadedMessage.ContainerName + "]" + uploadedMessage.FileName);
blockBlob.Delete(DeleteSnapshotsOption.IncludeSnapshots);


And just like that, anywhere in the world that Uploader.exe is running, it can send files directly to your local server directory as if it were part of your local network.

Here is why this is incredibly powerful:

-No virtual machines are used here. Virtual machines are inherently expensive and will run you at least $15/mo on Azure for the smallest instance.
-(Almost) no storage is really used. Since redundant cloud storage isn't exactly cheap, if you don't need to keep files up there, remove them.
-Service bus messages are NEARLY free (well, depending on how many messages you send). They run at about $1.00 per 1 million messages.

Running this entire infrastructure should be nearly free at even modest amounts of load (assuming you delete files from cloud storage as I am doing).

Since we aren't really using much space (we delete files as we get them) and service bus messages are so dirt cheap, you end up with a cloud services bill that's a tiny fraction of running your own server. It's also just as reliable as a server and has the scalability of Azure.

I found this to be incredibly useful for on-premises infrastructure (especially test infrastructure). Imagine you have a lot of on-premises machines and you would like to test your product on them. You can distribute the work amongst those machines using messaging queues and package up the tests in temporary staging storage in the cloud. Tests can be unpacked, based on messages representing each test package, and run on each machine. Implementing such infrastructure in the past would have cost weeks of work just to get right.

You can also imagine how this could be useful for custom telemetry systems.