Steven Edouard

Developer Advocate. Tech Enthusiast. Life Enthusiast.


A Day of Unity Coming Near You!

Interested in creating your own game with Unity and actually publishing it to make money? Microsoft is hosting A Day of Unity, a free event for beginner and advanced developers to learn about Unity and how to publish and make money on the Windows and Windows Phone stores.


Where and When?

Personally, you can find me at the Day of Unity on 4/22 in Sunnyvale, CA. You can find a full list of locations here.

Why Windows?

Windows Phone is outselling the iPhone in 24 markets, and Windows 8 has more than 150 million licenses sold. With a less saturated store marketplace, you stand a much better chance of producing revenue-generating games on Windows.

What’s going to be covered?

Session 1: Publishing Games on Windows

Learn the requirements you need to meet to publish your game on Windows Phone and Windows Store.

Unity Skill Level: Beginner and Intermediate

Session 2: Introduction to Unity

Experts from Microsoft and Unity will guide you through the Unity development toolset and teach you how to build a 2D game and export it for Windows and Windows Phone. At the end of this session you will walk away with the fundamental knowledge to bring your own game ideas to life!

Unity Skill Level: Intermediate


Session 3: Demos, Voting & Prizes

In a series of fast-paced 2-minute demos, beginner and expert developers alike will show off their ported games. The best games will be decided by the audience!

What do you need?


  • A PC with Windows 8 or 8.1
  • Visual Studio 2012 or 2013
  • Unity 4.3.3+


But I only have a Mac:

  • No problem - shoot me a personal email before the event and I can get you set up with Boot Camp and get your Mac ready!



What are you waiting for? Register here!

Azure Mobile & MongoDB? It's like Peanut Butter & Jelly!

Azure Mobile Services Git Deploy is a new feature of Mobile Services that lets you write your scripts in the text editor of your choice and deploy by pushing your local repo to Azure.


That's cool on its own, but the really cool part is that, unlike before, you can now add ANY Node.js npm package to your repo. And you know what's really, really cool? You can use MongoDB as an alternative to Azure Mobile SQL tables!



So let's get started:


1) Sign into your Azure Portal and add a new mobile service:



2) Although we'll be using Mongo, every Mobile Service has an associated SQL database. So just use an existing one or create a new one for free.



3) After your mobile service has been created, head over to the API tab. We need to make a custom API for clients to talk to our MongoDB:



Let's call this API 'todoitem'. For the purposes of this demo, we'll use the 'Anybody with the Application Key' security permission. For actual mobile clients, I'd recommend using 'Only Authenticated Users' to keep your database secure.


     We will implement the GET and POST APIs.



4) Before mucking with our scripts, let's set up source control so we can pull down everything we need to connect to Mongo. Head back to the Mobile Service Dashboard:




    Notice the 'Setup source control' on the dashboard. Go ahead and click it.

5) The Portal will automagically take you to the Configure tab, but first we have to set up our source control credentials. Azure generates a default login, but I found it confusing as to what that login actually is.

    Go back to the Dashboard tab and click 'Reset your source control credentials'. Enter a user name and password for your Git account. (Note: you can't use your Microsoft Account credentials.)





6) Hop back to your Configure tab and grab the Git URL:



7) Now to clone the repo! Use your favorite git client and run the following commands in your shell:


>git clone <your git repo link>
>Username for '<your git repo link>': <your git username from step 5>
>Password for '<your git repo link>': <your git password from step 5>

8) Navigate to the <your service name>/service folder in your local repo. You should see a folder layout as follows:


As you can see, Azure set up a nicely organized repo for you, with your new 'todoitem' API and a JSON config file for it.

For this demo, we'll use the mongoose Node.js driver. Install Mongoose by navigating to <your service name>/service and running the following command:

>npm install mongoose

If all goes well, your output should end with a successful package installation, similar to this:



9) As of writing this post, you can't run Azure Mobile Services locally in the emulator, so adding the npm package doesn't do much besides mirror what your setup looks like in the cloud. The important thing to do here is open package.json and ensure that mongoose is listed as one of your dependencies:
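Something along these lines (a minimal sketch: keep whatever name and version fields Azure generated for your service, and your mongoose version may differ):

{
  "name": "todoitem-service",
  "version": "1.0.0",
  "dependencies": {
    "mongoose": "~3.8.0"
  }
}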


(I found that npm didn't do this automatically in my Azure repo.)


10) Now that we've got our database driver, how exactly do we get our Mongo database? Luckily, to make things easy, we have Azure add-ons, where we can find MongoLab. (Obviously you don't have to use MongoLab's MongoDB, but for this demo that's what I'll use.)

To add MongoLab jump back to your portal, click the plus on the bottom left and select 'Store':



Pick a sandbox account (which is free) and give it a name. (Make sure to use the same region as your mobile service for best performance!):



Complete your 'Purchase' and you'll see that you have a MongoLab MongoDB in your Add-ons section:



11) Select Manage, which will open up your MongoLab portal. Select 'Add Collection':



Call it todoitems:



Great! Now we've got a place to stash our todoitems for our app!

12) Head back to the portal and copy the connection string from your Add-ons page by clicking 'Connection info':



13) Jump over to your Mobile Service Configure page and scroll toward the bottom to app settings. This is a place where we can privately keep sensitive info, like the connection string we copied from MongoLab, without placing it directly in our code. Add the setting 'MongoConnectionString' to your app settings, with the connection string from the previous step as the value.





This makes it easy to share your code on GitHub or with other collaborators. Also, if you prefer to use a configuration file, check out my post on using JSON config files instead.


14) Now, let's write the POST /api/todoitem API. Go to the API you created in Step 3 and insert the following:
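Here's a minimal sketch of what that handler can look like. The schema fields simply mirror the JSON body we'll POST in Step 15; everything else is illustrative rather than the one true implementation:

var mongoose = require('mongoose');

// Compile the schema and model ONCE, in the global scope - not inside
// the request handler - so it's only compiled the first time the script loads.
var todoItemSchema = new mongoose.Schema({
    category: String,
    description: String,
    user: {
        id: String,
        name: String
    }
});
// mongoose pluralizes 'TodoItem' to the 'todoitems' collection we created in Step 11
var TodoItem = mongoose.model('TodoItem', todoItemSchema);

exports.post = function (request, response) {
    // The app setting from Step 13 surfaces through process.env;
    // readyState === 0 means we haven't connected yet, so connect lazily, once
    if (mongoose.connection.readyState === 0) {
        mongoose.connect(process.env.MongoConnectionString);
    }

    // Place the incoming JSON body directly in the database
    // (in a real app, validate request.body first!)
    new TodoItem(request.body).save(function (err, item) {
        if (err) {
            return response.send(500, { error: err.message });
        }
        // Return the mongo-generated id of the created object
        response.send(201, { id: item._id });
    });
};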



The scope of this post isn't necessarily to show you how to use mongoose, but do note that you should not compile your mongoose model within the API handler; do it in the global scope so that it's only compiled once. The model defined by todoItemSchema associates todo items and users. We use the process.env object to get to the connection string we placed in Step 13, open the connection to Mongo, and place the JSON object directly in the database. I should caution that it's a good idea to validate the JSON you receive, for defensive purposes.


After you're done writing the API implementation, go ahead and commit and push to your repo:

>git commit .
>git push origin master

Now here's the cool part! The push triggers installation of the npm packages in the remote repo and deployment to the mobile service. You should see your git console output the installation of mongoose and its dependencies, and that the deployment was successful:



15) Now let's test our POST API! I'll use the Advanced REST Client Chrome extension, since it's super handy for debugging REST APIs. Make the following request:

POST https://<yourservicename>.azure-mobile.net/api/todoitem





{"category":"MustDos", "description":"Make more apps!", "user": { "id":"Facebook:2432423", "name": "Steven Edouard" }}


Success! The API returned the mongo-generated id of the created object. We've written to our MongoDB from our Azure Mobile Service! Now, can we get the Mongo object back and party on that?


16) Get back to your text editor and add the implementation for the GET API:
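Again, just a sketch: it reuses the same global TodoItem model and lazy connection shown in the POST handler above.

exports.get = function (request, response) {
    if (mongoose.connection.readyState === 0) {
        mongoose.connect(process.env.MongoConnectionString);
    }

    // Look the item up by the itemId url query argument
    TodoItem.findById(request.query.itemId, function (err, item) {
        if (err || !item) {
            return response.send(404, { error: 'item not found' });
        }
        response.send(200, item);
    });
};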




This script returns the TodoItem whose id matches the itemId url query argument.

Let's test it out! Make the following call:

GET https://<yourservicename>.azure-mobile.net/api/todoitem?itemId=<itemId returned from step 15>



Huzzah! We got our mongo object back!


Now you can integrate MongoDB into your clients using authenticated APIs. Why is this so awesome? As a developer, YOU now have more choices in how you store your data, with the ease of Azure Mobile Services. Depending on your application, scale and costs, this gives you a great alternative to Mobile Services SQL tables.


You can find the finished code for this service at:


Happy Coding!




Using JSON Configuration files for Azure Mobile Services

If you log into your Azure Mobile Services (AMS) portal, you'll notice that you can now deploy your service via Git. This is an awesome feature that makes life way easier. Scott Guthrie has a really good post on it.


I'm used to building full-fledged cloud services (with worker roles, web roles, etc.). With AMS you get a bit more platform, and a bit less knowledge about the infrastructure your code runs on, than with other PaaS offerings like web and worker roles. That's great, but it became a real pain when I was trying to use configuration files with my mobile services.

Most people like to keep settings like access keys, connection strings, and account settings in one place. And if you're using Node.js for your mobile services (which I'm really becoming a fan of, by the way!), you probably want some sort of JSON configuration like this:
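Something like this sketch (the keys are illustrative; note the flat 'Storage.AccountName' key, which is exactly what the nconf.get() call below looks up):

{
    "Storage.AccountName": "mystorageaccount",
    "Storage.AccountKey": "[your account key]",
    "SomeOtherSetting": "some value"
}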



Personally, I like to use nconf to read my configuration file because it's really straightforward. The problem is that Azure Mobile doesn't give you much information about the environment in which your code is running. That's the point of PaaS... isn't it? :-)

Anyway, if you follow the how-to on the nconf npm page, you'll run into a problem. Say you have a custom API like so:

var nconf = require('nconf');
nconf.argv().env().file({ file: '../shared/config.jsn' });

exports.get = function(request, response) {
    // Use "request.service" to access features of your mobile service, e.g.:
    //   var tables = request.service.tables;
    //   var push = request.service.push;
    var accountName = nconf.get('Storage.AccountName');
    console.log('Connecting to blob service account: ' + accountName);
};


and that you place your config.jsn in your shared folder under <ProjectRoot>/shared/config.jsn. You'll be surprised when you look at your log output:

'Connecting to blob service account: undefined'


So what's going on here? It turns out (after a good amount of testing what worked and what didn't) that there's something funky going on with the current working directory. If you want to get to the configuration file, use __dirname + '/path/to/configfile.jsn' instead.

Note: As unintuitive as it sounds, don't name your config file with the extension '.json' or Azure will confuse it with a route configuration. It will break your APIs with mysterious 500 errors.

So the working version of the example above (assuming you placed your configuration file in the shared folder) is:

var nconf = require('nconf');
nconf.argv().env().file({ file: __dirname + '/../shared/config.jsn' });

exports.get = function(request, response) {
    // Use "request.service" to access features of your mobile service, e.g.:
    //   var tables = request.service.tables;
    //   var push = request.service.push;
    var accountName = nconf.get('Storage.AccountName');
    console.log('Connecting to blob service account: ' + accountName);
};


After this, everything seems to work just fine. So, to sum up how to use JSON config files in your Azure Mobile Service:

1) DON'T name the config file with the extension .json (you'll get unexplained 500s on calls to your APIs).
2) DO use the special Node.js __dirname variable to make the file path absolute at run-time.

Hope this saves you some time!

'Til next time!


Take over 800 high school students, a slew of tech companies, a building open 24/7, and a ton of caffeine. What do you get?


One of the biggest (and youngest) hack-a-thons on the West Coast!


This past weekend I had the opportunity to support the team at HSHacks, a two-day hackathon hosted at PayPal HQ for a LOT of Bay Area high school students. Students came from near and far to participate in the event, and each was offered a variety of technologies to develop on, from Android to the Pebble watch to Microsoft tech like TouchDevelop.


The participating students came from a variety of backgrounds and brought their own wide spectrum of devices: PCs, Macs, and even a few Ubuntu machines floating around.



Naturally we set up a pretty sweet device bar with all the Microsoft gadgets you can think of:



Our App Lab room focused on Construct 2 and TouchDevelop, two great platforms for beginner coders to hack games and apps on. Both platforms let students eventually export their apps to the Windows Store.


(students learning how to use MakeyMakey with TouchDevelop)

We even brought in a few MakeyMakey controllers, which are simple Arduino-based devices that emulate keyboard presses by closing arbitrary circuits (think positive terminal on a banana, negative on your wrist, and close the circuit with a fingertip). This video illustrates what I mean:





We took the ideas in the MakeyMakey video to create some makeshift foot pads for gaming:



Students could throw a couple of 'on key pressed' event handlers into their apps:



And all of a sudden, we have a creative controller for their apps!




Prizes? We had those too! The top winning apps were:


  1. A 'Gravity' (the movie) themed game where the player could 'slingshot' from one interstellar body to another
  2. A pack of MakeyMakey TouchDevelop games
  3. A 'Star Wars' HTML5/JavaScript native Windows 8 game, similar to Flappy Bird but vertical



Winners took home Xbox Ones, Nokia phones and a Windows 8 laptop!


Overall I was very impressed by how many of these students (ages 13-16) managed to code impressive experiences in such a short amount of time. It highlights how far we have come, but also how much further we have to go, in getting CS education into schools. Of every group I talked to, only about 1/3 of the students had a good CS education, and even that was a combination of classes and self-teaching.


The best thing that came out of this event was the amount of learning spread across such a large number of students, from learning how to do web requests to running code on mobile and even embedded devices.


Next month Microsoft will be sponsoring LA Hacks, another student-run hackathon, April 11-13. Keep a lookout for us there!

What's it like being an SDET at Microsoft?

You really have to admit that the role of a software 'tester' doesn't sound as sexy as other developer jobs. In fact, that's probably why the official title of 'testers' at Microsoft is 'Software DEVELOPMENT ENGINEER in Test': to convey that these folks are just as capable as your standard software developer. Microsoft has truly innovated on this role in a way that most other software firms haven't, so much so that in the past few years the SDET role has started spreading to other reputable organizations such as Google and Amazon.

I spent a little over 2 years as a tester at Microsoft in the .NET Runtime (the Common Language Runtime, for you .NET fanboys out there), right after I got my Computer Engineering undergraduate degree. During that time Microsoft churned the organization heavily, as you may have seen in the tech blogs, and with that churn the role of the test position has shifted as well.

As a Microsoft SDET, my job really wasn't to ensure that a particular API behaved in a certain way or that unit test coverage existed for a certain feature; our developers were responsible for that work. Instead, SDET work focused on test tooling, such as static validation tools and sophisticated, reliable infrastructure for running end-to-end and unit-level tests that drove developer productivity. Developers were responsible for ensuring that the quality of the code they checked in was up to snuff; testers like myself ensured that the holistic quality of the product was sound. This could be anything from harnessing apps to creating sophisticated semantic validation tools that could determine whether two pieces of generated code were the 'same'.

Now, don't get me wrong: my SDET experience may differ from teams with faster product cycles, like online services, but the theme remains the same. Testers at Microsoft are not simply point-and-click testers. They do a lot of really sophisticated things and focus on ensuring the quality of the customer scenarios in the product. That's a lot more fun than what the title 'tester' conveys.

What are the skills required? In general I would say nearly the same as for any highly talented developer, with an increased emphasis on creativity. You've got to have that creative edge to think up really cool ways to validate the product, and even to validate things like feature usage through telemetry systems.


Intro to App Building course w/ Touch Develop



A couple of months ago I had the pleasure of filming a digital literacy course on intro to app building with Touch Develop. Microsoft just released that course, and you can feel free to peruse it here!


Happy Hacking!

3 Simple things to save big on your Azure bill

If you're like me, your first naïve attempt at a small indie cloud app will burn money faster than a bachelor party in Vegas. Here are a few tips that can save you big on your cloud apps:


  • If you already have a redundant data solution, store your data on-premises

You may already be at a small company with a data backup solution and the associated infrastructure. One thing you may notice is that cloud data isn't super cheap: 1 TB of locally redundant data may cost you upwards of $70/mo, and even more for geo-redundant storage. If you already have your own storage infrastructure in place, it may make more sense to use Azure cloud storage as a temporary buffer rather than a final storage place, and then pull the data down on-premises.


Otherwise, just be very mindful about what you store in the cloud, and understand whether you need to access that data readily, from anywhere.


  • Pay close attention to your VM sizes for Worker/Web Roles

The most expensive thing in the cloud is compute hours. When you create a cloud service in Visual Studio, adding a worker/web role will default to the Small VM size. What it doesn't tell you is that this Small VM will set you back about $70 a month.



Try an Extra Small VM on for size, see if it fits your service's needs, and move up as appropriate.


Did you know that a Small VM (2 GHz, 1.75 GB memory) costs about $70/mo, versus an Extra Small at about $30/mo?


  • Combine worker roles into a single role instance


You shouldn't think of an Azure worker role as a single task in your app. Remember, each role => 1 VM, and VMs aren't cheap! By default, the starter code for a worker role is really set up to do only one task, forever:




Don't do this!


Having your worker run a single task leads to either a monstrously large, unorganized task or a separate worker role for each simple task.


Instead, do this:



Sub-divide the worker role instance. Create an interface, call it IWorkerRole, with a few methods to start, stop and run a sub-worker.
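A minimal sketch of such an interface (the OnStart/OnRun/OnStop names here just mirror the note further down; name them however you like):

public interface IWorkerRole
{
    void OnStart();
    void OnRun();
    void OnStop();
}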



Now, implement this interface for however many sub-workers you want, like below:



In my example I have ProcessingWorker1, ProcessingWorker2 and ProcessingWorker3. You can imagine these doing work such as caching, listening to a service bus, or cleaning up a database.
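Here's a sketch of what one sub-worker and the top-level Run() can look like. The task bodies are placeholders, ProcessingWorker2/3 would look analogous, and it assumes using System.Threading, System.Collections.Generic and System.Diagnostics:

public class ProcessingWorker1 : IWorkerRole
{
    private volatile bool _running;

    public void OnStart() { _running = true; }
    public void OnStop() { _running = false; }

    public void OnRun()
    {
        while (_running)
        {
            try
            {
                // This sub-worker's actual task goes here
                // (e.g. poll a service bus queue, refresh a cache...)
            }
            catch (Exception ex)
            {
                // Swallow and trace, per the note below, so one bad
                // sub-worker can't take down the others
                Trace.TraceError(ex.ToString());
            }
        }
    }
}

// In the role entry point, spin each sub-worker up on its own thread:
public override void Run()
{
    var workers = new List<IWorkerRole>
    {
        new ProcessingWorker1(),
        new ProcessingWorker2(),
        new ProcessingWorker3()
    };

    var threads = new List<Thread>();
    foreach (var worker in workers)
    {
        worker.OnStart();
        var thread = new Thread(new ThreadStart(worker.OnRun));
        thread.Start();
        threads.Add(thread);
    }

    // Keep the role instance alive while the sub-workers run
    foreach (var thread in threads)
    {
        thread.Join();
    }
}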


*Note: Be sure to catch all exceptions (and report them to an error log/trace) at the base of every OnRun() on the processing worker. If you don't do this, any one processing worker can take down the rest, putting a halt to all work until the entire instance restarts.


 Get the full template starter project here:

Typescript: My Windows 8 App Experience

I decided to take a peek at TypeScript, the new compile-to-JavaScript language released by Microsoft (in a preview state) in late 2012. Basically, TypeScript brings application developers some static validation, similar to languages like C#, Objective-C, and C++. As my disclaimer above describes, I'm not too familiar with JavaScript, so naturally I decided to tackle TypeScript and JavaScript at the same time :-).


I have an existing cloud app called GiftMe, exposed over a RESTful interface (written with Web API). I decided it would be good to write an installed app for Windows 8.1 and, even more so, to write it as an HTML5/JavaScript app with TypeScript.




I was able to implement the app and submit it to the store in 3 days (granted, the app is simple; it has the same feature set as the web version). You can download GiftMe for Windows 8.1 here:


You can also find the GiftMe client app source code here:


GiftMe lets you sign in (exclusively with Facebook) and collects information about your friends to provide you with customized gift suggestions for them. The client application calls the web service to search for friends and get gift suggestions.


This post is primarily about my experience using TypeScript, and not so much about Windows 8.1 WinJS/HTML5 app development specifically (that'll come in a later post).


TypeScript Wins:


-Awesome VS IntelliSense integration

-Use JavaScript (when you want to)

-Very easy language to pick up


TypeScript Shortcomings:


-Type definitions not always in sync with JavaScript libraries

-No TypeScript debugging in Visual Studio

-Getting set up with VS was quirky



Very easy language to pick up

The TypeScript site has a lot of great examples, and understanding TypeScript syntax is really a breeze. If you're coming from JavaScript, C# or C++, you'll find it pretty easy. For example, here's a definition I have for the GiftMe client library:
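In spirit, it looked something like this sketch (the names are illustrative, not the real GiftMe source):

class GiftMeClient {
    // A member variable with an explicit type
    private serviceUrl: string;

    // A constructor with a typed parameter
    constructor(serviceUrl: string) {
        this.serviceUrl = serviceUrl;
    }

    // An instance method with typed parameters, including a typed callback
    public searchFriends(query: string, callback: (succeed: boolean, results: any[]) => void): void {
        // ... call the REST service, then hand the results to the callback ...
    }
}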



Coming from C#, there are some minor differences, such as type names coming after variables, but these are really easy to pick up. The above example explicitly defines a class with a member variable, a constructor, and an instance method with typed parameters. Doing this in pure JS is really sort of awkward, in my opinion.


You can also create interfaces (for example, for a particular JSON object):
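For instance, a sketch of typing a JSON object (the GiftSuggestion shape here is illustrative):

interface GiftSuggestion {
    title: string;
    price: number;
    url: string;
}

// A plain JSON object can satisfy the interface - no class required:
var suggestion: GiftSuggestion = {
    title: "Robot kit",
    price: 49.99,
    url: "http://example.com/robot"
};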



TypeScript interfaces are almost the same as traditional interfaces in strongly typed languages, except one cool thing you can do is use them to define the shape of JSON objects. I found this pretty handy, and actually didn't end up explicitly implementing any of the interfaces you see above.

Interfaces are what DefinitelyTyped '*.d.ts' files use to give types to common libraries such as jQuery. You can find these on NuGet or GitHub. This lets you use TypeScript on popular, already-existing JavaScript libraries. Typings like this make it very easy to create an excellent IntelliSense experience similar to .NET and C++.


Awesome VS IntelliSense Integration


Extending the previous point, IntelliSense is much better when you have types:

This is the 'bread and butter' feature of TypeScript. Coming from .NET development, it always makes me feel uneasy calling APIs where all the parameters are just 'var'. How do I know I'm passing the right thing? TypeScript really solves that problem.

Although JavaScript IntelliSense files help out with standard libraries, they don't give you the rich type information you need to be productive. This is almost reason enough alone to jump to TS.


Use JavaScript (when you want to)


This has to be one of the best things about the language. Because TypeScript compiles down to JavaScript, you can use JavaScript almost anywhere you want.



Here's an example of calling one of the GiftMe client library APIs, passing a callback with a TypeScript lambda expression:
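Again a sketch, reusing the hypothetical GiftMeClient from earlier:

var client = new GiftMeClient("https://giftme.example.com");

// Lambda-style callback: the types of 'succeed' and 'results'
// are inferred from the method signature
client.searchFriends("Steve", (succeed, results) => {
    if (succeed) {
        console.log("Found " + results.length + " matches");
    }
});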



Now here's the same thing using the plain JavaScript 'function' keyword. Notice how VS and TypeScript still understand the types of 'succeed' and 'results':
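Continuing the same hypothetical sketch:

// Same call with the 'function' keyword - still valid TypeScript,
// and the parameter types are still inferred from the signature
client.searchFriends("Steve", function (succeed, results) {
    if (succeed) {
        console.log("Found " + results.length + " matches");
    }
});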


This behavior is really good because when you look up example code in JavaScript, you can bring it right into your TypeScript and optionally 'type it out' if you want to. It also lets you take as much or as little TypeScript into your project as you want. Overall I found this makes things very flexible to fit my needs.


No TypeScript debugging in Visual Studio!!!




I gotta say, I was really disappointed when I realized the TypeScript SDK didn't let you debug in TypeScript.


This has to be the BIGGEST drawback. As an app developer, to complete the experience in any programming language you should be able to debug that code directly, not only its compiled form (JavaScript). Trying to set a breakpoint in a TypeScript file fails to bind, because the TypeScript code is never associated with the code running in the JavaScript engine.


Granted, there are posts online about how you can use compiler-generated mappings between JavaScript and TypeScript to debug in Firefox/Chrome/IE, but you can't do this in Visual Studio with a Windows 8 WinJS app.



For the purposes of GiftMe, it wasn't too big of a deal to just debug the JavaScript (partially because a lot of the GiftMe TypeScript has JavaScript in it; see 'Use JavaScript (when you want to)').


This led to one common mistake I made at least two dozen times: when you debug in JavaScript, you want to make your fixes in the JavaScript. This really pushed me to want to use JavaScript without TypeScript, because the TypeScript compilation step almost felt like it was 'getting in the way'.



Type definitions not always in sync with JavaScript libraries


One catch you have to know about TypeScript is that you can't actually use an existing JavaScript library (from the TypeScript language) without a type definition ('.d.ts') file. This file basically contains all the interface declarations of that library. The problem is that those interface declarations must be kept in sync with the actual JavaScript. The TypeScript guys have done a good job of bringing in the community to help keep popular libraries up-to-date; you can find these definitions on GitHub or in the NuGet package manager.


Even with these files, which are supposed to be up-to-date, I still ran into issues with the built-in JavaScript library for Windows 8:



In this case I pulled the DefinitelyTyped file for WinJS from the official CodePlex repository here. Even with what appeared to be the latest file, Visual Studio still reported errors like the one above. You can check the MSDN docs: this API does exist, and the code runs just fine.

Unfortunately, this was very prevalent throughout my entire solution:



Getting set up was quirky


My setup is VS2013 with the TypeScript SDK, and I found it very irritating to get TypeScript set up for a Windows 8.1 app. Unfortunately, no TypeScript templates exist for Windows 8.1 JavaScript/HTML5 apps. In VS2013, TypeScript files would always compile to JavaScript on save, which caused 'do you want to reload this file' prompts if I had the .js file open (which I would, because that's the only way to debug). I probably saw that dialog hundreds of times, which got annoying.

Overall, I would say the VS2013 integration experience isn't as stellar as I would have liked.

So... should you use TypeScript?

Although GiftMe is a very small application, it surfaced some obvious advantages of TypeScript along with some of its current kinks. In general, I would say you should definitely consider TypeScript if:

-You use Visual Studio for your JS development, AND

-You have a large, rather complicated application where code maintenance costs are high (think about how much bug fixes cost).


If you don't fall into that bucket, TypeScript is a really good nice-to-have: you can do without it, but it's much nicer to have it. Since any JavaScript is valid TypeScript, if you enable it in your project you can choose to take on as much or as little TypeScript as you want. So really, there isn't any commitment here.

Hope this helps!




Using Service Buses with Cloud Storage to Transfer Files Anywhere

One thing I notice is a lack of awareness of cloud tools that can save your team a lot of time in infrastructure development for very little $$$.

There's all this hype about the cloud being the way of the future, and it's true. But the cloud isn't a binary thing: you can use parts of it, all of it, or use it just for certain applications.

Recently I had a small problem to solve: I needed to get data from customer and partner servers living in a data center over to our server for diagnostic analysis, and the data needed to be sent securely.

Problem is, there was no (really) easy way to get files from their servers to mine. A few options were considered, from an FTP server (which is horribly insecure) to SkyDrive and similar web APIs. These options, although feasible, were clunky and lacked the simplicity I wanted.

Then I tried Azure Blob Storage and Service Bus. Azure Blob Storage is a place where you can store any binary file data you want, and Service Bus lets you send tiny messages between any computers connected to the internet (usually some serializable info in XML or JSON). These services have excellent .NET client libraries via NuGet (and for other languages, too!), which takes a lot of work off your hands.

Fast-forward about 4 hours of work, and I have a sample solution to share. It uses 2 console apps:


Uploader.exe (of which there may be N instances running around the world.)


Downloader.exe. (In my situation I have just 1 running on my server)


I use these apps in batch automation to periodically send data files to my server.

Create your Service Bus queue and storage account. Once you've done that, you can use the console apps in this solution by changing the app configs:






<!-- Storage account connection for blob storage -->
    <add key="Uploader.Storage.ConnectionString" value="[YourStorageAccountConnectionString]" />
    <!-- Service bus connection used to notify the downloader of a file upload -->
    <add key="Uploader.ServiceBus.ConnectionString" value="[YourQueueConnectionString]" />
    <add key="Uploader.UploadQueueName" value="[queuename]" />

After changing the configs to your queue and account names, you can use the uploader like so:

Uploader.exe [containername] [localfilename] [cloudfilename]

It's important to note that your container name must not contain punctuation or capital letters (although the app will auto-lowercase the name for you). The cloud file name should be something like folder1/folder2/myfile.txt.

What will happen is that Uploader will first upload the file to a location in cloud storage and then send a JSON message to the download queue (specified by Uploader.UploadQueueName) pointing to that file. When you run Downloader.exe, check the app.config to confirm where you want downloaded files to land, specified by Downloader.DownloadRootDir:

<add key="Downloader.DownloadRootDir" value="C:\temp\Downloads" />

The cool thing here is that when you run Downloader.exe, it will just hang around and wait for messages from the download queue. The message is a simple .NET object serialized via good ol' JSON.NET:

public class FileUploadedMessage
{
    public string FileName { get; set; }
    public DateTime UploadTime { get; set; }
    public string ContainerName { get; set; }
}

In uploader.exe this is sent to the service bus like so:

static void SendFileUploadedMessage(string fileName, string containerName)
{
    FileUploadedMessage message = new FileUploadedMessage()
    {
        FileName = fileName,
        UploadTime = System.DateTime.UtcNow,
        ContainerName = containerName
    };

    //Send the message up to the queue to tell the downloader to pull this file.
    //Not async, so this will block - but that's not a big deal here.
    s_QueueClient.Send(new BrokeredMessage(JsonConvert.SerializeObject(message)));
}
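For completeness, here's a sketch of the upload side that precedes that call (the variable names are illustrative and assume the standard storage client objects used below):

// Upload the file to blob storage FIRST...
CloudBlobContainer container = blobClient.GetContainerReference(containerName.ToLower());
container.CreateIfNotExists();

CloudBlockBlob blockBlob = container.GetBlockBlobReference(cloudFileName);
using (var fileStream = File.OpenRead(localFileName))
{
    blockBlob.UploadFromStream(fileStream);
}

// ...THEN notify the downloader via the queue
SendFileUploadedMessage(cloudFileName, containerName);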

The message obviously needs to be sent after you place the file in blob storage. Afterwards, the Downloader.exe app can read the message, pull the data off blob storage, and save it to a corresponding file:

var message = s_QueueClient.Receive();

// No new messages - go back and wait again for more
if (message == null)
    continue;

FileUploadedMessage uploadedMessage = JsonConvert.DeserializeObject<FileUploadedMessage>(message.GetBody<string>());

Console.WriteLine("Got uploaded file notification for " + uploadedMessage.FileName);

// Retrieve a reference to the container
CloudBlobContainer container = blobClient.GetContainerReference(uploadedMessage.ContainerName.ToLower());

// Get the cloud blob which represents the uploaded file
var blockBlob = container.GetBlockBlobReference(uploadedMessage.FileName);

// Build the local file path based on the download root folder specified in the .config file
var localFilePath = Path.Combine(s_DownloadRootFolder, uploadedMessage.ContainerName.ToLower());
var filePath = uploadedMessage.FileName.Replace("/", "\\");
localFilePath = Path.Combine(localFilePath, filePath);

if (!Directory.Exists(Path.GetDirectoryName(localFilePath)))
    Directory.CreateDirectory(Path.GetDirectoryName(localFilePath));

// Replace the file on disk if the cloud uploaded file maps to the same location in the downloads directory
if (File.Exists(localFilePath))
    File.Delete(localFilePath);

// Get the file
using (Stream cloudStoredBits = blockBlob.OpenRead())
using (FileStream fs = new FileStream(localFilePath, FileMode.CreateNew, FileAccess.ReadWrite))
{
    Console.WriteLine("Downloading Cloud file [" + uploadedMessage.ContainerName + "]" + uploadedMessage.FileName
        + " to " + localFilePath);
    cloudStoredBits.CopyTo(fs);
}

// Delete it from blob storage. Cloud storage isn't cheap :-)
Console.WriteLine("Deleting Cloud file [" + uploadedMessage.ContainerName + "]" + uploadedMessage.FileName);
blockBlob.Delete();
And just like that, anywhere in the world Uploader.exe is running, it can send files directly to your local server directory as if it were part of your local network.

Here is why this is incredibly powerful:

-No virtual machines are used. Virtual machines are inherently expensive and will run you at least $15/mo on Azure for the smallest instance.
-(Almost) no storage is used. Since cloud redundant storage isn't exactly cheap, if you don't need to keep files up there, remove them.
-Service bus messages are NEARLY free (well, depending on how many messages you send). They run at about $1.00 per million messages.

Running this entire infrastructure should be nearly free at even modest amounts of load (assuming you delete files from cloud storage as I am doing).

It turns out that since we aren't using much space (we delete files as we get them), and service bus messages are so dirt cheap, you end up with a cloud service bill that's a tiny fraction of the cost of running a server. It's also just as reliable as a server, and it has the scalability of Azure.

I found this to be incredibly useful for on-premises infrastructure (especially test infrastructure). Imagine you have a lot of on-premises machines and you would like to test your product on them. You can distribute the work amongst those machines using messaging queues and package up the tests in temporary staging storage in the cloud. Tests can be unpacked via messages representing each test package and run on each machine. Implementing such infrastructure in the past would have cost weeks of work just to get right.

You can also imagine how this could be useful for custom telemetry systems.