Steven Edouard

Developer Advocate. Tech Enthusiast. Life Enthusiast.


3 Simple things to save big on your Azure bill

If you're like me, your first naïve attempt at a small indie cloud app probably burned money faster than a bachelor party in Vegas. Here are a few tips that can save you big on your cloud apps:

 

  • If you already have a redundant data solution, store your data on-premises

You may already be at a small company with a data backup solution and the associated infrastructure. One thing you may notice is that cloud data isn't super cheap: 1 TB of locally redundant data may cost you upwards of $70/mo, and even more for geo-redundant storage. If you already have your own storage infrastructure in place, it may make more sense to use Azure cloud storage as a temporary buffer, not a final storage place, and then pull the data down on-premises.

 

Otherwise, just be very mindful about what you store in the cloud, and understand whether you need to be able to access that data readily, from anywhere.

 

  • Pay close attention to your VM sizes for Worker/Web Roles

The most expensive thing in the cloud is compute hours. When you create a cloud service in Visual Studio, adding a worker/web role will default to the Small VM size. What it doesn't tell you is that this Small VM will set you back about $70 a month.

 

 

Try out an Extra Small VM for size, see if it fits your service's needs, and move up as appropriate.

 

Did you know that a Small VM (2 GHz, 1.75 GB memory) costs about $70/mo, versus an Extra Small at about $30/mo?

 

  • Combine worker roles into a single role instance

 

You shouldn't think of an Azure worker role as a single task in your app. Remember, each role => 1 VM, and VMs aren't cheap! By default, the starting code for a worker role is really set up to do only one task forever:
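Roughly what the default template gives you (reconstructed from the classic Visual Studio worker role template; details vary by SDK version):

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.TraceInformation("WorkerRole1 entry point called");

    while (true)
    {
        // one task, one whole VM, forever
        Thread.Sleep(10000);
        Trace.TraceInformation("Working");
    }
}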

 

 

 

Don't do this!

 

Having your worker run a single task leads to either a monstrously large, unorganized task or a separate worker (and a separate VM bill) for each simple task.

 

Instead, do this:
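A minimal sketch of the idea (assumed structure; the real starter project is linked below): the role's single Run() fans out to N sub-workers, each on its own Task, so one paid VM does many jobs.

public override void Run()
{
    var subWorkers = new List<IWorkerRole>
    {
        new ProcessingWorker1(),
        new ProcessingWorker2(),
        new ProcessingWorker3()
    };

    var tasks = subWorkers
        .Select(worker => Task.Run(() =>
        {
            worker.OnStart();
            worker.OnRun(); // each sub-worker loops forever
        }))
        .ToArray();

    // Block here; if every sub-worker somehow returns, the role recycles
    Task.WaitAll(tasks);
}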

 

 

Sub-divide the worker role instance. Create an interface, call it IWorkerRole, with a few methods to start, stop and run a sub-worker:
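Something like this (method names follow the OnRun() mentioned in the note below; see the linked repo for the real version):

public interface IWorkerRole
{
    void OnStart(); // acquire connections, queue clients, etc.
    void OnRun();   // the sub-worker's forever loop
    void OnStop();  // clean up on role shutdown
}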

 

 

Now, implement this interface for however many sub-workers you want, like below:
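A hypothetical sub-worker for illustration (the sample project's ProcessingWorker1-3 follow this shape); note the catch-all around the loop body, per the note below:

public class ProcessingWorker1 : IWorkerRole
{
    private volatile bool _stopping;

    public void OnStart()
    {
        // e.g. create a service bus queue client or cache connection
    }

    public void OnRun()
    {
        while (!_stopping)
        {
            try
            {
                // this sub-worker's one job: refresh a cache, drain a
                // queue, clean up a database, ...
            }
            catch (Exception ex)
            {
                // log and carry on, so one bad sub-worker doesn't halt
                // the whole role instance (System.Diagnostics.Trace)
                Trace.TraceError(ex.ToString());
            }
        }
    }

    public void OnStop()
    {
        _stopping = true;
    }
}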

 

 

In my example I have ProcessingWorker1, ProcessingWorker2 and ProcessingWorker3. You can imagine them doing things such as caching, listening to a service bus, or cleaning up a database.

 

*Note: Be sure to catch all exceptions (and report them to an error log/trace) at the base of every OnRun() in each processing worker. If you don't, any one processing worker can take down the rest, putting a halt to all work until the entire instance restarts.

 

 Get the full template starter project here: https://github.com/sedouard/MultiWorkerRole

Using Service Bus with Cloud Storage to Transfer Files Anywhere

One thing I notice is a general lack of awareness of cloud tools that can save your team a lot of time in infrastructure development for very little $$$.

There's all this hype about the cloud being the way of the future, and it's true. But the cloud isn't a binary thing: you can use parts of it, all of it, or use it only for certain applications.

Recently I had a small problem that I needed to solve. I needed to get data from customer and partner servers living in a data center over to our server for diagnostics analysis. The data also needed to be sent securely.

The problem is, there was no really easy way to get their files from their servers to mine. A few options were considered, from an FTP server (which is horribly insecure) to SkyDrive and similar web APIs. These options, although feasible, were clunky and lacked the simplicity I wanted.

Then I tried using Azure Blob Storage and Service Bus. Azure blob storage is a place where you can store any binary file data you want, and the service bus lets you send tiny messages between any computers connected to the internet (usually some serializable info in XML or JSON). These services have excellent .NET client libraries via NuGet (and for other languages, too!) which take a lot of work off your hands.

Fast forward about 4 hours of work, and I have a sample solution to share. It uses 2 console apps:

 

Uploader.exe (of which there may be N instances running around the world)

 

Downloader.exe (in my situation I have just 1 running on my server)

 

I use these apps in batch automation to periodically send data files to my server.

 

https://github.com/sedouard/UploadAnywhere

Create your service bus queue and storage account. Once you do that, you can use the console apps in this solution by changing the app configs (Downloader\App.config and Uploader\App.config):

<!-- Storage account connection to service blob storage -->
<add key="Uploader.Storage.ConnectionString" value="[YourStorageAccountConnectionString]" />

<!-- Service bus info used to notify the downloader of a file upload -->
<add key="Uploader.ServiceBus.ConnectionString" value="[YourQueueConnectionString]" />
<add key="Uploader.UploadQueueName" value="[queuename]" />

After changing the configs to point at your storage account and queue, you can use the uploader as such:

Uploader.exe [containername] [localfilename] [cloudfilename]

It's important to note that your container name must not contain punctuation or capital letters (although the app will auto-lowercase the name for you). The cloud file name should be something like folder1/folder2/myfile.txt.
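For example (hypothetical names), to ship a local log file up into a logs container:

Uploader.exe logs C:\logs\server01.log folder1/folder2/server01.log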

What will happen is that Uploader will first upload the file to a location in cloud storage and then send a JSON message to the download queue (specified by Uploader.UploadQueueName) pointing to that file. Before you run Downloader.exe, check its app.config to confirm the root directory for downloaded files, specified by Downloader.DownloadRootDir:

<add key="Downloader.DownloadRootDir" value="C:\temp\Downloads" />




The cool thing here is that when you run Downloader.exe, it will just hang around and wait for messages from the download queue. The message is a simple .NET object that is serialized via good ol' JSON.NET:


public class FileUploadedMessage
{
    public string FileName { get; set; }
    public DateTime UploadTime { get; set; }
    public string ContainerName { get; set; }
}

 

In Uploader.exe this is sent to the service bus like so:

static void SendFileUploadedMessage(string fileName, string containerName)
{
    FileUploadedMessage message = new FileUploadedMessage()
    {
        FileName = fileName,
        UploadTime = System.DateTime.UtcNow,
        ContainerName = containerName
    };

    // Send the message to the queue to tell the downloader to pull this file.
    // Not async, so this will block, but that's not a big deal here.
    s_QueueClient.Send(new BrokeredMessage(JsonConvert.SerializeObject(message)));
}



The message obviously needs to happen after you place the file in blob storage.
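On the upload side, the sequence boils down to something like this (a sketch, not the repo's exact code; containerName, localFileName and cloudFileName correspond to the command-line arguments above):

// Push the local file into blob storage first...
CloudStorageAccount account = CloudStorageAccount.Parse(
    ConfigurationManager.AppSettings["Uploader.Storage.ConnectionString"]);
CloudBlobContainer container = account.CreateCloudBlobClient()
    .GetContainerReference(containerName.ToLower());
container.CreateIfNotExists();

CloudBlockBlob blob = container.GetBlockBlobReference(cloudFileName);
using (FileStream fs = File.OpenRead(localFileName))
{
    blob.UploadFromStream(fs);
}

// ...and only then send the notification message
SendFileUploadedMessage(cloudFileName, containerName);

Afterwards, the Downloader.exe app can read the message, pull the data off blob storage, and save it to a corresponding file: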

while (true)
{
    var message = s_QueueClient.Receive();

    // No new messages; go back and wait again for more
    if (message == null)
    {
        continue;
    }

    FileUploadedMessage uploadedMessage =
        JsonConvert.DeserializeObject<FileUploadedMessage>(message.GetBody<string>());

    Console.WriteLine("Got uploaded file notification for " + uploadedMessage.FileName);

    // Retrieve a reference to the container the file was uploaded to
    CloudBlobContainer container =
        blobClient.GetContainerReference(uploadedMessage.ContainerName.ToLower());

    // Get the cloud blob which represents the uploaded file
    var blockBlob = container.GetBlockBlobReference(uploadedMessage.FileName);

    // Build the local file path based on the download root folder specified in the .config file
    var localFilePath = Path.Combine(s_DownloadRootFolder, uploadedMessage.ContainerName.ToLower());
    var filePath = uploadedMessage.FileName.Replace("/", "\\");
    localFilePath = Path.Combine(localFilePath, filePath);

    if (!Directory.Exists(Path.GetDirectoryName(localFilePath)))
    {
        Directory.CreateDirectory(Path.GetDirectoryName(localFilePath));
    }

    // Replace the file on disk if the uploaded file maps to the same location in the downloads directory
    if (File.Exists(localFilePath))
    {
        File.Delete(localFilePath);
    }

    // Get the file
    using (Stream cloudStoredBits = blockBlob.OpenRead())
    using (FileStream fs = new FileStream(localFilePath, FileMode.CreateNew, FileAccess.ReadWrite))
    {
        Console.WriteLine("Downloading cloud file [" + uploadedMessage.ContainerName + "]"
            + uploadedMessage.FileName + " to " + localFilePath);
        cloudStoredBits.CopyTo(fs);
    }

    // Delete it from blob storage. Cloud storage isn't cheap :-)
    Console.WriteLine("Deleting cloud file [" + uploadedMessage.ContainerName + "]" + uploadedMessage.FileName);
    blockBlob.Delete(DeleteSnapshotsOption.IncludeSnapshots);
}


And just like that, anywhere in the world that Uploader.exe is running, it can send files directly to your local server directory as if it were part of your local network.

Here is why this is incredibly powerful:

-No virtual machines are used. Virtual machines are inherently expensive and will run you at least $15/mo on Azure for the smallest instance.
-(Almost) no storage is really used. Since redundant cloud storage isn't exactly cheap, if you don't need to keep files up there, remove them.
-Service bus messages are NEARLY free (well, depending on how many messages you send). They run at about $1.00 per 1 million messages.
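To put that in perspective (a hypothetical workload): if 100 uploaders each sent a file every minute, that's 100 × 60 × 24 × 30 ≈ 4.3 million messages a month, or roughly $4 in service bus charges.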

Running this entire infrastructure should be nearly free at even modest amounts of load (assuming you delete files from cloud storage as I am doing).

It turns out that since we delete files as we receive them, we aren't really using much space, and service bus messages are so dirt cheap that you end up with a cloud services bill that is a tiny fraction of the cost of running a server. It is also just as reliable as a server and has the scalability of Azure.

I found this to be incredibly useful for on-premises infrastructure (especially test infrastructure). Imagine you have a lot of on-premises machines and you would like to test your product on them. You can distribute the work amongst those machines using messaging queues and package up the tests in temporary staging storage in the cloud. Tests can be unpacked with messages representing each test package and run on each machine. Implementing such infrastructure in the past would have cost weeks of work just to get right.

You can also imagine how this could be useful for custom telemetry systems.