Windows Azure Feed

October 28, 2013

What exactly is a Service?

With the advancement of cloud technologies more and more companies are getting on the Anything-as-a-Service train, but over the years the term "services" has become so overloaded that people have a hard time understanding what it means. As with any other technology term you hear lately, some clarification may be required to understand what the person in front of you means with "I sell services".

According to Wikipedia's definition of a service (as a system architecture component), it is a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage. In today's cloud environment I would add two more things to the service definition:

  • Those functionalities must be exposed either through interoperable APIs or accessible via browser (i.e. must not be bound to a particular implementation platform)
  • And they must be accessible over the network (i.e. can be accessed remotely)
Although those characteristics should be enough to define what a service is, we complicate the matter by thinking that everything that can be accessed over the network is a service. Well, for decades we've been accessing databases over the network - is it true to say that traditional databases are services? Compared with the definition above the answer is "yes": a database can be used for storing data for different purposes, one can use ODBC to access it from various platforms and languages, and it is accessible over the network. Does that mean that running a single-instance DB on my home computer makes me a Database-as-a-Service (DBaaS) provider? Not really! Here are a few more things that we need to consider when we talk about services:
  • Services are normally exposed to the "external" world. This means that you offer the services outside your organization. Whether this is outside your team, your department or your company is up to you, but you should consider as services the offerings that generate business value for your organization.
  • They are also multi-tenant - the services you offer can be consumed by multiple external entities at the same time without any modifications.
  • They are always up - third-party businesses will depend on your services and you cannot afford to fail them, hence avoiding single points of failure is crucial for the success of services.
  • Last but not least, services must be adopted - if you do not drive adoption through evangelizing, partnerships, good documentation, SDKs etc., the services you offer will not add value for your organization.

Transitioning from a traditional software product organization to a services organization requires a lot of effort and cultural change, and the best way to approach it is to clearly define the basics from the beginning.

August 07, 2013

How IT Pros Are Destroying Their Own Jobs

Not surprisingly, my yesterday's post Why Shutting Down TechNet is Not a Problem for IT Pros sparked quite passionate comments from IT Pros. I have to admit - I was wrong! Shutting down TechNet is a problem for IT Pros! But not because they will lose the ability to install software for free - because the hatred that this event sparked among them hinders their ability to look beyond this immediate issue and consider the change they need to make.

Of the comments, 30% contained rage against the cloud, 30% against developers, and 30% described how much the IT Pros' job is to install, maintain and troubleshoot servers and environments - or, plainly said, how much they love to hug their machines. The remaining 10% were valid concerns that can be summarized as 1.) cloud environments are hard to configure and 2.) Microsoft is acting rude.

Let me first address the 10%!

Cloud computing as a concept is not new, except maybe the name. However, the software automation that cloud environments achieve was not available several years ago. Bringing up a full application stack used to take the IT Pro several hours if not days, while now it is available with the click of a button. Whether the cloud will live up to its promise only time will tell, but one thing is for sure - more and more automation will be added, which will leave less and less need to perform the current admin tasks.

Regarding Microsoft and whether they've been acting rude - this should not surprise anybody. They (still) have power and they decided to exercise it. I will emphasize once again though - I don't think shutting down TechNet is such a big problem! There are other ways to get Microsoft software for evaluation (note: not production usage), and I truly believe that if Microsoft wants to stay relevant, and if they are true to their "devices and services" strategy, they need to make their software affordable for evaluation. If not, as one of the comments said - there is always Linux.

The remaining 90% though are the ones that worry me about the IT Pros. As people who will always be on the liabilities side of the balance sheet, they should pioneer the cloud and not blindly claim its uselessness. Throughout the comments I noticed that certain professionals do not even clearly understand the basic cloud concepts (like public and private) and what scenarios those can enable. There are numerous examples where IT organizations embrace the cloud and not only keep their jobs but become the backbone of the enterprise.

Which brings me to the second point - the hate against developers and, in this capacity, the Lines of Business (LOBs). Everybody who works in a company that has at least one IT guy is aware of the tensions between IT and the "others". And realistically, if the IT Pro needs to serve several masters (developers, users and maybe customers) and it takes weeks if not months to get servers provisioned, neither he nor the "others" will be happy. The solution for the IT Pro is to become more nimble, more agile. Partnering with the business groups and development teams instead of complaining will bring them more success and fame.

For the remaining 30%, the people who want to hug their servers, my only advice is to let go. Unless you get weird satisfaction from installing the same software again and again, you need to move on and start bringing value to the table in the form of fast and flexible solutions.

As I mentioned in my previous article - it is time for IT Pros to change unless they want to become extinct.

August 06, 2013

Why Shutting Down TechNet is Not a Problem for IT Pros

While reading the news yesterday I stumbled upon the following article in the Puget Sound Business Journal - Why is Microsoft alienating its biggest customers? IT pros want TechNet back. Everybody has the right to complain and sign a petition, but more important is to understand the message Microsoft sends. Some think of it as "Microsoft doesn't care about IT Pros anymore", and they may be right; but the message sounds to me more like "Hey, IT Pros - the world is changing!" Although I think Microsoft could be a little more responsive to the complaints, I don't think IT Pros should be so worried. Here is why.

The Problem With The Downloads

While $349 annually for the whole collection of Microsoft software is a very attractive price, I think free software is a better option. Although slow, Microsoft has shown its commitment to change in the last few years. I don't think that Microsoft will ever release Windows (client or server) under an Apache license, but they will continue to provide Beta versions for evaluation for free.
Next, the price Microsoft charges for software will continue to get lower. Just compare how much you paid for a Windows 7 license with how much you paid for a Windows 8 license - quite a significant difference. I expect the same to happen to other products in the consumer category (Office at least).
Last, if you still insist on having unlimited downloads of everything Microsoft, then you can subscribe to MSDN. Yes, it is a few hundred dollars more annually, but you also get more value from it and… wait! you can now claim yourself a developer!

The Problem With The Installations

I will admit that I install software for evaluation quite often. And I have to admit that I hate it! Installing and configuring software is a huge waste of time if your end goal is just to see whether it will work or not. I would rather click a button and have everything I need running in a few minutes without the need to download/install/configure. And this is one of the promises of the cloud - you can get the software you need up and running in minutes, do your testing and move on. Well, it may cost a few bucks to run it for a day, but it is not such a big deal. And, who knows - Microsoft may decide to offer free compute time for evaluation purposes.

The Problem With The IT Pros

The biggest problem, though, is the IT Pros themselves. They still look at their jobs and responsibilities as those of the people who install software. It is time for IT Pros to understand that the day is near when software will install the software, and they need to think about how to position themselves in this environment. The best option for them is to work closely with the Business Groups and provide the IT services needed to support the business, or to transition to a DevOps role that again will provide value for the business.

It is clear that Microsoft understands that the world is changing and that IT as it used to be is nearing its end. It is time for the IT Pros to also understand that just installing software is not a value proposition in the enterprise.

July 08, 2013

Is the Cloud the Right Solution for Your App?

It is surprising to see how every new technology suddenly becomes the thing that will solve every problem. The cloud is no different. Everybody rushes to migrate their applications to the cloud because they think this will magically make them faster, cheaper, agile, competitive and… add any other buzzword that comes to mind. Well, not so fast! You don't need to move every single application to the cloud! Or at least not in its current state.


There are thousands of articles on the Internet that discuss which applications are suitable for the cloud (including several that I have written) and which are not. There are also thousands of articles discussing the different cloud technologies and how they can be used. Unfortunately, what I have seen in the last four years is that people don't pay attention to those writings and choose applications that are not suitable for the cloud or severely misuse the technology.


Take for example the following scenario. The IT team at a very large enterprise creates virtual machines on AWS for each analyst, installs desktop software that uses a local (file-based) database and lets the analysts use those at their discretion. This is certainly enough to have a checkbox next to "Migrated application to the cloud" on your performance review, but is this the solution? First, let's look at what the problems with this "migration" are:

  • Because the VM is used by a single user, its utilization will be no more than 8h a day, or roughly 33%. And this is only if the analyst doesn't take any breaks, which means that the actual utilization of the VM will be even lower. The goal of the cloud is to maximize the utilization of your infrastructure and improve efficiency - and I am not talking from the cloud vendor's point of view.
  • If you make a simple calculation you will find out that throughout the year they will pay Amazon an amount that equals the price of a pretty decent notebook. But wait, the analyst already has a notebook! And she or he uses this notebook to connect to the VM. Hence they are paying double for something that they already have - why not install the software on the analyst's notebook? (I know there are certain reasons why you don't want to do that, but just bear with me.)
  • The user experience will be suboptimal when Amazon decides to move your VM from one node to another. This will result in interruptions, possible loss of data and low satisfaction among the analysts.
  • Although they can create a VM template, managing all those VMs is a tedious task. You need to patch them, update the template, troubleshoot them etc., all of which increases the management costs.
  • The part that bothered me most in this scenario was that the technology for solving this problem has been available for a long time - it is called Terminal Services, and there are a few established vendors that made a business of it long before the term "cloud" was even conceived.
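The first two points above can be put on a back-of-the-envelope basis. The hourly rate below is a hypothetical on-demand price for a mid-size VM, not an actual AWS quote:

```python
HOURS_PER_DAY = 24
WORK_HOURS = 8          # the analyst's working day
HOURLY_RATE = 0.12      # hypothetical on-demand rate in USD per hour

# Utilization if the VM runs 24/7 but is used only during working hours
utilization = WORK_HOURS / HOURS_PER_DAY
print(f"Utilization: {utilization:.0%}")      # ~33%, before breaks

# Annual cost of keeping the VM running around the clock
annual_cost = HOURLY_RATE * HOURS_PER_DAY * 365
print(f"Annual cost: ${annual_cost:,.2f}")    # ≈ $1,051 - a decent notebook
```

Even at this modest rate the yearly bill lands in notebook territory, while two thirds of the paid-for hours sit idle.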


Now, if you really want to make an impact in your organization you should approach the problem differently. The questions that you need to ask are:

  • How can I increase utilization?
  • How can I decrease spending?
  • How can I allow centralized management?


Implementing Terminal Services in your organization may be the simplest answer. As mentioned, the technology has been available for a long time, it is proven and it is easy to implement. If you want to show that you are up to date with the latest developments and mention the cloud on your resume, look at VDI (Virtual Desktop Infrastructure). Last but not least, if you want to prepare your application for the future, consider reworking it into a modern web application that uses a central database, services and a multi-tenant UI.


As it seems, the cloud can be a solution for your app, but you will need to put in the effort to find the right way.

June 24, 2013

Is Your Cloud Ready For The Enterprise?

Reading my newsfeed this morning I noticed several articles talking about the cloud and the enterprise. There is no doubt that the area is heating up with more and more acquisitions (IBM buys SoftLayer), investments (GE invests $105M in Pivotal) and fights over big deals (IBM vs. AWS for the CIA cloud), but the question that comes up is: "Are the cloud platforms ready for the enterprise?"


Being involved with numerous cloud projects, I see five areas that enterprises emphasize when they evaluate their options. Those are not too different from the criteria they use for any other software offering, but here is the cloud run-down.


Application Portfolio

An enterprise application portfolio consists of a mix of applications, some of which are a decade or more old. An enterprise-ready cloud platform will offer support for legacy applications as well as new, cloud-architected ones, and should provide a smooth migration path for those. It is not surprising that IaaS, although not the ultimate solution for enterprises, gained such traction recently - it is the stepping stone to the more advanced solutions like PaaS and SaaS but offers a less disruptive migration path than the other ones.


Integration

Similar to the application portfolio, the list of internal systems within the enterprise can be quite long. Driving the business is the highest priority, and integration with existing business systems like CRM, ERP, HR etc. and the applications for those can break or seal the deal with a cloud provider.

IT teams look for easy integration with their existing infrastructure automation and monitoring tools while on the development side the cloud platform should provide easy integration with IDEs, build, test and deployment tools that are utilized in the enterprise.


Security

This one is the most discussed in the media. Privacy and security concerns are widespread, and a single mistake can cost a cloud provider a lot of money. Integration with existing user management systems for authentication and/or authorization, single sign-on (SSO), encryption and data protection are must-haves for enterprises - especially for the ones in the Financial, Insurance and Healthcare verticals.


Pricing and Licensing

You may have heard about this before, but not all enterprises are thrilled to hear that with the cloud they remove their CapEx and convert it to OpEx. My favorite example here is the utility companies, and you can read more in my post Business Strategy for Enterprise Cloud Startups.

An additional consideration is Enterprise License Agreements (ELAs), used for a long time to buy bulk licenses for packaged software. A cloud provider that offers easy roll-up of their services into existing ELAs, or the so-called Bring Your Own License (BYOL), will have a certain advantage over ones that do not have such options.

Business Advantage

Last but not least, enterprises are looking for platforms that will allow them to build for the next 10 or more years. If a cloud vendor is not able to prove its value for a longer period of time, its place in the enterprise will be taken by one that can. The value of the cloud is not in satisfying the needs of one of the internal teams (Business, IT or Development) but of all three together.


Regardless of which side of the table you sit on (the vendor or the enterprise buyer), you should consider all five areas in your cloud strategy and make sure that these are well covered when the contract is signed.

June 17, 2013

Open Source in the Cloud - How Much Should You Care?

In his opening keynote for Red Hat Summit, Jim Whitehurst, the CEO of Red Hat, asked the audience: "Name an innovation that isn't happening in Open Source - other than Azure!" I can certainly add the iPhone and AWS to the mix, but let me stick to the cloud topic with the following question: "How much does Open Source matter in the cloud?"


Let's first elaborate on two misconceptions about Open Source.


Open Source is Free

Not really! In the cloud it doesn't matter whether you are running on an Open Source platform or not - it is NOT free, because you pay for the service. And for a long time Open Source projects have been funded through the service premiums that you pay. I would argue that Open Source vendors have mastered the way to profit from Open Source services and are far ahead of the proprietary vendors. The whole catch here is that you pay nothing for the software and incur no capital expenditures (CapEx), but you pay for the services (i.e. operational expenditures, or OpEx) - remember, this is also the cloud model. The bottom line is that you may be better off with a proprietary vendor than an Open Source one, because the former have yet to master that business model.


Open Source Means No-Lock-In

Not so sure about that either! Do you remember J2EE? It wasn't long ago that Sun created the specification and said that there would be portability between vendors. Those of you who have tried to migrate a J2EE application from JBoss to WebLogic to WebSphere will agree that the migration costs weren't negligible. It is the same with Open Source clouds - it doesn't matter that HP and Rackspace both use the Open Source OpenStack, you still need to plan for your migration costs.


I am far from saying that Open Source is not important. Quite the opposite - I am a big Open Source fan, and the biggest example I can give is… well, Azure. Microsoft also understands that the source is not important anymore, hence they open-sourced their SDKs (and continue to add more). It is time to forget those technology wars and really start thinking about the goals we have and the experience we provide for our customers. When you choose your cloud providers you should not ask the question: "Are they Open Source or proprietary?" Better questions to ask are:

  • Does the vendor provide functionality that will save me money?
  • Can they support my business for the next 5 or 10 years?
  • Do they provide the services and support that I need?
  • Are they agile enough to add the features that I need?
  • Do they have the feedback channel to the core development team that I can use to submit requests?
  • Do they have the vision to predict the future in the cloud?

All those are much more important questions for your technology strategy and your business than whether their cloud is Open Source or not.

March 20, 2013

Migrating Legacy Applications to the Cloud

With everybody jumping on the cloud computing bandwagon lately, developers and architects need to spend extra time analyzing which applications can become good candidates for migration. It is wrong to believe that every legacy application can be easily migrated from traditional on-premise infrastructure to any cloud computing environment. Therefore such migration efforts should be approached carefully and systematically.

Let's look at a couple of issues that you may face when trying to migrate legacy applications to the cloud.

Client-Server Applications

Client-server applications are characterized by tight coupling between the business logic and the data tier. Most of the time the business logic is implemented as stored procedures in the database, and pulling it out can be a substantial effort. In addition, such applications establish a sticky session between the client and the server, which violates common cloud architecture patterns and complicates the migration process.

The obvious approach for migrating client-server applications to the cloud is to gradually abstract the business logic into a service layer and deploy the latter to the cloud. The cleaned-up data tier can still be hosted on the current infrastructure until the time comes to either migrate the data or retire it. At a high level you should follow these steps:

  • Identify the business services that are exposed to the clients
  • Implement those services as a separate business layer
  • Deploy the new business layer on a cloud enabled infrastructure (either IaaS or PaaS)
  • Implement a thin client layer on top of the services (in certain cases you may be able to modify the existing clients to connect to the services instead the data tier)
  • Roll-out the new client among your users
  • Retire the business logic in the data tier
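To illustrate the second step, here is a hypothetical sketch of a business rule extracted from a stored procedure into a service-layer function; the discount rule and all names are invented for illustration:

```python
# Hypothetical example: a discount rule that used to live in a stored
# procedure, re-implemented as a service-layer function. The data tier
# is reduced to plain reads/writes, and clients call this service
# instead of hitting the database directly.

def order_discount(customer_tier: str, order_total: float) -> float:
    """Business rule formerly embedded in SQL, now a reusable service."""
    rates = {"gold": 0.10, "silver": 0.05}
    return round(order_total * rates.get(customer_tier, 0.0), 2)

# A thin client layer would call this over HTTP; here we invoke it
# directly just to show the separation of concerns.
print(order_discount("gold", 200.0))    # 20.0
print(order_discount("bronze", 200.0))  # 0.0
```

Once every client goes through functions like this one, the stored procedure can be retired without touching the clients again.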

This approach provides a smooth migration because it postpones the data migration, a highly critical business component, to a later stage; in the meantime the organization gains important knowledge and discovers potential issues with the cloud technologies.

Scheduled Tasks

Scheduled tasks or batch jobs are another legacy application pattern that can introduce some challenges when migrating to the cloud. The premise of such applications is that they are triggered either at certain intervals or by a new batch of data that gets delivered. The majority of the time the latter approach involves transfers of files between machines. Two things at the core of such applications contradict modern cloud architectural patterns:

  • The reliance on always-up machines that will trigger the execution at certain intervals
  • The reliance on always-available file system used for file exchange

The functionality that such applications provide is easily achieved through the queue-centric workflow pattern as described by Bill Wilder in his book Cloud Architecture Patterns. However, redesigning those legacy applications to use message queues can be a substantial implementation effort, hence you should approach the migration in phases. For jobs that rely on file transfers you can use these steps:

  • Change the jobs to use cloud storage instead of local file systems
  • Add functionality at the delivery side to drop a message in the queue in addition to dropping the file
  • Remove the polling functionality in the processing job and instead use the message in the queue as a triggering mechanism
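The file-delivery steps above can be sketched in a few lines, assuming Python, with a dict standing in for cloud blob storage and queue.Queue standing in for the cloud message queue; all names are illustrative:

```python
import queue

# Stand-ins for the cloud services: a dict for blob storage and a
# queue.Queue for the cloud message queue.
blob_storage = {}
job_queue = queue.Queue()

def deliver(name: str, data: bytes) -> None:
    """Delivery side: upload the file, then drop a message in the queue."""
    blob_storage[name] = data
    job_queue.put(name)          # the message replaces file-system polling

def process_next() -> str:
    """Processing job: triggered by a queue message instead of a schedule."""
    name = job_queue.get()
    data = blob_storage.pop(name)
    return f"processed {name} ({len(data)} bytes)"

deliver("batch-001.csv", b"a,b,c\n1,2,3\n")
print(process_next())   # processed batch-001.csv (12 bytes)
```

In a real migration the queue also buys you elasticity: several workers can pull from the same queue when the batch volume grows.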

For the scheduled tasks you need to change the implementation to use messages in the queue instead of time intervals to trigger the tasks.

You can achieve additional benefits if you add Map-Reduce as part of your modern application design. 

Scale Up Applications

Last but not least is the type of application that relies on additional local resources in order to handle increased loads. Such resources can be CPU speed, memory or disk storage. Unfortunately, such applications are hard to migrate to the cloud unless they are redesigned to use horizontal instead of vertical scaling. Most of the time such challenges arise at the data tier of the application and can be solved through data sharding.

The process for migration involves:

  • Analyzing the data and potential de-normalization
  • Identifying the shard key
  • Splitting the data amongst the shards
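The shard-key routing in the steps above can be sketched as follows; the customer_id key and the shard count are assumptions made for the example:

```python
# A minimal sharding sketch: route each row to a shard by applying a
# deterministic function to the shard key.
NUM_SHARDS = 4

def shard_for(customer_id: int) -> int:
    """Map a shard key to one of NUM_SHARDS horizontal partitions."""
    return customer_id % NUM_SHARDS

# Split some sample rows among the shards
shards = {i: [] for i in range(NUM_SHARDS)}
for row in [{"customer_id": 1}, {"customer_id": 5}, {"customer_id": 6}]:
    shards[shard_for(row["customer_id"])].append(row)

print({k: len(v) for k, v in shards.items()})  # {0: 0, 1: 2, 2: 1, 3: 0}
```

Picking a key with an even distribution matters: a skewed key (say, country code) would leave one shard carrying most of the load.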

As a bottom line, the gains for the organization from the above-mentioned migration approaches are:

  • Improved (and more cloud-ready) application architecture
  • Enabled economies of scale at the different tiers of the application

The biggest benefit, however, is the cloud computing knowledge that the organization gains throughout the process.

February 26, 2013

Evaluating Cloud Computing Uptime SLAs

Last week's Windows Azure Storage outage made me think about how many of us evaluate the vendor's Service Level Agreement (SLA) before deciding to deploy workloads in the cloud. I bet many think about it only when it is too late.

Let's take the Windows Azure SLA and see how we as consumers of the cloud services are protected in case of downtime. Before anything else, though, I would like to point out that it is in the nature of any service (public or private) to experience an outage once in a while - think about the power outages that we hear about or live through every winter. It is important to understand that this will happen, and as users of cloud services we need to be prepared for it. In this post I will use Windows Azure as an example, not because their services are better or worse than the other cloud vendors' but to illustrate how the SLAs impact us and how they differ from vendor to vendor.

Each SLA (or at least the ones that the bigger cloud vendors offer) contains a few main sections:

  • Definitions - defining the terms used in the document
  • Claims - describing how and under what terms one can submit a claim for incidents as well as how much you will be credited
  • Exclusions - describing in what cases the vendor is not liable for the outage
  • The actual SLAs - those can be two types:
    • Guaranteed performance characteristics of the service
    • Uptime for the service

Looking at the Windows Azure SLAs web page, the first thing you will notice is that there is a different SLA for each service. You don't need to read all of them unless you utilize all of the services the vendor offers; the main point is that you need to read the SLAs for the services you use. If, for example, you use Windows Azure Storage and Windows Azure Compute, you will notice that the uptime guarantees differ by 0.05% (Compute has an uptime guarantee of 99.95% while Storage has an uptime guarantee of 99.90%). Although this number seems negligible at first sight, using an SLA calculator you will notice that the expected downtime for Storage is twice as much as the expected downtime for Compute. It is obvious that the closer the uptime is to 100%, the better the service is.

The next thing that you need to keep in mind is the timeframe over which the uptime is calculated. In the case of Windows Azure the uptime is guaranteed on a monthly basis (for both Storage and Compute). In comparison, Amazon's EC2 has an annual uptime guarantee. Monthly SLA guarantees are preferable because you avoid the case where the service experiences a severe outage in a particular month and stays up the rest of the year. Just to illustrate the last point, imagine that EC2 experiences an outage of 3h in a particular month and stays up for the next 11 months. This outage is within the 99.95% guarantee, or 4:22:48 hours of acceptable downtime per year, and you will not be eligible for credit for it. On the other hand, if the SLA guarantee is on a monthly basis, you will be eligible for the maximum credit, because the outage severely exceeds the roughly 21 minutes of acceptable downtime per month.
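The uptime arithmetic is simple enough to sketch; a 30-day month and a 365-day year are simplifying assumptions here:

```python
# Acceptable downtime implied by an uptime guarantee.
def allowed_downtime_hours(uptime_pct: float, period_hours: float) -> float:
    return (1 - uptime_pct / 100) * period_hours

monthly = allowed_downtime_hours(99.95, 30 * 24)   # Compute-style monthly SLA
annual = allowed_downtime_hours(99.95, 365 * 24)   # EC2-style annual SLA

print(f"99.95% monthly: {monthly * 60:.1f} minutes")  # ~21.6 minutes
print(f"99.95% annual:  {annual:.2f} hours")          # ~4.38 hours
print(f"99.90% monthly: "
      f"{allowed_downtime_hours(99.9, 30 * 24) * 60:.1f} minutes")  # ~43.2
```

Note how the seemingly tiny 0.05% gap between Compute and Storage doubles the acceptable monthly downtime.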

One note about the acceptable downtime. In reality, hardware in cloud datacenters fails all the time, which may result in downtime for your particular service but will not impact other services or workloads. Such outages are normally covered by the exclusion clause of the SLA and are your own responsibility; you should follow the standard architectural practices for cloud applications and always make your services redundant in order to avoid them. The acceptable downtime metric is calculated for outages that impact a vast number of services or customers. Surprisingly though, nowhere in the SLAs is it mentioned how many customers need to be impacted in order for the vendor to report the outage. It may happen that a rack of servers in the datacenter goes down and a few tens of customers are impacted for some amount of time. If you are one of those, do not expect to see an official statement from the cloud vendor about the outage. As a rule of thumb, if the outage doesn't show up in the news, you may have a hard time proving that you deserve credit.

The last thing to keep in mind when evaluating SLAs from big cloud providers is the Beta and trial services. It is simple - there are no SLAs for services released as Beta. You are free to use them at your own risk, but don't expect any guarantees for uptime from the vendor.

Where the so-called secondary cloud providers are concerned, you need to be much more careful. Those providers (and there are a lot of them) build their services on top of the bigger cloud vendors and thus are very dependent on the uptime of the big guys. Hence they don't publish standard SLAs but negotiate the contracts on a customer-by-customer basis. Most of the time this is based on the size of the business you create for them, and you can count on good terms if you are a big customer. Of course, they put a lot of effort into helping you design your application for redundancy and avoid the risk of invoking the SLA because of a primary vendor outage. In the opposite case, where you are a single developer, you may end up without any guarantees for uptime from smaller cloud vendors.

January 21, 2013

Essential Cloud Computing Characteristics

If you ask five different experts you will get maybe five different opinions on what cloud computing is. And all five may be correct. The best definition of cloud computing that I have ever found is the National Institute of Standards and Technology Definition of Cloud Computing. According to NIST the cloud model is composed of five essential characteristics, three service models, and four deployment models. In this post I will look at the essential characteristics only and compare them to the traditional computing models; in future posts I will look at the service and deployment models.

Because computing always implies resources (CPU, memory, storage, networking etc.), the premise of cloud is an improved way to provision, access and manage those resources. Let's look at each essential characteristic of the cloud:

On-Demand Self-Service

Essentially what this means is that you (as a consumer of the resources) can provision the resources at any time you want to, and you can do this without assistance from the resource provider.

Here is an example. In the old days, if your application needed additional computing power to support growing load, the process you normally had to go through was briefly as follows: call the hardware vendor and order new machines; once the hardware is received, install the Operating System, connect the machine to the network, configure any firewall rules etc.; next, install your application and add the machine to the pool of machines that already handle the load for your application. This is a very simplistic view of the process, but it still requires you to interact with many internal and external teams in order to complete it - those can be, but are not limited to, hardware vendors, IT administrators, network administrators, database administrators, operations etc. As a result it can take weeks or even months to get the hardware ready to use.

Thanks to cloud computing, though, you can reduce this process to minutes. All this lengthy process comes down to the click of a button or a call to the provider's API, and you can have the additional resources available within minutes, without anybody's assistance. Why is this important?

Because in the past the process involved many steps and usually took months, application owners often used to over-provision the environments that host their applications. Of course this resulted in huge capital expenditures at the beginning of the project, resource underutilization throughout the project, and huge losses if the project didn't succeed. With cloud computing, though, you are in control and you can provision only enough resources to support your current load.
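To make the contrast concrete, here is a toy sketch of what self-service provisioning looks like from the consumer's side. The class and method names are hypothetical and stand in for any provider's API - the point is simply that a request returns a running resource immediately, with no human in the loop:

```python
class ToyCloudProvider:
    """Hypothetical provider client; illustrative only, not a real API."""

    def __init__(self):
        self._next_id = 0
        self.instances = {}

    def provision(self, size="small"):
        # Self-service: the consumer gets a resource back immediately,
        # with no assistance from the provider's staff.
        self._next_id += 1
        instance_id = "vm-%d" % self._next_id
        self.instances[instance_id] = {"size": size, "state": "running"}
        return instance_id

    def release(self, instance_id):
        # Returning the resource is just as immediate.
        del self.instances[instance_id]


provider = ToyCloudProvider()
vm = provider.provision("medium")          # seconds, not weeks
print(vm, provider.instances[vm]["state"])
```

Compare that single call with the procurement cycle described above - the weeks of coordination collapse into one API request.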

Broad Network Access

Well, this is not something new - we've had the Internet for more than 20 years already, and the cloud did not invent it. And although NIST notes that the cloud promotes the use of heterogeneous clients (like smartphones, tablets etc.), I do think this would be possible even without the cloud. However, there is one important thing that in my opinion the cloud enabled that would be very hard to do with the traditional model: the cloud made it easier to bring your application closer to your users around the world. "What is the difference?", you will ask. "Isn't that the same as the Internet or the Web?" Yes and no. Thanks to the Internet you were able to make your application available to users around the world, but there were significant differences in the user experience in different parts of the world. Let's say that your company is based in California and you had a very popular application with millions of users in the US. Because you are based in California, all the servers that host your application are either in your basement or in a nearby datacenter so that you can easily go and fix any hardware issues that may occur. Now, think about the experience your users will get across the country! People on the East Coast will see slower response times and possibly more errors than people on the West Coast. If you wanted to expand globally, these problems would only be amplified. The way to solve this was to deploy servers on the East Coast and in any other part of the world that you wanted to expand to.

With cloud computing, though, you can just provision new resources in the region you want to expand to, deploy your application and start serving your users.

It again comes down to the cost you incur by deploying new datacenters around the world versus just using resources on demand and releasing them if you are not successful. Because the cloud is broadly accessible, you can rely on having the ability to provision resources in different parts of the world.

Resource Pooling

One can argue whether resource pooling is good or bad. The part that raises the most concern among users is the colocation of applications on the same hardware or on the same virtual machine. Very often you hear that this compromises security, can impact your application's performance and can even bring it down. Those have been real concerns in the past, but with the advancements in virtualization technology and the latest application runtimes you can consider them outdated. That doesn't mean you should not think about security and performance when you design your application.

The good side of resource pooling is that it enables cloud providers to achieve higher application density on the same hardware and much higher resource utilization (sometimes going up to 75%-80%, compared to 10%-12% in the traditional approach). As a result, the price for resource usage continues to fall. Another benefit of resource pooling is that resources can easily be shifted to where the demand is, without the customer needing to know where those resources come from or where they are located. Once again, as a customer you can request from the pool as many resources as you need at a certain time; once you are done utilizing them you return them to the pool so that somebody else can use them. Because you as a customer are not aware of the size of the resource pool, your perception is that the resources are unlimited. In contrast, in the traditional approach application owners have always been constrained by the resources available on a limited number of machines (i.e. the ones they have ordered and installed in their own datacenter).
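A quick back-of-the-envelope calculation, using the illustrative utilization figures quoted above (not measured data), shows why pooling matters: to deliver the same amount of useful work, the poorly utilized dedicated model has to keep far more capacity powered on.

```python
# Illustrative figures only, taken from the ranges quoted above.
dedicated_utilization = 0.12   # ~10-12% typical for dedicated servers
pooled_utilization = 0.75      # ~75-80% achievable with pooling

useful_core_hours = 900        # the actual work the applications need done

# Capacity (core-hours) each model must keep powered on to deliver that work:
dedicated_capacity = useful_core_hours / dedicated_utilization
pooled_capacity = useful_core_hours / pooled_utilization

print(dedicated_capacity, pooled_capacity)  # 7500.0 vs 1200.0
```

Roughly six times less powered-on capacity for the same workload - which is where the falling prices come from.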

Rapid Elasticity

Elasticity is tightly related to the pooling of resources and allows you to easily expand and contract the amount of resources your application uses. The best part is that this expansion and contraction can be automated, saving you money when your application is under light load and doesn't need many resources.

To achieve this elasticity in the traditional case, the process would look something like this: when the load on your application increases, you power up more machines and add them to the pool of servers running your application; when the load decreases, you remove servers from the pool and power them off. Of course, we all know that nobody does this, because it is much more expensive to constantly add and remove machines from the pool, so everybody runs the maximum number of machines all the time with very low utilization. And we all know that if the resource planning is not done right and the load on the application becomes so heavy that the maximum number of machines cannot handle it, the result is an increase in errors, dropped requests and unhappy customers.

In the cloud scenario, where you can add and remove resources within minutes, you don't need to spend a great deal of time doing capacity planning. You can start very small, monitor the usage of your application and add more resources as you grow.
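The scale-out/scale-in decision itself can be automated with a very small sizing rule. The sketch below is a deliberate simplification (real autoscalers add cooldown periods, scaling limits and smoothing), and the capacity of 500 requests per instance is an assumed figure for illustration:

```python
import math

def desired_instances(current_load, capacity_per_instance, min_instances=1):
    """Return how many instances are needed for the current load."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, needed)

# Simulated load over a day: it ramps up, peaks, then falls off.
load_samples = [50, 400, 1200, 2500, 1200, 300]
fleet_sizes = [desired_instances(load, capacity_per_instance=500)
               for load in load_samples]
print(fleet_sizes)  # [1, 1, 3, 5, 3, 1] - grows with demand, then contracts
```

Because provisioning takes minutes, a loop like this can run continuously and keep the fleet sized to the actual load instead of to the worst-case forecast.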

Measured Service

In order to make money, cloud providers need the ability to measure resource usage. Because in most cases cloud monetization is based on the pay-per-use model, they need to be able to give customers a breakdown of how much of which resources they have used. As mentioned in the NIST definition, this provides transparency for both the provider and the consumer of the service.

The ability to measure resource usage is important to you, the consumer of the service, in several different ways. First, based on historical data you can budget for the future growth of your application. It also allows you to better budget new projects that deliver similar applications. And it is important for application architects and developers, who can optimize their applications for lower resource utilization (in the end everything comes down to dollars on the monthly bill).

On the other side, it helps cloud providers better optimize their datacenter resources and achieve higher density per unit of hardware. It also helps them with capacity planning, so that they don't end up at 100% utilization with no excess capacity to cover unexpected consumer growth.

Compare this to the traditional approach, where you never knew how much of your compute capacity was utilized, how much of your network capacity was used, or how much of your storage was occupied. In rare cases companies were able to collect such statistics, but those statistics were almost never used to provide financial benefit for the enterprise.
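Conceptually, a metered pay-per-use bill is just usage records multiplied by published rates and itemized per meter. The rates and meter names below are made up for this sketch; real providers publish their own price sheets:

```python
# Hypothetical rates (dollars per unit); meter names are invented for this sketch.
rates = {"compute_hours": 0.12, "storage_gb_month": 0.07, "egress_gb": 0.09}

# One month of metered usage for a small application.
usage = {"compute_hours": 720, "storage_gb_month": 50, "egress_gb": 120}

# The per-meter breakdown is what gives the consumer transparency.
line_items = {meter: round(usage[meter] * rates[meter], 2) for meter in usage}
total = round(sum(line_items.values()), 2)
print(line_items, total)  # itemized charges plus the monthly total
```

The same breakdown serves both sides: the consumer sees exactly what drove the bill, and the provider has the data it needs for capacity planning.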

With those five essential characteristics in mind, you should be able to recognize the "true" cloud offerings available on the market. In the next posts I will go over the service and deployment models for cloud computing.

September 16, 2012

What is the Difference Between Apprenda and Windows Azure?

Since I started at Apprenda, one of the most common questions I hear is: "What is the difference between Apprenda and Windows Azure?". Let me take a quick stab at what both platforms offer and how you can use their features in a complementary way.

First, let's look from a platform-as-a-service (or PaaS) definition point of view. As you may already know, both Apprenda and Windows Azure offer PaaS functionality, but because PaaS is a really broad term that is used widely in the industry, we need to make sure we use the same criteria when we compare two offerings. Hence we will stick to Gartner's Reference Model for PaaS, which allows us to make an apples-to-apples comparison between the services. According to that definition, PaaS is a "category of cloud services that deliver functionality provided by platform, communication and integration middleware". Gartner also lists the typical services offered by a PaaS, so let's see how Apprenda and Windows Azure compare on those:

  • Application Servers
    Both Apprenda and Windows Azure leverage the functionality of Microsoft's .NET Framework and IIS server.
In Apprenda's case, the IIS server is used to host the front-end tier of your applications, while Apprenda's proprietary WCF container is used to host any services.
In comparison, when you develop applications for Windows Azure you use a Web Role to host your application's front-end and a Worker Role to host your services. If you use Windows Azure web sites, then all your front-end and business logic is hosted in IIS in a multi-tenant manner.
  • Integration Middleware
While Windows Azure offers Service Bus in the cloud, at this point in time Apprenda does not have its own Service Bus implementation. However, Apprenda applications can easily integrate with any existing Service Bus implementation, on premises or in the cloud.
  • Portals and other user experience enablers
    Both Apprenda and Windows Azure offer rich user experience.
Apprenda has the System Operations Center portal, which is targeted at platform owners, the Developer Portal, which is the main application management tool for development teams, and the User Portal, where end-users (or tenants) can subscribe to applications provided by development teams. Apprenda also has rich documentation and an active support community. In addition, when applications are deployed in multi-tenant mode on Apprenda, you are allowed to completely customize the login page, which allows for white-labeling support.
Windows Azure, on the other side, offers the Management Portal, which is targeted at the developers who use the platform to host their applications. Unlike with Apprenda, though, and because Windows Azure is a public offering (I will come back to this later on), the management of the platform itself is done by Microsoft and platform management functionality is not exposed to the public. Windows Azure also offers a Marketplace where developers can publish their applications and services and end-users can subscribe to them. Extensive documentation for Windows Azure is available on its main web site.
  • Database Management Services (DBMS)
    Both platforms offer rich database management functionality.
Apprenda leverages SQL Server to offer relational data storage functionality for applications and enables a lot of features on the data tier, like resource throttling, data sharding and multi-tenancy. Apprenda is also working to deliver easy integration with popular NoSQL databases, on a provider basis, in its next version. This will allow your applications to leverage the functionality of MongoDB, Cassandra and others, as well as improved platform support like automatic data sharding.
Windows Azure SQL Database is the RDBMS analogue on the Azure side. Unlike Apprenda, though, Windows Azure SQL Database limits databases to certain pre-defined sizes and requires you to handle data sharding in your application. Windows Azure Storage offers proprietary NoSQL-like functionality for applications that require large sets of semi-structured data.
  • Business Process Management Technologies
At this point in time, neither Apprenda nor Windows Azure offers built-in business process management technologies. However, applications on both platforms can leverage BizTalk Server and Windows Workflow Foundation for business process management.
  • Application Lifecycle Management Tools
    Both Apprenda and Windows Azure offer distinct features that help you through your application lifecycle and allow multiple versions of your application to be hosted on the platform.
    Applications deployed on Apprenda go through the following phases:
    • Definition - this phase is used during the initial development phase of the application or a version of the application
    • Sandbox - this phase is used during functional, stress or performance testing of the application or application version
    • Production - this phase is used for live applications
    • Archived - this phase is used for older application versions

    In addition Apprenda stores the binaries for each application version in the repository so that developers can easily keep track of the evolution of the application.
If you use Windows Azure cloud services, the support for application lifecycle includes the two environments you can choose from (Staging and Production) and a convenient way to switch between them (a.k.a. VIP Swap), as well as the hosted version of TFS that you can use to version, build and deploy your application. If you use Windows Azure web sites, you also have the opportunity to use Git to push your application to the cloud. Keep in mind that at the time of this writing the TFS service is in Preview mode (and hence still free) and in the future it will be offered as a paid service in the cloud.

  • Application Governance Tools (including SOA, interface governance and registry/repository)
At the moment neither of the platforms offers a central repository of services, but as mentioned above they integrate easily with BizTalk.
    Using intelligent load-balancing both platforms ensure the DNS entries for the service endpoints are kept consistent so you don't need to reconfigure your applications if any of the servers fail.
  • Messaging and Event Processing Tools
Apprenda and Windows Azure differ significantly in their messaging and event processing tools.
    Apprenda offers event processing capabilities in a publish-subscribe mode. Publisher applications can send events either at application or platform level and subscriber applications can consume those. Apprenda ensures that the event is visible only at the required level (application only or cross platform) and it doesn't require any additional configuration.
Windows Azure offers several ways to do messaging. Service Bus Queues offer first-in-first-out queueing functionality and guarantee that a message will be delivered. Service Bus Topics offer publish-subscribe messaging functionality. Windows Azure Queues is another Windows Azure service that offers similar capabilities, where you can send a message to a queue and any application that has access to the queue can process it. Whether you use Service Bus or Windows Azure Queues, though, you as a developer are solely responsible for ensuring the proper access restrictions on your queues in order to avoid unauthorized access. Keep in mind that all Windows Azure services are publicly accessible and the burden of securing them lies on you.
  • Business Intelligence Tools and Business Activity Monitoring Tools
At this point in time, neither platform has built-in business intelligence or activity monitoring functionality.
  • Integrated Application Development and Lifecycle Tools
    Because both platforms target .NET developers you can assume good integration with Visual Studio.
    Windows Azure has a rich integration with Visual Studio that allows you to choose from different project templates, build Windows Azure deployment archives, deploy and monitor the deployment progress from within Visual Studio.
Apprenda also offers Visual Studio project templates for applications using different Apprenda services, as well as an external tool that allows you to build a deployment archive by pointing it at a Visual Studio solution file. Unlike the Windows Azure package format, though, Apprenda's deployment package is an open ZIP format with a very simple folder structure, which allows you to use any ZIP tool to build the package. In the next version of the Apprenda SDK you will see even better Visual Studio integration that comes to parity with what Windows Azure has to offer.
  • Integrated self-service management tools
    As mentioned above both platforms offer self-service web portals for developers. Apprenda also offers similar portals for platform owners and users as well.
On the command-line front, Apprenda offers the Apprenda Command Shell (ACS), which allows developers to script their build, packaging and application deployment.
Similarly, the Windows Azure SDK offers a set of PowerShell scripts that connect to the Windows Azure management APIs and allow you to deploy, update, and scale your application out and back.
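To make the queue-versus-topic distinction above concrete, here is a toy in-memory model of the two messaging patterns (not the Service Bus API itself): a queue delivers each message to exactly one receiver, while a topic fans every message out to all of its subscriptions.

```python
from collections import deque, defaultdict

class Queue:
    """FIFO queue: each message is consumed by exactly one receiver."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        return self._messages.popleft() if self._messages else None

class Topic:
    """Publish-subscribe: every subscription gets its own copy of each message."""
    def __init__(self):
        self._subscriptions = defaultdict(Queue)
    def subscribe(self, name):
        return self._subscriptions[name]
    def publish(self, msg):
        for subscription in self._subscriptions.values():
            subscription.send(msg)

orders = Queue()
orders.send("order-1")
orders.send("order-2")
first = orders.receive()        # "order-1" - FIFO ordering

events = Topic()
billing = events.subscribe("billing")
audit = events.subscribe("audit")
events.publish("user-created")  # both subscriptions receive a copy
```

The choice between the two comes down to fan-out: competing consumers draining one queue versus independent subscribers each seeing the full event stream.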

Now that we have looked thoroughly through the above bullet points from Gartner's Reference Model for PaaS, you may think that there are a lot of similarities between the two platforms and wonder why you should use one versus the other. Hence it is time to look at the differences in more detail.

  • Public vs. Private
One of the biggest differences between Windows Azure and Apprenda is that they target complementary areas of the cloud computing space.
As you may already know, Windows Azure is a public offering hosted by Microsoft, and so far there is no offering from Microsoft that enables Azure-like functionality in your own datacenter (DC).
Apprenda, on the other side, is a software layer that you can install on any Windows infrastructure to turn that infrastructure into a platform as a service. Although Apprenda mainly targets private datacenters, nothing prevents you from installing it on a public infrastructure like Windows Azure IaaS, Amazon AWS, Rackspace etc. Thus you can use Apprenda to enable PaaS functionality similar to Windows Azure's, either in your own datacenter or on a competing public infrastructure.
  • Shared Hardware vs Shared Container
    One other big difference between Windows Azure and Apprenda is how the platform resources are managed.
While Windows Azure spins up a new virtual machine (VM) for each application you deploy (thus enabling you to share the hardware among different applications), Apprenda abstracts the underlying infrastructure even further and presents it as one unified pool of resources for all applications. Thus, in the Apprenda case you are not limited to a one-to-one mapping between application and VM, and you can deploy multiple applications on the same VM or even on bare metal. The shared container approach that Apprenda uses allows for much better resource utilization, higher application density and true multi-tenancy than the app-to-VM one.
One note I need to add here is that with the introduction of Windows Azure web sites you can argue that Windows Azure also uses the shared container approach to increase application density. However, Windows Azure web sites is strictly constrained to applications that run in IIS, while Apprenda enables this functionality throughout all application tiers, including services and data.
  • Legacy vs. New Applications
One of the biggest complaints in the early days of Windows Azure was the limited support for legacy applications and the ability to migrate them to the cloud. Ever since, Microsoft has been adding functionality to make the migration of such applications easier. Things improved significantly with the introduction of Windows Azure Infrastructure-as-a-Service (IaaS), but on the PaaS front Azure is still behind, as you need to modify your application code in order to run it in an Azure Web or Worker Role.
Migrating a legacy application to Apprenda, on the other side, is much easier, and in the majority of cases the only thing you need to do is repackage the binaries into an Apprenda archive and deploy them to the platform. As an added bonus you get free support for authentication and authorization (AuthN/AuthZ) and multi-tenancy, even if your application wasn't developed with those capabilities in mind.
  • Billing Support
    The last comparison point I want to touch on is the billing support on both platforms.
As you may be aware, ISVs have a hard time implementing different billing methods on Windows Azure because there is no good way to tap into the billing infrastructure of the platform - there are no standard APIs exposed, and the lag for processing billing data is significant (normally 24 hours).
Apprenda, in contrast, is implemented with ISVs in mind and offers rich billing support that allows you to implement charge-backs at the functionality level (think API calls) as well as at the resource level (either allocated or consumed). This allows developers to implement different monetization methods in their applications - like charging per feature, per user or per CPU usage, for example (the latter is similar to Google AppEngine).
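Functionality-level charge-back of the kind described above boils down to metering feature usage per tenant and pricing each event. The feature names, tenants and prices in this sketch are hypothetical:

```python
from collections import Counter

# Hypothetical per-call prices for individual application features.
feature_prices = {"report.generate": 0.05, "export.pdf": 0.10}

# Usage events as (tenant, feature) pairs, as a metering layer might record them.
events = [
    ("acme", "report.generate"), ("acme", "report.generate"),
    ("acme", "export.pdf"), ("globex", "report.generate"),
]

# Aggregate the events into a per-tenant charge-back amount.
charges = {}
for (tenant, feature), count in Counter(events).items():
    amount = count * feature_prices[feature]
    charges[tenant] = round(charges.get(tenant, 0.0) + amount, 2)

print(charges)
```

The same aggregation works for resource-level meters (CPU-seconds, storage) by swapping the event source; the billing model stays a simple events-times-rates fold.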

By now you should have a very good understanding of the similarities and differences between Windows Azure and Apprenda, and I bet you already have a good idea of where you can use one versus the other. However, I would like to throw out a few ideas for how you can use them together and get the best of both to your advantage. Here are a couple of use cases that you may find useful in your arsenal of solutions:

  • Burst Into the Cloud
With the recent introduction of Windows Azure IaaS and Windows Azure Virtual Network (both still in Beta), you are no longer limited to the capacity of your private datacenter. If you add Apprenda into the mix, you can create a unified PaaS layer on top of hybrid infrastructure and allow your applications to burst into the cloud when demand increases and scale back when it decreases.
    There are several benefits you get from this.
First, your development teams don't need to write special code in their applications that runs conditionally on where the application is deployed (in the private DC or in the cloud). They continue to develop the applications as if they were deployed on a stand-alone server, and then use Apprenda to abstract the applications from the underlying infrastructure.
Second, the IT personnel can dynamically manage the infrastructure and add capacity without the need to procure new hardware. Thus they are no longer the bottleneck for applications and instead become an enabler for faster time-to-market.
  • Data Sovereignty
For a lot of organizations, putting data in the public cloud is still out of the question. Hospitals, pharmaceutical companies, banks and other financial institutions need to follow certain regulatory guidelines to ensure the sensitive data of their customers is well protected. However, such organizations still want to benefit from the cloud. Using Apprenda as a PaaS layer spanning your datacenter and Windows Azure IaaS, you can ensure that the data tier is kept in your own datacenter while the services and front-end scale into the cloud.
  • Easy and Smooth Migration of Legacy Apps to the Cloud
With its built-in support for legacy applications, Apprenda is a key stepping stone in the migration of those applications to Windows Azure. Using hybrid infrastructure (your own DC plus Windows Azure IaaS) with an Apprenda PaaS layer on top, you can leverage the benefits of the cloud for applications that would otherwise need substantial re-implementation in order to run on Azure.
  • Achieve True Vendor Independence
Last but not least, by abstracting your applications from your infrastructure with Apprenda's help, you can achieve true independence from your public cloud provider. You can easily move applications between your own datacenter, Windows Azure, AWS, Rackspace and any other provider that offers Windows hosting. Even better, you can easily load-balance between instances on any of those cloud providers and ensure that if one has a major failure your application continues to run uninterrupted.

I am pretty sure this post doesn't cover all possible features and capabilities of both platforms, but I hope it gives you enough understanding of the basic differences between them and how you can use them together. Bearing in mind that Apprenda is a close partner of Microsoft, we are working to bring the two platforms together. As always, questions, feedback and your thoughts are highly appreciated.