Cloud Computing

June 10, 2014

No! You Don't Want Automatic DR!

It is quite common to hear IT people say that they want the ultimate automatic disaster recovery solution money can buy. You can also find vendors who will sell you their product as an automatic disaster recovery solution simply because you asked for one. But do you really want an automatic disaster recovery solution?

We are often victims of our poor understanding of words, but in technology you need to be very careful what you ask for. If you ask for an automatic disaster recovery solution, you may get something that you do not expect. Here is the scenario.

An automatic disaster recovery solution would be a feature-rich solution that is able to recognize all kinds of "disaster" symptoms - bad weather, increased humidity around the data center, low power voltage or increased traffic - and trigger the disaster recovery plan automatically. There are a few problems with that, though. Executing the DR plan is not as simple as flipping a switch and turning on the light. The implications are much bigger:

  • You need to redirect all your users to another data center that may be a few hundred or even a thousand miles away
  • If you use a cold DR strategy, your second data center may require some time to become live
  • The data may not be current in your second data center

All this will have a significant impact on your users' experience, which in the case of a real disaster may be warranted; but if the "disaster" is just a fluctuation in the voltage or a stronger wind, the impact will be purely negative.

Some of you may argue that you can develop a very smart decision engine that is able to determine whether a symptom signals a real disaster or a false alarm, but I think those people live in the future.

Therefore you should not look for an automatic disaster recovery solution, and striving to achieve one should not be among your goals. What you need is an automated solution - one that automates your DR runbook but still leaves control in the hands of humans, who can properly determine whether the symptoms are, or will lead to, a disaster.
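To make the distinction concrete, here is a minimal sketch in Python of what "automated but not automatic" could look like; the step names and scripts are hypothetical placeholders, not a real DR tool:

```python
# Minimal sketch of an automated (not automatic) DR runbook.
# All step names and script paths are hypothetical placeholders.

import subprocess

RUNBOOK = [
    ("Promote replica DB to primary", ["dr/promote_replica.sh"]),
    ("Start app servers in DR site",  ["dr/start_app_tier.sh"]),
    ("Repoint DNS to DR data center", ["dr/update_dns.sh"]),
]

def confirm(symptoms):
    """A human reviews the symptoms and makes the failover call."""
    print("Observed symptoms:")
    for s in symptoms:
        print(" -", s)
    answer = input("Execute DR runbook? Type 'failover' to confirm: ")
    return answer.strip().lower() == "failover"

def execute_runbook():
    """Once confirmed, every step runs hands-off, in order."""
    for name, cmd in RUNBOOK:
        print("Running step:", name)
        subprocess.run(cmd, check=True)  # stop on the first failed step

if __name__ == "__main__":
    symptoms = ["primary DB unreachable for 5 min", "site power fluctuating"]
    if confirm(symptoms):
        execute_runbook()
    else:
        print("No failover - symptoms logged for review.")
```

The automation removes the variability of executing the plan; the human removes the risk of executing it for the wrong reason.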

This post was first published on our company's blog as No! You Don't Want Automatic DR!

June 04, 2014

What are good RTO and RPO?

Experiencing downtime is not something that companies wish for, but as we have seen lately it is something that we hear about quite often. Interestingly enough, very few enterprises, especially in the Small and Medium Business segment, spend enough time working out good procedures for recovering their IT systems and applications. The recovery procedures should always be driven by the business needs, and this is the part where a lot of IT departments fail. As a result, the recovery turns into a reactive procedure that is triggered by the issue, devolves into chaotic recovery activities and ends with a post-mortem but no improvements afterward. Putting more initial thought into the Business Impact Analysis (BIA) is a prerequisite for good recovery procedures, and defining the two main characteristics - RTO and RPO - is a crucial part of this process.

Let's start with the first one - Recovery Time Objective (RTO). RTO is defined as the duration of time within which the system or the service must be restored after a disruption in order to avoid unacceptable consequences related to a break in business continuity. The first thing to keep in mind about RTO is that it is an objective - a target that you may not be able to achieve every time. There are certain activities you need to perform during this time that may have variable duration. At a high level those are grouped into:

  1. Recognizing that there is a disruption - this depends on your level of monitoring, or lack of it, and may involve manually checking each system or service that participates in the business process (see the sketch after this list)
  2. Troubleshooting and identifying the failing system and/or service - this will depend on the level of diagnostics you have implemented and may also involve different people or teams
  3. Fixing the issue - depending on the root cause this can be as simple as rebooting the system or as complex as requiring code changes or even ordering new hardware
  4. Testing the fix - last but not least you need to make sure that the fix actually resolves the issue
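
As a small illustration of the first activity, here is a minimal monitoring sketch in Python; the service URL and failure threshold are made-up assumptions, but the idea is that automated detection starts the recovery clock immediately instead of waiting for a user complaint:

```python
# Minimal disruption-detection sketch; URL and threshold are hypothetical.
import time
import urllib.request

SERVICE_URL = "http://orders.internal.example.com/health"
FAILURES_BEFORE_ALERT = 3  # tolerate transient blips

def healthy():
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

failures = 0
while True:
    if healthy():
        failures = 0
    else:
        failures += 1
        if failures == FAILURES_BEFORE_ALERT:
            # The RTO clock effectively starts here; page the on-call team.
            print("ALERT: service disrupted at", time.ctime())
    time.sleep(30)  # poll every 30 seconds
```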

In all four of those activities the human factor is the most variable part. People need to be notified and updated; they need time to understand the issue, troubleshoot, code, etc. The more automation you provide, the less the human factor impacts the recovery time.

Once the system or service is brought back into operation, though, you need to determine the state of the data. This is where the next characteristic becomes important - Recovery Point Objective (RPO). RPO is defined as the period in which data might be lost from the system due to a disruption without major impact to business continuity. Although this is also an objective, you need to be more careful with this one. There are a few things to think about here:

  1. Is data loss acceptable at all? In a lot of cases the answer is no, but there are situations in which you can tolerate some loss of data.
  2. How to recover the data? Does it require copying, shipping backup tapes or manual entry of the data?
  3. How long will it take to recover the data? The two extremes range from the few seconds required to repoint the system to a replica of the data on another server, to requesting an off-site backup copy of the data
  4. How to test that the data is recovered? This can vary from automated tests to manual tests

Depending on your RPO your time to recover the business operations for your system may vary.

When thinking about Business Continuity (BC) you need to think about both components - recovering the operation of the system or service (RTO) and recovering the data to a point at which it is usable for the business (RPO). Both of those actions together need to take less time than the Maximum Tolerable Downtime (MTD) as we defined it in Determining the Cost of Downtime. In general, though, you should set your RTO and RPO in a way that leaves you a buffer of time for unexpected issues that may occur during recovery.
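
As a back-of-the-envelope illustration (all numbers are made up), checking that the objectives leave such a buffer could look like this:

```python
# Hypothetical numbers: check that RTO plus RPO-driven data recovery fit
# inside the Maximum Tolerable Downtime (MTD) with room to spare.

mtd_hours = 8.0           # from the Business Impact Analysis
rto_hours = 4.0           # target time to restore the system/service
data_recovery_hours = 2.0 # time to bring data back to the RPO point

buffer_hours = mtd_hours - (rto_hours + data_recovery_hours)
print(f"Buffer for unexpected issues: {buffer_hours:.1f} h")

# Leave a healthy margin (say 25% of MTD) for surprises during recovery.
assert buffer_hours >= 0.25 * mtd_hours, "Objectives leave too little slack"
```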

This post was first published on our company's blog.

October 28, 2013

What exactly is a Service?

With the advancement of cloud technologies more and more companies are getting on the Anything-as-a-Service train, but over the years the term services has become so overloaded that people have a hard time understanding what it means. As with any other technology term you hear lately, some clarification may be required to understand what the person in front of you means by "I sell services".

According to Wikipedia's definition of a service (as a system architecture component), it is a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage. In today's cloud environment I would add two more things to the definition:

  • Those functionalities must be exposed either through interoperable APIs or via a browser (i.e. they must not be bound to a particular implementation platform)
  • And they must be accessible over the network (i.e. can be accessed remotely)

Although those characteristics should be enough to define what a service is, we complicate the matter by thinking that everything that can be accessed over the network is a service. Well, for decades we've been accessing databases over the network - is it true to say that traditional databases are services? Judging by the definition above, the answer is "yes": a database can store data for different purposes, one can use ODBC to access it from various platforms and languages, and it is accessible over the network. Does that mean that running a single-instance DB on my home computer makes me a Database-as-a-Service (DBaaS) provider? Not really! Here are a few more things to consider when we talk about services:
  • Services are normally exposed to the "external" world. This means that you offer the services outside your organization. Whether that is outside your team, your department or your company is up to you, but you should think of services as the offering that generates business value for your organization.
  • They are also multi-tenant - the services you offer can be consumed by multiple external entities at the same time without any modifications (see the sketch after this list).
  • They are always up - third-party businesses will depend on your services and you cannot afford to fail them, hence avoiding a single point of failure is crucial for the success of your services
  • Last but not least, services must be adopted - if you do not drive adoption through evangelizing, partnerships, good documentation, SDKs, etc., the services you offer will not add value for your organization
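
To tie these characteristics together, here is a deliberately tiny sketch of a service in Python (standard library only; the tenant registry, header and port are made up): one reusable functionality, exposed over the network through a platform-neutral HTTP API, serving multiple tenants from the same instance.

```python
# Minimal multi-tenant service sketch using only the standard library.
# The endpoint, header name and tenant registry are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TENANTS = {"acme": {"plan": "gold"}, "globex": {"plan": "silver"}}

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Identify the tenant from a header - the same code serves everyone.
        tenant = self.headers.get("X-Tenant-Id", "")
        if tenant not in TENANTS:
            self.send_error(403, "Unknown tenant")
            return
        body = json.dumps({"tenant": tenant, **TENANTS[tenant]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Accessible over the network, consumable from any platform that
    # speaks HTTP - the two characteristics added to the definition above.
    HTTPServer(("0.0.0.0", 8080), ServiceHandler).serve_forever()
```

A client on any platform can then consume it with a plain HTTP request, e.g. curl -H "X-Tenant-Id: acme" http://localhost:8080/ - no binding to the service's implementation stack.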

Transitioning from a traditional software product organization to a services organization requires a lot of effort and cultural change, and the best way to approach it is to clearly define the basics from the beginning.

August 07, 2013

How IT Pros Are Destroying Their Own Jobs

Not surprisingly, my yesterday's post Why Shutting Down TechNet is Not a Problem for IT Pros sparked quite passionate comments from IT Pros. I have to admit - I was wrong! Shutting down TechNet is a problem for IT Pros! Not because they will lose the ability to install software for free, but because the hatred this event sparked among them hinders their ability to look beyond the immediate issue and consider the change they need to make.

Of the comments, 30% contained rage against the cloud, 30% against developers, and 30% described how much of the IT Pro's job is to install, maintain and troubleshoot servers and environments - or, plainly said, how much they love to hug their machines. The remaining 10% were valid concerns that can be summarized as 1.) cloud environments are hard to configure and 2.) Microsoft is acting rude.

Let me first address the 10%!

Cloud computing as a concept is not new, except maybe the name. However, the software automation that cloud environments achieve was not available several years ago. Bringing up a full application stack used to take the IT Pro several hours if not days, while now it is available with the click of a button. Whether the cloud will live up to its promise only time will tell, but one thing is for sure - more and more automation will be added, which will leave less and less need for the current admin tasks.

Regarding Microsoft and whether they've been acting rude - this should not surprise anybody. They (still) have power and they decided to exercise it. I will emphasize once again, though - I don't think shutting down TechNet is such a big problem! There are other ways to get Microsoft software for evaluation (note: not production usage), and I truly believe that if Microsoft wants to stay relevant, and if they are true to their "devices and services" strategy, they need to make their software affordable for evaluation. If not, as one of the comments said - there is always Linux.

The remaining 90%, though, are the ones that worry me about the IT Pros. As people who will always sit on the liabilities side of the balance sheet, they should pioneer the cloud, not blindly claim its uselessness. Throughout the comments I noticed that certain professionals do not even clearly understand the basic cloud concepts (like public and private) and what scenarios those can enable. There are numerous examples where IT organizations embrace the cloud and not only keep their jobs but become indispensable to the enterprise.

Which brings me to the second point - the hate against developers and, in this capacity, the Lines of Business (LOBs). Everybody who works in a company with at least one IT guy is aware of the tensions between IT and the "others". Realistically, if the IT Pro needs to serve several masters (developers, users and maybe customers), and if it takes weeks if not months to get servers provisioned, neither he nor the "others" will be happy. The solution for the IT Pro is to become more nimble, more agile. Partnering with the business groups and development teams instead of complaining will bring them more success and fame.

For the remaining 30%, the people who want to hug their servers, my only advice is to let it go. Unless you feel a weird satisfaction in installing the same software again and again, you need to move on and start bringing value to the table in the form of fast and flexible solutions.

As I mentioned in my previous article - it is time for IT Pros to change unless they want to become extinct.

August 06, 2013

Why Shutting Down TechNet is Not a Problem for IT Pros

While reading the news yesterday I stumbled upon the following article in the Puget Sound Business Journal - Why is Microsoft alienating its biggest customers? IT pros want TechNet back. Everybody has the right to complain and sign a petition, but more important is to understand the message Microsoft sends. Some think of it as "Microsoft doesn't care about IT Pros anymore", and they may be right; but to me the message sounds more like "Hey, IT Pros - the world is changing!" Although I think Microsoft could be a little more responsive to the complaints, I don't think IT Pros should be so worried. Here is why.

The Problem With The Downloads


While $349 annually for the whole collection of Microsoft software is a very attractive price, I think free software is a better option. Although slow, Microsoft has shown its commitment to change in the last few years. I don't think Microsoft will ever release Windows (client or server) under an Apache license, but they will continue to provide Beta versions for evaluation for free.
Next, the price Microsoft charges for software will continue to drop. Just compare how much you paid for a Windows 7 license with how much you paid for a Windows 8 license - quite a significant difference. I expect the same to happen to other products in the consumer category (Office at least).
Last, if you still insist on having unlimited downloads of everything Microsoft, you can subscribe to MSDN. Yes, it is a few hundred dollars more annually, but you also get more value from it and… wait! You can now call yourself a developer!

The Problem With The Installations


I will admit that I install software for evaluation quite often. And I have to admit that I hate it! Installing and configuring software is a huge waste of time if your end goal is only to see whether it will work. I would rather click a button and have everything I need running in a few minutes, without the need to download, install and configure. And this is one of the promises of the cloud - you can get the software you need up and running in minutes, do your testing and move on. Well, it may cost a few bucks to run it for a day, but it is not such a big deal. And, who knows - Microsoft may decide to offer free compute time for evaluation purposes.

The Problem With The IT Pros


The biggest problem, I think, is the IT Pros themselves. They still see their jobs and responsibilities as those of the people who install software. It is time for IT Pros to understand that the day is near when software will install the software, and they need to think about how to position themselves in this environment. Their best option is to work closely with the Business Groups and provide the IT services needed to support the business, or to transition to a DevOps role that, again, will provide value for the business.

It is clear that Microsoft understands that the world is changing and that IT as it used to be is nearing its end. It is time for the IT Pros to understand that just installing software is not a value proposition in the enterprise.

July 08, 2013

Is the Cloud the Right Solution for Your App?

It is surprising to see how every new technology suddenly becomes the thing that will solve every problem. The cloud is no different. Everybody rushes to migrate their applications to the cloud because they think this will magically make them faster, cheaper, agile, competitive and… add any other buzzword that comes to mind. Well, not so fast! You don't need to move every single application to the cloud! Or at least not in its current state.

 

There are thousands of articles on the Internet that discuss which applications are suitable for the cloud (including several that I have written) and which are not. There are also thousands of articles discussing the different cloud technologies and how they can be used. Unfortunately, what I have seen in the last four years is that people don't pay attention to those writings and choose applications that are not suitable for the cloud, or severely misuse the technology.

 

Take for example the following scenario. The IT team at a very large enterprise creates virtual machines on AWS for each analyst, installs desktop software that uses a local (file-based) database and lets the analysts use those machines at their discretion. This is certainly enough to put a checkbox next to "Migrated application to the cloud" on your performance review, but is this the solution? First, let's look at the problems with this "migration":

  • Because the VM is used by a single user, its utilization will be no more than 8 hours a day, or about 33% - and that is only if the analyst doesn't take any breaks, which means the actual utilization will be even lower. The goal of the cloud is to maximize the utilization of your infrastructure and improve efficiency, and I am not talking from the cloud vendor's point of view.
  • If you make a simple calculation (see the sketch after this list), you will find out that over the year they will pay Amazon an amount that equals the price of a pretty decent notebook. But wait, the analyst already has a notebook! And she or he uses this notebook to connect to the VM. Hence they are paying double for something they already have - why not install the software on the analyst's notebook instead? (I know there are certain reasons why you don't want to do that, but just bear with me.)
  • The user experience will be suboptimal when Amazon decides to move the VM from one node to another. This will result in interruptions, possible loss of data and low satisfaction among the analysts.
  • Although they can create a VM template, managing all those VMs is a tedious task. You need to patch them, update the template, troubleshoot them, etc., which will increase the management costs.
  • The part that bothered me most in this scenario was that the technology for solving this problem has been available for a long time - it is called Terminal Services, and a few established vendors made a business of it long before the term cloud was even coined.
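
To make that "simple calculation" concrete, here is a rough sketch with assumed 2013-era figures (the hourly rate and notebook price are illustrative, not quoted prices):

```python
# Back-of-the-envelope cost of a single-user, always-on cloud desktop.
# All figures are illustrative assumptions, not quoted AWS prices.

hourly_rate = 0.25          # assumed on-demand rate for a mid-size VM, $/h
hours_per_year = 24 * 365

annual_vm_cost = hourly_rate * hours_per_year
print(f"Annual VM cost: ${annual_vm_cost:,.0f}")   # ~$2,190

used_hours_per_day = 8      # one analyst, one workday
utilization = used_hours_per_day / 24
print(f"Best-case utilization: {utilization:.0%}") # ~33%, before breaks

decent_notebook = 1500      # assumed price of a solid business laptop
print("The VM costs about", round(annual_vm_cost / decent_notebook, 1),
      "notebooks per year")
```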

 

Now, if you really want to make an impact in your organization, you should approach the problem differently. The questions that you need to ask are:

  • How can I increase utilization?
  • How can I decrease spending?
  • How can I allow centralized management?

 

Implementing Terminal Services in your organization may be the simplest answer. As mentioned, the technology has been available for a long time, it is proven and it is easy to implement. If you want to show that you are up to date with the latest developments and mention cloud on your resume, look at VDI (Virtual Desktop Infrastructure). Last but not least, if you want to prepare your application for the future, consider reworking it into a modern web application that uses a central database, services and a multi-tenant UI.

 

As it seems, the cloud can be a solution for your app, but you will need to put in the effort to find the right way.

June 24, 2013

Is Your Cloud Ready For The Enterprise?

Reading my newsfeed this morning I noticed several articles talking about the cloud and the enterprise. There is no doubt that the area is heating up with more and more acquisitions (IBM buys Softlayer), investments (GE invests $105M in Pivotal) and fights over big deals (IBM vs. AWS for the CIA cloud), but the question that comes up is: "Are the cloud platforms ready for the enterprise?"

 

Being involved with numerous cloud projects, I see five areas that enterprises emphasize when they evaluate their options. Those are not too different from the criteria they use for any other software offering, but here is the cloud run-down.

Migration

An enterprise application portfolio consists of a mix of applications, some of which are a decade or more old. An enterprise-ready cloud platform will offer support for legacy applications as well as new, cloud-architected ones, and should provide a smooth migration path for both. It is not surprising that IaaS, although not the ultimate solution for enterprises, gained such traction recently - it is the stepping stone to the more advanced solutions like PaaS and SaaS, but offers a less disruptive migration path than those.

Integration

Similar to the application portfolio, the list of internal systems within the enterprise can be quite long. Driving the business is the highest priority, and integration with existing business systems and applications like CRM, ERP, HR, etc. can break or seal the deal with a cloud provider.

IT teams look for easy integration with their existing infrastructure automation and monitoring tools, while on the development side the cloud platform should provide easy integration with the IDEs, build, test and deployment tools that are used in the enterprise.

Security

This is the one most discussed in the media. Privacy and security concerns are widespread, and a single mistake can cost a cloud provider a lot of money. Integration with existing user management systems for authentication and/or authorization, single sign-on (SSO), encryption and data protection are must-haves for enterprises - especially for the ones in the Financial, Insurance and Healthcare verticals.

Licensing

You may have heard this before, but not all enterprises are thrilled to hear that with the cloud they remove their CapEx and convert it to OpEx. My favorite example here is the utility companies, and you can read more in my post Business Strategy for Enterprise Cloud Startups.

An additional consideration is Enterprise License Agreements (ELAs), used for a long time to buy bulk licenses for packaged software. A cloud provider that offers an easy roll-up of its services into existing ELAs, or the so-called Bring Your Own License (BYOL), will have a certain advantage over ones that do not have such options.

Business Advantage

Last but not least, enterprises are looking for platforms that will allow them to build for the next 10 or more years. If a cloud vendor is not able to prove its value over a longer period of time, its place in the enterprise will be taken by one that can. The value of the cloud is not in satisfying the needs of just one of the internal teams (Business, IT or Development) but of all three together.

 

Independent of which side of the table you sit on (the vendor or the enterprise buyer), you should consider all five areas in your cloud strategy and make sure they are well covered when the contract is signed.

June 17, 2013

Open Source in the Cloud - How Much Should You Care?

In his opening keynote for Red Hat Summit, Jim Whitehurst, the CEO of Red Hat, asked the audience: "Name an innovation that isn't happening in Open Source - other than Azure!" I can certainly add the iPhone and AWS to the mix, but let me stick to the cloud topic with the following question: "How much does Open Source matter in the cloud?"

 

Let's first elaborate on two misconceptions about Open Source.

 

Open Source is Free

Not really! In the cloud it doesn't matter whether you are running on an Open Source platform or not - it is NOT free, because you pay for the service. And for a long time Open Source projects have been funded through the service premiums that you pay. I would argue that Open Source vendors have mastered the way to profit from Open Source services and are far ahead of the proprietary vendors. The whole catch here is that you pay nothing for the software and incur no capital expenditures (CapEx), but you pay for the services (i.e. Operational Expenditures, or OpEx) - remember, this is also the cloud model. The bottom line is that you may be better off with a proprietary vendor than an Open Source one, because the former has yet to master that business model.

 

Open Source Means No-Lock-In

Not so sure about that either! Do you remember J2EE? It wasn't a long time ago that Sun created the specification and said that there would be portability between vendors. Those of you who have tried to migrate a J2EE application from JBoss to Weblogic to WebSphere will agree that the migration costs weren't negligible. It is the same with Open Source clouds - it doesn't matter that HP and RackSpace both use the Open Source OpenStack; you still need to plan for your migration costs.

 

I am far from saying that Open Source is not important. Quite the opposite - I am a big Open Source fan, and the biggest example I can give is… well, Azure. They also understand that the source is not that important anymore, hence they open-sourced their SDKs (and continue to add more). It is time to forget those technology wars and really start thinking about the goals we have and the experience we provide for our customers. When you choose your cloud provider you should not ask the question: "Are they Open Source or proprietary?" Better questions to ask are:

  • Does the vendor provide functionality that will save me money?
  • Can they support my business for the next 5 or 10 years?
  • Do they provide the services and support that I need?
  • Are they agile enough to add the features that I need?
  • Do they have the feedback channel to the core development team that I can use to submit requests?
  • Do they have the vision to predict the future in the cloud?

All those are much more important questions for your technology strategy and your business than whether their cloud is Open Source or not.

June 10, 2013

10 Dos and Don'ts for Running Proof-of-Concept Projects in the Cloud

With cloud computing increasing in popularity, more and more enterprise IT and development teams are looking to run proof-of-concept projects. Very often, though, such projects do not deliver the expected results, and project managers come back to the leadership teams with either "We are not ready for the cloud!" or "It will be too expensive to move our applications to the cloud!" However, the problem doesn't necessarily lie in the cloud or the application portfolio. Most of the time it is in the way the project is scoped and managed.

 

Here are a few DOs and DON'Ts for managing proof-of-concept projects in the cloud.

 

DOs

  • Decide what type of cloud options you will evaluate - from a service-model point of view and from a deployment point of view. If you are going to evaluate private PaaS options, compare those only to other private PaaS providers. To use the well-known cliché, compare only apples to apples.
  • Clearly define the terminology for the whole project team. For example, something that one company calls PaaS may be considered IaaS or just stack automation by another vendor. You need to have your own definitions of PaaS, IaaS, stack automation tools and any other terminology that you will use.
  • Choose only vendors that fit your definition. After you clearly define and socialize what you will be evaluating, you need to choose vendors that offer solutions that fit your definition.
  • Select an application that can be easily migrated to the cloud. Quite often teams select their most complex application, but the problem is that such applications are usually implemented with legacy technologies, and most of the time gets spent on re-architecting the application instead of learning what the cloud has to offer.
  • Set clear goals and a timeframe for the PoC. You need to be clear about which problems you want the cloud to solve for you - whether it is agility and time-to-market, or efficiency and ease of infrastructure management - and get the whole project team to agree on them. Next, make sure the project is time-boxed and properly managed so that it produces meaningful results.

 

DON'Ts

  • Do not rule out a particular technology because you are not familiar with it or because it is too new. One of the goals of the PoC should be to become familiar with the technology. In addition, a new technology may solve more problems for you than you initially anticipated.
  • Do not select too many vendors. Choose only the best 2 or 3 vendors in the category you want to evaluate, or else you may fall into analysis paralysis and not be able to choose among the variety of options. In addition, the more vendors you have, the longer the PoC will run.
  • Do not migrate multiple applications as part of the PoC. Migrating one application should be enough to learn what the effort is and to become familiar with the technology or the vendor. Migrating more than one application is an investment that you may not stick with after the PoC.
  • Do not extend the length of the PoC. Even if you think that you would be able to migrate your complex application to the cloud by extending the PoC by another month, it is better to cut the scope instead. The reasons are that 1. you have already learned that you will need more time to migrate your applications to the cloud, and 2. extending the PoC postpones the decision and moves you further away from the ultimate goal of getting everybody on board.
  • Last but not least, do not make a decision that is heavily tailored to the needs of one team. If the PoC is a cross-team effort (IT, DEV, Business), then all three teams should have an equal say on the technology. If one of the teams has additional goals, it can evaluate technology that easily integrates with the chosen one and offers complementary benefits.

 

Making a technology decision has always been a tough choice, hence having a structured approach to the problem can help you make decisions faster and make them last longer.

June 03, 2013

Multi-Tenancy Options in Cloud Environments

One of the biggest benefits the cloud offers is the ability to colocate customers and applications on the same hardware in order to improve efficiency through resource utilization. This type of colocation is referred to as multi-tenancy; however, the term has become overloaded, and it is crucial to understand the different types of multi-tenancy out there. This is especially important when you are building your private cloud, because your goals may differ from those of the public cloud providers, who try to satisfy the requirements of a much broader audience.

 

Now, let's look at the different options that are available.


Multi-Tenancy at Infrastructure Level

One of the most common approaches to multi-tenancy is the one implemented at the infrastructure level. This is the widely popular IaaS (Infrastructure-as-a-Service) approach, where you can host multiple customers and/or applications on the same hardware by using a separate virtual machine for each. The benefits of this approach are:

  • Maturity - virtualization has been in use for a while already, and the technology and tooling available are pretty advanced
  • Easy to implement - there are many out-of-the-box products on the market that can get you up and running pretty fast
  • Legacy app support - you will be able to run legacy apps that are not cloud-enabled with little migration effort

However, there are quite a few disadvantages to this approach that you need to take into account:

  • Decreased infrastructure efficiency - in order to run multiple VMs on the hardware you need to 1. use a hypervisor to run them and 2. install a separate kernel in each guest VM; both of those consume part of the machine's resources for their own needs, leaving less for your application
  • Increased license costs - the hypervisor and the guest operating systems may require additional licenses, which increases your capital expenditures
  • Higher maintenance costs - with the sprawl of VMs that you have you will need more time to update, patch and troubleshoot your environment
  • Developer unfriendly - although it solves the machine provisioning problem, it may not solve the application deployment and maintenance problems, and it will continue to impact your time-to-market. One note here: there are quite a few tools on the market that you can use to automate application provisioning, however their integration with the underlying VM management software is still not mature enough

Multi-Tenancy at OS Level 

The next option you can choose is to use application containers for multi-tenancy. This is a more advanced approach where you use containers that run on the same operating system and ensure that each application can access only the resources allowed for it. There are several benefits to this approach compared to the IaaS one:

  • Higher density - because you don't need to run a hypervisor and separate kernels for your applications, you can deploy more useful workloads on the machine compared to the IaaS approach; the overhead of running the container is much smaller
  • Lower licensing and maintenance costs - you no longer need to pay for hypervisor and guest OS licenses; the license cost for the container management software is comparable to the license cost for IaaS management software without the hypervisor and guest OS licenses
  • Developer friendly - because the containers are specialized pieces of software that target specific types of applications, they already come with a complete application stack (like J2EE, IIS/.NET etc.) and application deployment support
  • Application standardization - because the platform itself takes care of the application stack build-up, you can achieve a high level of standardization between applications; in addition, the platform may offer standard services that can be used by every application

Some of the disadvantages of the containers approach are:

  • Limited legacy app support - containers are well suited for deploying applications developed with a service-oriented architecture (SOA) approach in mind; legacy applications that assume certain machine or OS dependencies may require significant effort to migrate
  • Maturity - the containers approach is new compared to virtualization; however, it is picking up speed fast and you can expect to see more of it in the coming months and years; tool support and integration with the underlying infrastructure can also be limited

Multi-Tenancy at Application Level

Last but not least is the approach where you implement multi-tenancy in the application itself (a sketch of the pattern follows the list below). Although this is the approach that will achieve the highest density for your infrastructure, there are certain disadvantages:

  • Very costly - in addition to implementing the actual functionality of the application, it also needs to be instrumented for resource management, which can become a significant work item
  • No standardization - each multi-tenant application ends up implemented differently because there are no standard infrastructure services that can be used
  • High maintenance costs - because each application takes a different approach to implementing resource management, the maintenance costs grow with each new application
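
For a flavor of what instrumenting the application itself involves, here is a minimal sketch of the pattern at the data layer (the schema is hypothetical): every record carries a tenant ID and every query must be scoped by it - logic that each multi-tenant application has to build and maintain on its own:

```python
# Minimal application-level multi-tenancy sketch; schema is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
db.execute("INSERT INTO orders VALUES ('acme', 'widget'), ('globex', 'gear')")

def orders_for(tenant_id):
    # Every query must be scoped by tenant - forgetting the WHERE clause
    # anywhere in the code base leaks one tenant's data to another.
    rows = db.execute(
        "SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,)
    )
    return [item for (item,) in rows]

print(orders_for("acme"))    # ['widget'] - globex's data is never visible
```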

Having a good understanding of the multi-tenancy options is crucial when you make a decision for your private implementation. Weighing the options and getting feedback from the various stakeholders in your enterprise - developers, operations, business - will help you make the best choice for your cloud strategy.