The data center market in India is a rapidly growing niche that is approaching world-class levels in the technology used, the services offered, and the standards applied across the industry.
A new standard, well respected among data center operators, is the “Tier Standard: Operational Sustainability“. This standard describes the requirements for guaranteeing the sustainable operation of data centers and for minimizing the associated risks. As you may know, the previously widespread “Tier Standard: Topology“ regulated the technical parameters a data center must meet to achieve a certain level of reliability.
A distinctive feature of the new standard is that it takes the human factor into account in the stability of data centers. And this is of great importance: errors attributable to human factors account for as much as 70% of failures, and slightly more than 40% of those are associated with mistakes by the maintenance service.
To minimize these errors, it is necessary to work purposefully with staff to improve their qualifications and to take measures to retain qualified personnel. The standard also treats as important sustainability factors the location of the data center, its distance from major transportation hubs (airports, railway stations, highways), the characteristics of the building that houses the data processing center (DPC), and the level of its efficiency.
To assess efficiency, it is expedient to rely on the PUE (power usage effectiveness) coefficient, which is calculated by dividing the total power consumed by the entire data center by the power used by the IT equipment alone. Technologies that help increase energy efficiency include raising the operating temperature of equipment, consolidation, virtualization, and careful selection of dedicated server hardware for the specific tasks of the data center.
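As a rough illustration of the formula above, PUE can be computed directly from two power readings. The numbers below are hypothetical, not measurements from any real facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to IT equipment;
    real data centers are always above that because of cooling,
    lighting, power conversion losses, and so on.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 1500 kW drawn by the whole facility,
# 1000 kW of which reaches the IT equipment.
print(round(pue(1500, 1000), 2))  # 1.5
```

The lower the result, the smaller the share of power spent on overhead rather than on computing.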
These technologies are mainly applied when designing and building data centers. For facilities already in operation, available methods include replacing UPS units with new, high-efficiency models, isolating hot and cold air streams, controlling temperature and air flow, and moving cooling systems closer to the points of maximum load. The average payback period of these measures ranges from 2 to 5 years.
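A simple payback estimate for such efficiency measures divides the upfront investment by the annual energy savings. The figures below are invented purely for illustration.

```python
def payback_years(investment: float, annual_savings: float) -> float:
    """Simple (undiscounted) payback period in years."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return investment / annual_savings

# Hypothetical example: replacing an aging UPS costs $120,000 and is
# expected to save $40,000 per year in electricity.
print(payback_years(120_000, 40_000))  # 3.0, within the 2-5 year range
```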
What are the main challenges in putting cloud computing services into practice?
Some of the questions that were discussed are common to most companies, so we will share them here.
Adopting cloud computing is not simply a matter of implementing virtualization. Yet the matter is still often oversimplified, with companies implementing virtualization and claiming that they are deploying cloud computing. The formula for cloud computing is virtualization + standardization + automation; virtualization is only the first step of the journey.
The challenges IT managers will face on this journey toward cloud computing involve everything from the pace, intensity, and extent of cloud use to changes in the structure and organization of IT: new skills and responsibilities, changes in governance models and budgeting, and new relationships with internal customers and external providers.
The first step is to treat cloud as an IT strategy, much like the shift from the centralized environment to client-server about 15 years ago. Companies that ignored the client-server model and insisted on staying too long in the centralized environment later had to play catch-up. Lesson learned: IT's attitude toward cloud computing should be proactive, not reactive.
IT must identify where the adoption of cloud can bring the biggest and fastest benefits to the company, including thinking “out-of-the-box“ to create new revenue-generating opportunities for the organization. In other words, IT must clearly align the demands of the business with the potential of cloud computing.
But ideas need a financial case to be implemented. The adoption of cloud must be substantiated by a detailed analysis of ROI (return on investment), TCO (total cost of ownership), and the value of the opportunities for innovative products or processes. It should be clear that when we talk about cloud computing we are talking about various cloud services (IaaS, PaaS, and SaaS) that are at different stages of development.
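To make the TCO comparison concrete, here is a minimal sketch contrasting an on-premise purchase with a pay-per-use cloud service over a planning horizon. All cost figures are hypothetical assumptions, not vendor prices.

```python
def on_premise_tco(hardware: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership: upfront hardware plus yearly operations."""
    return hardware + annual_opex * years

def cloud_tco(monthly_fee: float, years: int) -> float:
    """Pay-per-use service: recurring fee only, no upfront capital."""
    return monthly_fee * 12 * years

# Hypothetical 3-year horizon.
onprem = on_premise_tco(hardware=50_000, annual_opex=12_000, years=3)
cloud = cloud_tco(monthly_fee=2_000, years=3)
print(onprem, cloud)  # 86000 72000
```

A real analysis would also discount future cash flows and price in migration effort, but even this toy version shows why the comparison must cover the whole horizon, not just the first invoice.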
SaaS has been in operation for quite some time, but PaaS is only now taking its first steps. This often means that the costs and risks of being a pioneer should be factored into the adoption decision.
The pace and extent of cloud adoption in a company depend heavily on its culture of innovation and its appetite for risk. Companies that are more risk-averse should begin with SaaS applications, the cloud services that pose the least risk to the business.
The dissemination strategy can and should be gradual.
By December 2011 – all newly planned or major IT acquisitions must complete an alternatives analysis that includes a cloud-computing-based alternative as part of their budget submissions.
By December 2012 – all IT investments making enhancements to an existing system must complete an alternatives analysis that includes a cloud-computing-based alternative as part of their budget submissions.
By December 2013 – all steady-state IT investments must complete an alternatives analysis that includes a cloud-computing-based alternative as part of their budget submissions.
I have not done any formal survey, but there is a general consensus that 10% to 40% of the applications companies run on-premise could migrate to the cloud in a short time. IT's role is to identify which of these applications it makes sense to migrate.
Finally, there is a challenge that is not being given due weight: the monitoring and management of resources in the cloud. Cloud computing significantly changes the way IT resources are acquired and used. In general, at least in the coming years, we will see hybrid environments in enterprises: some systems running on-premise on dedicated servers, others running in private clouds and in public cloud hosting services.
Managing, monitoring, and ensuring interoperability across this complex environment is not, and will not be, a simple task. The IT department will gradually need to act as a service manager, handling relationships with cloud providers. New skills and roles, such as cloud services architect, must be acquired. Governance models should be adjusted to the flexibility, elasticity, and self-service features of the cloud.
The bottom line is that the transition to the cloud will require much planning. In retrospect, if we compare the data centers that housed the old centralized systems with today's data centers running web and client-server systems on hundreds or thousands of servers, we see huge differences. The differences between today's data centers and tomorrow's clouds will be equally large or greater.
Therefore, we must start planning for this new scenario today.
But have you ever stopped to think about the content these spaces will generate and disseminate?
Certainly the answer is yes, but the question arises: how to do it?
In many corporate profiles, we see only promotions of the products and services the company offers, or a customer service desk operating online.
Social media, like traditional media, should convey information relevant to its readers. Never forget that besides being the company's voice about what it makes and sells, and a way to relate to consumers, digital profiles that build corporate brand loyalty can become a reference for content.
Here are some tips on how to achieve these results:
The first thing to do when creating a corporate profile on any social network (Twitter, Facebook, LinkedIn, YouTube, etc.) is to understand your audience. If your company has audience research available, start by studying it. If you do not have that ally, try to do the research yourself: talk to the directors and employees who know the audience best, and search the major search engines and social media for what is being said about your business and its market.
Once you understand your audience, consider how the competition is positioning itself in the online space. Compare the brands, taking into account frequency of updates, creation of original content, content suggested from other sites, interaction, language used, and partnerships.
By managing your own content, you show that you have expertise. Create a corporate blog to explore this channel with information about the company, its releases and clarifications; run cultural promotions and contests with your readers; integrate the blog with your social media; include the blog in your sponsored-links strategy; and map out the bloggers and opinion makers who could become partners in disseminating your content.
Share your own information on the company's networks and also provide interesting content from other sources.
You may not receive "good morning" messages from followers in your first week on social media, but over time they will see that the brand is present and begin to interact. So do Follow Friday (#FF) on Twitter every Friday, post photos and short items you find relevant on Facebook, and take part in LinkedIn group discussions. Show that the company is mature enough to take criticism and has opinions of its own.
Respond publicly to questions and complaints. It is not good to answer by private message, nor to delete complaints posted on the company's Facebook page, in LinkedIn communities, or in blog comments. If a consumer is extremely dissatisfied with the brand, he will find a way to let his friends know about the problem, and that negative buzz is even worse. Transparency should always come first.
Ask the brand's friends on social media for topic suggestions, and respond to their feedback on your publications.
Stay up to date on new technologies and web-oriented language: read books, blogs, and news sites, attend lectures, and talk with other social media professionals about the many issues related to digital content.
Good content is the result of much research, study, and dialogue. If you show that the brand truly understands the subject at hand, it will certainly gain more weight in organic search and be cited as a source of information.
After all, the content also generates loyalty.
Every now and then I discuss the cloud computing topic at events and meetings with clients. Cloud is not something to be bought like a box of software, but a computational model that is built over time. The success of cloud adoption is measured by the benefits the business achieves from it.
One question I always hear is: “Is cloud just another name for outsourcing?“. It is worth discussing that here. For me, cloud is indeed another form of outsourcing, but it has many differences compared with the usual outsourcing model we know. With cloud, you can “outsource“ specific services rather than the “all-or-nothing“ arrangements we commonly see today. The current outsourcing model relies on long-term contracts, whereas services contracted in the cloud can be short-term and on-demand (a project, or a momentary expansion of computing power), with payment for the resources actually used (pay-per-use).
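The pay-per-use difference can be sketched with a toy calculation: a burst of extra servers for a one-month project, billed hourly, versus committing to that same capacity year-round under a traditional contract. The rates and server counts are hypothetical.

```python
HOURS_PER_MONTH = 730  # average hours in a month

def on_demand_cost(servers: int, hourly_rate: float, months: int) -> float:
    """Pay only while the extra capacity is actually running."""
    return servers * hourly_rate * HOURS_PER_MONTH * months

def contract_cost(servers: int, hourly_rate: float) -> float:
    """Traditional model: the same capacity reserved for a full year."""
    return servers * hourly_rate * HOURS_PER_MONTH * 12

# Hypothetical: 20 extra servers at $0.10/hour for a one-month project.
print(on_demand_cost(20, 0.10, 1))  # 1460.0
print(contract_cost(20, 0.10))      # 17520.0
```

For spiky, short-lived workloads the on-demand model is an order of magnitude cheaper; for steady year-round load the gap narrows, which is exactly why the decision has to be made workload by workload.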
In fact, IT is increasingly moving toward being a set of external services. We already use Google and salesforce.com, and we talk about outsourcing CRM or email… In my opinion this trend will accelerate to the point where, instead of acquiring a technology first and outsourcing it later, the mental model will be to think of outsourcing first. The model will be IT-as-a-Service. This changeover will affect both users and IT outsourcing providers in the coming years, and both will have to adapt to the new model.
The current challenge is to separate “what is real“ from “what is hype“. We still see a lot of misinformation and oversimplification from cloud providers and from potential users of these clouds. Cloud can take many forms: the services offered can be cloud infrastructure (IaaS), platform (PaaS), or software (SaaS).
Let’s make an analogy with the Internet. The Internet as a whole is somewhat vague, but when subdivided into tangible things like electronic commerce, social media, e-mail, and search engines, it becomes more concrete and its value easier to understand.
With cloud, it is the same. Services such as IaaS can give small and medium enterprises access to technological resources that were previously unthinkable. A company can have 200 or 300 dedicated servers almost instantly and pay only for actual usage, with no need to purchase them and install them physically in a data center of its own. In the future, opening a business location may require little more than subscribing to electricity, telephony, and IT services: its IT infrastructure will basically be tablets, smartphones, and web browsers, with everything else in the clouds.
But the decision to adopt the cloud model must not be purely tactical. Remember e-commerce and Internet banking: early on there were many doubts and fears, but they gradually spread, and today it is unthinkable to live without them. These two technologies created new businesses and radically changed many others (internal processes, customer relationships, breadth of markets, etc.), and cloud is likely to have impacts as large or larger.
What we can do today is examine cloud more carefully, separate the hype from the reality, seek partners and suppliers who know what they are talking about (and inspire trust), and start acting. We are, of course, at the beginning of the learning curve, and we certainly will not find specialists with 20 years of cloud experience!
The number of data centers created both for the needs of individual businesses and to host third-party resources is constantly growing. Most often they are located near cities or within city limits, which imposes a number of constraints on their energy infrastructure. Often the design must start from an existing building and its heating and electricity supply. This is a serious drawback, since electricity then comes from a single source and depends entirely on the city's supply.
In most cases this disadvantage is addressed by installing diesel generators, which kick in during an emergency power outage. The problem is that while they run, they are limited by the capacity of the fuel tank. This can be complemented with a UPS system running on lead-acid batteries, from which the data center can be fed for a short time. In addition, such generators do not ensure smooth operation of the air conditioning and heating systems.
Gas microturbines can solve the problem fundamentally: they not only generate electricity but also provide water heating from the heat of the burnt gas. This is an autonomous system supplying both heat and electricity. Another advantage is the possibility of combining microturbines into clusters, which ensures uninterrupted operation of the system. Several types of gas can be used to run microturbines; the most interesting options are natural gas and biogas, the latter produced in special plants that are inexpensive to operate.
Combined-cycle plants were developed more than forty years ago but have not been used on a large scale. Now, in response to the emerging need to replace equipment at a large number of power stations, Combined Cycle Gas Turbine (CCGT) plants have attracted the attention of engineers. They can be deployed in a short time, yielding a low-budget, independent source of heat and energy, which is very important when building a data center.
For financial institutions, the inconvenience to users of an idle ATM or branch office translates into large losses. The frequency of such situations determines not only the degree of confidence in the company but also its survival in the market. Information technology embedded in business processes, in turn, reduces the number of such failures.
A study was conducted on the readiness of the financial sector to back up its data. It involved banks and insurance companies, large enterprises with turnover of 3 to 15 billion dollars. The results reveal certain patterns in how companies in this sector use information technology. The subject of the study was business continuity and data recovery.
Half of the companies surveyed considered the restoration of business processes to be the priority in this area; about 13% prioritized only the restoration of IT services. Continuity receives consistent attention mainly in foreign financial companies and their subsidiaries: 75% of them carry out data redundancy constantly. Among the other companies, this figure is much lower, at 23%. And while foreign companies address these issues through dedicated risk-management units, elsewhere the question remains in the hands of IT directors.
Only a quarter of the companies studied have their own data center: 37% of those with an annual turnover above 15 billion, 45% of those with a turnover of 3 to 5 billion, and 18% of those with a turnover of 5 to 15 billion dollars.
The most common causes of failures were IT equipment breakdowns, electricity shortages, and the human factor. To reduce the risk of downtime from these causes, 42% of financial organizations plan to use the services of commercial data centers to back up their data.
These last weeks have been very busy for me, and surely for many of you as well.
Among the day-to-day challenges: production increases on weekends, infrastructure consolidation, project meetings, dealing with backlogs, research to gather new features, and so on.
In addition, a wave of innovations has arrived, flooding our minds with ideas and expectations.
But the fact is that from time to time we live through these waves of innovation, small “revolutions“ that challenge our reality. Here are some examples:
And most recently… Utility Computing and Cloud Computing. Of course, some readers may not remember all those battles between the “real world“ and the “future vision“, but believe me, there have been many over the past 20 to 30 years. Other minor battles also took place; the list above is only partial.
And like every wave of innovation, a life cycle is traversed, passing through novelty, technological breakthrough, the peak of expectations (the “hype“), disillusionment, and finally recovery, with the productivity gained from maturity. This is essentially the model used in many technology analyses, illustrated by the following chart:
In this context, “Cloud Computing“ or “Cloud Hosting“ is the current wave of momentum, with its benefits, risks, and impact on applications and business models.
I often say that “a window of opportunity opens up for companies, software vendors and users“. In the same vein, I would add one more idea: what we will see in the coming months is growing discussion about the consumption of services.
We will refer to the “CLOUD“ the same way we refer to the “NET“.
This impact should create some new roles in the market:
We will have cloud providers, which must supply the hardware resources and infrastructure in the form of data centers, plus the software functionality to support the cloud. Microsoft is an example, with its network of data centers and the resources of its Windows Azure platform.
There will be companies that build services on top of the cloud (called “cloud brokers“), combining features from different providers, or the resources of a chosen platform, to construct service offerings with high scalability, availability, and resilience for their customers. Microsoft is again an example with some service offerings, such as BPOS (Business Productivity Online Suite), but other companies are already using the Azure platform to build SaaS (Software as a Service) offerings, bringing the benefits of elastic computing to their customers.
Finally, we have corporate customers consuming services in an increasingly intense and transparent way. These consumers will combine services from different cloud brokers with on-premise (local) applications, taking advantage of investments in their own infrastructure.
This kind of combination already appears in several projects around the world in which Microsoft has been supporting local infrastructure integrated with the cloud.
One of the interesting points of such a project is the integration of local infrastructure with third-party services and with services offered on Windows Azure. A drawing of the general architecture of the project is given below:
In short: On-Premise + Outsourced Services + Cloud = Hybrid IT
What we see in the above equation is not only a potential for innovation and agility in building new products and businesses. It is also a challenge of integration between environments, coordination of development, monitoring, and integrated, distributed ALM. On one hand we are at the “hype“ stage; on the other, we have begun a good mapping of the current challenges of the “cloud services“ model.
There are already several solutions for each of these challenges, such as federated security (CBA, claims-based authentication), data synchronization between local and cloud (SSIS, SQL Server Integration Services), service interfaces and locations in the cloud (WS-F PRP, WS-Federation Passive Requestor Profile), containers for local and cloud services (Microsoft Windows Azure platform AppFabric), service buses, repositories, integrated monitoring cockpits (Management API), etc.
Perhaps faster than we imagine, we will face a new reality in our day-to-day work, and the plateau of productivity will soon be reached in the cloud. And certainly, new waves will come…
In today’s post, I would like to illustrate the main capabilities found in the Microsoft cloud.
We can separate the core platform offerings and capabilities into the following layers:
Starting with Application Services: a number of ready, finished services can be used immediately by end users and businesses. Examples are Bing, Windows Live, and Office Live, as well as collaborative networks like Microsoft HealthVault and Xbox Live.
There are also Software Services offering additional features for infrastructure and enterprise environments, such as Exchange Online, SharePoint Online, Office Communications Online, and CRM Online. These are cloud-hosted mechanisms that add value to environments deployed locally in businesses.
Continuing through the layers of capabilities, we find the Platform Services, which add features such as relational databases, process hosting, services, interoperability, access control, authentication/authorization, and various other services. Here the developer can work with a range of resources to build applications in the cloud, or new services and applications that export functionality to the on-premise environment in enterprises.
Finally, this whole series of services is supported by a cloud infrastructure based on Microsoft data centers. As we saw previously, the technology and processes for maintaining such an environment have existed for some time, built on years of experience with high-scalability, high-volume services such as Hotmail, Windows Update, and MSN, among others.
With this post, we complete a brief introduction to the main components of the Microsoft cloud computing platform.
When we talk about cloud computing, we also talk about the types of clouds we consume. You remember the cloud taxonomy we presented a few posts ago here on the blog, right? We talked about IaaS, PaaS, and SaaS and their services, and about the capabilities present in a cloud platform, exemplified by the resources of the Windows Azure platform.
Now I would like to highlight the types of clouds we can work with. Here we can think of the public cloud, the private cloud, and the dedicated cloud.
Public cloud services are the offerings available on the general market through providers like Microsoft, Amazon, Salesforce.com, IBM, Google, and ESDS, among others. In this type of offering, access is via the Internet, and the data centers are shared by various companies around the world.
For a more customized solution, private cloud services can deliver the benefits of dynamic provisioning and economies of scale by leveraging the data center infrastructure the company has already invested in. Thus, companies that own data centers can evaluate the virtualization and provisioning model by creating a private cloud, available only to their own users and applications. A limitation of this model is that scalability is bounded by the investment already made in the data center, and the company remains responsible for the routine administration of physical hardware resources, as well as for necessary software maintenance and upgrades.
Finally, dedicated clouds can offer the benefits of data center abstraction with all the isolation and customization a company could want. This type of offering usually addresses security concerns, connecting local (on-premise) users with the users and applications placed in the cloud, and it enables integrated scenarios with highly flexible hardware and provisioning for businesses.
Given these three types of clouds, it is natural to think about how each addresses issues of authentication, authorization, and security for its users.
We often talk here on the blog about application composition and its challenges. Composition can actually happen at different levels: when we think of architectural composition, we can compose presentation layers, service interfaces, business rules, and even different data sources.
For each level of composition we can apply different architectural patterns, such as Delegation, Aggregation, Command, Event Composition, IoC (Inversion of Control), Observer, Publish/Subscribe, Dependency Injection, Separated Presentation, Separated Interface, Adapter, and Composite View, among the principal ones.
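As a small illustration of one of these patterns, here is a minimal Publish/Subscribe sketch in Python; the class name, topic name, and handlers are invented for the example.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal publish/subscribe broker: publishers and subscribers are
    decoupled from each other and share only a topic name."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Every handler registered for the topic receives the payload.
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical usage: two components react to the same business event
# without knowing about each other.
bus = EventBus()
received = []
bus.subscribe("order.created", lambda order: received.append(("billing", order)))
bus.subscribe("order.created", lambda order: received.append(("shipping", order)))
bus.publish("order.created", {"id": 42})
print(received)  # [('billing', {'id': 42}), ('shipping', {'id': 42})]
```

The same decoupling idea scales up to enterprise service buses and cloud messaging services, which is why the pattern recurs at every level of composition.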
So when you are considering an architectural composition, it is worth evaluating these options. Will we make service calls or compose web parts in the web interface? Will we compose business rules and processes, or relational databases, in our application? And for enterprise scenarios, will we integrate different service areas through a bus?
More recently, we discussed the growth of services in cloud computing, where a cloud infrastructure can host new groups of services that become components in our architecture.
The next question is: what is the best level at which to compose? Should I compose at the presentation interface or the service interface? Should I compose local services with services in the cloud? Which services should be in the cloud, and which should be hosted on local infrastructure? At the database level, is it worthwhile to compose different sources for my application?
Here comes the important role of the architect, who should evaluate each case according to the business scenario involved. In fact, there is no single correct answer for all scenarios. It would be wrong to say that we should combine, for example, 50% local infrastructure with 50% cloud services. Everything depends on the services involved, the type of user, the business application, the local infrastructure, and the maturity of your company.
And since I mentioned “maturity“: what are the IT processes and mechanisms you apply in the operation of your company? Do they include monitoring, logging, exception handling, service management, governance, and so on? Without such IT processes, any move toward a composition architecture may suffer from the increased complexity and distribution of components at all levels.
So stay tuned, and be sure to talk with your infrastructure team to understand the capabilities of the existing platform in your enterprise. The infrastructure architect should be the solution architect's best friend!
Maybe in the future they will even be the same person… or maybe not…