The modern world is changing very fast. Economic centers of gravity are shifting from West to East and from North to South, and economic liberalization is a key driver of this process. The free movement of capital, the broad demographic trend of resettlement from isolated rural areas to large cities, and an ever faster pace of life all feed this liberalization. The most important factor in the process, however, is technology.
Industry and IT are developing faster than ever. Small producers, able to adapt quickly and put new technologies into operation rapidly, are overtaking the large-scale “dinosaurs” of industry, which are committed to long-term investments.
Capital is unlikely to be distributed uniformly. A recent long-term forecast by HSBC suggests that in 2050 the richest countries in terms of GDP will be China, the U.S. and India (in that order), while the richest countries in terms of per capita income (that is, the wealth of individual citizens) will be Japan, the U.S. and Germany (in that order). The rich, in other words, will stay rich and the poor will stay poor, as has always been the case in our world. Natural resources, particularly water, will become the object of conflicts driven by the growth of economic activity.
All of these significant social changes will lead to the emergence of a new middle class, richer than before. The forecast also says that by 2050 between 1 and 1.5 billion people will identify themselves as middle class. These people will have high purchasing power and will need access to a variety of public services such as education and health care. The increased economic activity will require more communications and IT services, which in turn drives growth in the capacity of data processing centers (DPCs).
For the first time in human history, geography is no longer a deterrent. Anyone on the planet can communicate with anyone else without restriction. This has led to the emergence of new communication applications such as Facebook and Twitter, and to new and unexpected crowd behaviors. The scope and size of social media will continue to grow, and with it the required capacity of data centers.
The world continues to move toward the expansion of knowledge, which increases the intensity and speed with which information spreads. It is projected that between 2013 and 2025 the number of students around the world will double, with half of the increase coming from China and India. Student mobility and the growing popularity of online learning are changing the way people think and obtain information, and people increasingly rely on the existing networks of global communication.
In the future we should expect a variety of unexpected effects from this changed way of thinking and acting. Scientific work is a clear illustration: publication in a scientific journal typically takes two years, during which rounds of revision, peer review and so on take place. In a world where everyone expects the immediate release of information, why should there be a two-year threshold? Will anyone still agree to wait that long?
As a rule, the more successful people or companies have been with old ideas, the harder it is for them to produce something new. The West and the North have been the leading players in innovation and capital accumulation ever since the industrial revolution of the eighteenth century, and in part this dominance survives to this day. These countries could easily control the flow of ideas and information, which allowed them to innovate faster than their competitors.
However, this approach is no longer possible in a world where we willingly share knowledge with each other. Public data now moves instantly from West to East and from North to South. In 2011, for the first time in history, nearly half of the world’s patents were issued to Chinese applicants. China is an ancient civilization whose history and thought existed long before most Western civilizations. The Chinese way of thinking is neither better nor worse than the Western one; it simply has its differences, and those differences produce innovations that are unexpected to Western observers.
The main task for the next thirty or forty years is to grasp all these changes quickly and to adapt continuously to new and evolving circumstances. People and companies that can filter and use the latest information will prosper, while those unable to keep up will simply drown in the flood of information. At the core of this process will always be the transmission and processing of data, so the future of the data center looks rosy.
Unlike desktops, servers traditionally use 1U, 2U, 3U, 4U, 5U, 6U or 7U chassis, which are mounted in racks. The numbers in the format names indicate precisely how many bays the chassis occupies in the rack: a 1U case takes a single bay, a 2U case takes two and a 4U case takes four, with a standard rack holding up to 42 bays of this size.
Servers in the 1U format are preferred by dedicated server hosting providers, rack space providers and data centers because they are very compact (only 4.4 cm tall), which makes it possible to install a large number of servers per rack. The main drawbacks of the format are the restrictions on airflow (due to the small internal space), which complicates the use of processors with high power consumption, and the need for special power supplies and coolers, which inflates the cost of projects. In addition to the basic components, there is usually spare room to install two to four 3.5″ HDDs (depending on the arrangement of the other components) and a single expansion card, installed horizontally with the help of a riser.
Then we have the 2U chassis. They use “normal” power supplies and coolers and therefore end up a little cheaper. The internal space is larger, making 2U more suitable for servers with two or more processors, or for processors with high power consumption. The height is still not sufficient to install expansion cards vertically, as in desktop computers, but it is possible to use a riser (as in the 1U case) or half-height cards (low-profile cards with half the height of normal ones).
Finally, we have the larger servers, using 3U or 4U chassis. There are also 6U servers, but they are rare: that format is typically used for disk arrays and blade-server enclosures. Using a 3U chassis or larger completely eliminates space problems, allowing expansion cards to be installed vertically and a large number of hard drives to be mounted in removable bays, but it makes the server occupy more space in the rack, which increases the cost of hosting it in a data center, where you pay an extra fee for each bay used.
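To make the arithmetic concrete, here is a minimal sketch, assuming the standard 1U height of 1.75 inches (about 4.45 cm) and a 42U rack, and ignoring space taken by switches and cable management:

```python
# Rack-density sketch: servers per 42U rack by form factor.
# Assumes the standard rack unit of 1.75 in (~4.45 cm); ignores bays
# reserved for switches, PDUs and cable management.

RACK_UNITS = 42
UNIT_HEIGHT_CM = 4.445  # 1U = 1.75 in

for form_factor in (1, 2, 3, 4):
    servers = RACK_UNITS // form_factor
    print(f"{form_factor}U: {servers} servers per rack, "
          f"each {form_factor * UNIT_HEIGHT_CM:.1f} cm tall")
```

This is why hosting providers favor 1U: the same rack holds four times as many machines as it would hold 4U servers.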
Another format that is becoming increasingly popular is the blade server (the name refers to the slim, blade-like format), an ingenious idea for further increasing server density and allowing components such as power supplies and optical drives to be shared.
The idea is that instead of having ten 1U servers, with ten power supplies (or twenty, if redundant supplies are used), twenty network cables (each server typically uses two, one for the network and one for management or redundancy), plus power cords, KVM cables and so on, you can use a single enclosure with an equivalent number of blade servers.
Each blade is a complete server with processor, memory, disks and network card. Due to the small size, blade servers typically use low-power processors and 2.5″ hard drives. Earlier it was common to use processors from Transmeta and VIA, but these ended up being almost completely replaced by Core 2 Duo and updated Xeon versions (in the case of Intel) or Athlon X2, Opteron and Phenom (in the case of AMD), which are much faster but still relatively inexpensive. As for hard disks, 2.5″ drives are preferred because they offer lower access times (although they lose out on transfer rate), lower power consumption and smaller dimensions.
Tower servers also exist and account for nearly a quarter of servers sold, not counting white-box servers assembled from desktop components and enclosures. Besides being the most common in local networks and in companies with few servers, where space is not an issue, they are also becoming common in many data centers, breaking with the tradition of racks.
Although they occupy a little more space than a 2U enclosure, towers offer the advantage of being cheaper, which in many cases offsets the greater use of space.
Everything in life requires care and maintenance, whether it is relationships, health, vehicles or business. Every important thing has some part that demands our attention, and the data center is no exception. If you want your data center to deliver excellent performance while maintaining a high level of reliability and availability, you must put appropriate effort into the work that allows the system to sustain that level of performance.
Why Do I Need Maintenance?
The answer to this question is obvious, and even the many administrators who pay little attention to servicing would probably still acknowledge its importance. Most of these data center administrators and operators will likely cite a lack of time or resources (money) to implement all the required maintenance procedures. But consider your car: have you ever put off an oil change, or ignored the “check engine” indicator, simply because you did not have the time or money for an immediate solution?
Some data center operators put maintenance on the back burner, sometimes permanently. But a data center that does not receive regular maintenance faces a growing risk of an outage or of a situation that degrades performance, just as with a car. For example, a dusty filter in an air conditioning unit can force the fans to draw more power. The loss of productivity resulting from lack of maintenance can have a cumulative effect with serious consequences. One of the main visible consequences of inadequate maintenance (and I would include load and performance management here) is gradual deterioration: users need more time to complete tasks, or the throughput of automated systems falls, which cumulatively has an ever greater impact on performance. For example, a 10% degradation of the system over 10 hours is equivalent to one hour of complete inactivity. The problem is that everyone notices and takes action when the system breaks down, but very few people ever notice a 10% degradation in performance.
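That equivalence is simple arithmetic, and a short sketch (the figures are illustrative) makes it explicit:

```python
# Degradation expressed as equivalent downtime: a system running below
# capacity loses the same work as a full outage of proportional length.

def equivalent_downtime(degradation: float, period_hours: float) -> float:
    """Hours of full outage equivalent to running degraded for a period."""
    return degradation * period_hours

# 10% degradation sustained for 10 hours = 1 hour of complete inactivity.
print(equivalent_downtime(0.10, 10.0))  # 1.0
```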
Thus, maintenance problems do not necessarily appear suddenly. They can sneak up unnoticed, degrading the performance of different systems at a small apparent cost, but in the long run costing the company a lot of money. Most data centers probably do not perform a thorough analysis of downtime to determine the causes of such unpleasant events, and administrators often devote their time mostly to backup and monitoring. Yet quite often the main cause of downtime is inadequate maintenance, and properly organized maintenance avoids downtime. We estimate that 30-40% of system outages are caused by equipment failures that could be prevented with appropriate preventive maintenance.
How Much Does Maintenance Cost In The Data Center?
If maintenance is not yet part of your data center management strategy, one way to estimate what it is worth to your operations is to first calculate the annual cost of downtime for your data center and then multiply that figure by 30-40%, the share of downtime attributable to inadequate maintenance. Now imagine investing that amount of money in maintenance instead. Most likely, the investment would increase the efficiency of the data center, not to mention reducing the likelihood of downtime.
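As a back-of-the-envelope sketch of that estimate (all figures here are hypothetical):

```python
# Estimating a justifiable maintenance budget from downtime costs.
# Assumption from the text: 30-40% of downtime stems from inadequate
# maintenance. Outage hours and hourly cost are hypothetical.

downtime_hours_per_year = 8
cost_per_downtime_hour = 50_000  # dollars

annual_downtime_cost = downtime_hours_per_year * cost_per_downtime_hour
low, high = 0.30 * annual_downtime_cost, 0.40 * annual_downtime_cost

print(f"Annual downtime cost: ${annual_downtime_cost:,}")
print(f"Maintenance budget it would justify: ${low:,.0f} - ${high:,.0f}")
```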
Evaluation Of Data Center Downtime
Exact figures for the cost of downtime, maintenance costs and the return on investment in servicing will depend on the configuration and needs of the data center. For example, a data center that uses free cooling will require less equipment servicing than one that relies mainly on traditional cooling methods such as powered air conditioning. But with few exceptions, in the long term maintenance is cheaper than a steady decline in operational efficiency. And if servicing a piece of equipment (IT, cooling or power distribution) costs too much, it is better to replace it than to keep maintaining it.
What Requires Maintenance?
In a word: everything. However, some systems require less maintenance than others. Transformers, power distribution units (PDUs) and air and water distribution systems require little maintenance, while equipment such as CRAC units, fire suppression systems, chillers and generators require a high level of service. Other equipment, such as next-generation uninterruptible power supplies (UPS), may require only a medium level of service. But every system in the data center requires some level of service, and this applies equally to all of them: dedicated servers, storage, networking and power equipment.
In some data centers, certain areas routinely suffer from a lack of maintenance attention: switching equipment, circuit breakers, automatic transfer switches (ATS) and PDUs, as well as critical systems such as UPS units, batteries and the cooling, air conditioning and ventilation equipment.
However, there are tactics for identifying problems in the system quickly and easily before they cause downtime. For example, you can use thermography, based on infrared scanning, to localize the sources of a number of problems.
Infrared scanning can detect areas with unusually high temperatures, which may indicate deteriorating components, or bad electrical connections caused by vibration, improper torque and other hidden problems. This allows data center administrators to find and fix an issue before it becomes an IT equipment availability problem.
The Use Of Computational Fluid Dynamics (CFD)
Computational Fluid Dynamics (CFD) technology allows a data center administrator, using appropriate software or a third-party service provider, to model the airflow and temperature distribution in an existing data center.
With this information, you can make appropriate adjustments to the cooling system in order to minimize “hot spots” and other thermal problems, which over time can damage equipment and bring it to a halt. Although CFD can be costly, service and software providers usually offer various options to data center operators, and CFD does not have to be an “everyday” type of service; it can be treated as a performance optimization method.
Simple Maintenance Procedures
Simple maintenance procedures can prevent problems simply by paying attention to commonly overlooked areas. For example, a common problem such as running out of free disk space is easily prevented, yet depending on where it occurs it can bring applications to a standstill. Measures as basic as regular monitoring and periodic disk-space checks may be sufficient to prevent more serious problems. In other words, not every aspect of data center maintenance has to be complex and costly; sometimes short, regular checks are enough.
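As a minimal illustration of such a check (the paths and the 80% threshold are arbitrary choices):

```python
# Minimal disk-space check: warn when a monitored filesystem crosses a
# usage threshold. Paths and threshold are illustrative, not prescriptive.
import shutil

THRESHOLD = 0.80             # warn above 80% usage
PATHS = ["/", "/var"]        # filesystems to watch

for path in PATHS:
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    status = "WARNING" if used_fraction > THRESHOLD else "OK"
    print(f"{status}: {path} at {used_fraction:.0%}")
```

Scheduled from cron every few minutes, even a check this small catches the problem long before applications start failing.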
Maintenance should be among the top three priorities of a data center administrator; in other words, it must be a high priority. Of course, data center managers face many challenges, from coordinating management and technical personnel to planning and monitoring equipment upgrades and daily operations. Despite all these responsibilities, however, maintenance is one task that should not suffer, and with a well-thought-out strategy and maintenance schedule it will not have to.
Here is a broad definition of the zones that should draw the data center administrator’s attention when planning and executing maintenance tasks:
Given that lack of attention to maintenance is the main cause of one-third to one-half of all data center downtime, who can afford to ignore it? Equipment maintenance may often seem a tedious and sometimes pointless exercise, and if you run it regularly you deserve great recognition. Regular, comprehensive maintenance also increases the effectiveness of data center systems, and the benefits are not limited to stress reduction and the cost savings of a data center with fewer unplanned outages. The returns on investing in maintenance are well worth the cost.
Previously, almost all intercontinental communication was done via satellite: from telephone calls to TV signals, almost everything was sent and received through them. In the early days of the Internet, satellite links were also very common, but the high cost, very high latency and low bandwidth eventually made satellites lose the race to fiber links. Today, satellites serve as backup links and as a remote-access option for isolated areas, but more than 99% of traffic travels over intercontinental fiber links, which today connect six of the seven continents, leaving out only Antarctica.
Transatlantic communication cables are not a new thing. In fact, by the late 19th century there was already a large mesh of telegraph cables linking Europe, the USA, Africa, Asia and Oceania, most of it under the control of the British Empire. At that time copper wires were used and there were no repeaters, so a very high voltage had to be applied on one side in order to obtain a weak, noisy signal on the other. Throughout the 20th century several new cables were installed to meet the demands of telephone companies, but it was only in recent decades that transatlantic links took a great leap forward, now serving the Internet.
Although relatively thin (about 7 cm thick), submarine cables are built to be quite resilient. Beneath a thick layer of polyethylene lie a layer of mylar, multiple steel strands that give the cable its mechanical strength, layers of aluminum and polycarbonate that guarantee protection against water, a copper tube, a layer of gel and, finally, the bundle of fibers, which is the part that really matters.
Besides using high-quality fiber, the links include solid-state optical repeaters integrated into the cables at intervals of about 100 km. They regenerate the degraded signal, allowing links to stretch for thousands of kilometers. The repeaters are powered by current transmitted through the cable itself, so ground stations are really necessary only at points where packets must be routed or multiple links interconnected.
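To get a feel for what that spacing implies, a quick sketch (the route length is an illustrative transatlantic figure, not a specific cable):

```python
# Repeater count for a long-haul submarine link at ~100 km spacing.
route_km = 6_500       # hypothetical transatlantic route
spacing_km = 100       # repeater interval from the text

repeaters = route_km // spacing_km - 1  # no repeater needed at the ends
print(f"~{repeaters} repeaters over {route_km} km")  # ~64
```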
These cables are installed on the ocean floor by specialized vessels, at a cost of several billion dollars. The work is done in two stages: the ship moves slowly, laying the cable on the seabed, while a robotic installer connected to it digs a shallow trench in the ocean floor and buries the cable at the same speed. To avoid signal degradation, the cable needs to be laid perfectly straight, which demands particularly precise navigation.
Currently it is possible to transmit 40 gigabits per second per fiber strand, and each cable is composed of a large number of strands, taking the total capacity into the terabit range, depending on the cable’s design and the percentage of dark fiber (fiber strands that are not yet in use).
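A rough aggregate-capacity estimate under these figures (the strand count and dark-fiber share are hypothetical):

```python
# Aggregate cable capacity: per-strand rate x lit strands.
gbps_per_strand = 40     # per the text
strands = 96             # hypothetical fiber count
dark_fraction = 0.5      # hypothetical share of unlit (dark) fiber

lit_tbps = gbps_per_strand * strands * (1 - dark_fraction) / 1_000
print(f"Lit capacity: {lit_tbps:.2f} Tbit/s")  # 1.92 Tbit/s
```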
Submarine cables are complemented by terrestrial cables, usually installed along roads, power transmission lines or gas pipelines, creating a fiber mesh that spans all major cities. Last-mile access technologies such as ADSL, cable and 3G connect subscribers to these fiber links, giving them access to the larger network.
Within the cables, the signal travels at close to the speed of light, and even with the delay introduced by repeaters and routers along the way, latency can be very good. Nowadays you can get pings between the Nashik data centers and London below 250 ms, which would have been unthinkable in the early days.
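Light in glass propagates at roughly two-thirds the speed of light in vacuum, about 200,000 km/s, so the theoretical floor for such a ping can be sketched as follows (the route length is a rough assumption):

```python
# Propagation-only round-trip time over fiber, ignoring routing delay.
route_km = 7_500           # hypothetical Nashik-London fiber path
fiber_km_per_s = 200_000   # ~2/3 of c, typical for glass

rtt_ms = 2 * route_km / fiber_km_per_s * 1_000
print(f"Propagation-only RTT: {rtt_ms:.0f} ms")  # ~75 ms
```

The gap between this ~75 ms floor and an observed ~250 ms ping is the overhead of repeaters, routers and the indirect paths real cables take.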
Even so, new cables are continually being installed to increase capacity or reduce latency. A recent example is the new stretch linking the UK to Japan through the Arctic Ocean, north of Canada, at a total cost of more than $3 billion (for the installation of three cables along with control stations, routers, etc.), in an effort to reduce the latency of transmissions between the UK and Asia.
“Focusing on the research and development of highly innovative ideas, and on the solutions designed from such innovations, is what will help IT companies succeed in today’s world,” said Dr. Vijay Bhatkar, renowned scientist and IT leader of India, during the inauguration of the extended data center of ESDS Software Solution Pvt. Ltd in Nashik.
Dr. Bhatkar was in Nashik yesterday to inaugurate the extended data center of ESDS, an event which also marked the launch of two pioneering IT solutions from ESDS: Cross-Platform Disaster Recovery-as-a-Service and eMagic, the Data Center Management Suite.
During his speech, Dr. Bhatkar highlighted the exponential growth of the Indian Information and Communication Technology (ICT) industry over the last decades, from 100 crores in 1976 to a whopping 8 lac crores this year. He added that the ICT industry currently contributes around 10% of the total national income, which is expected to grow to 25% by 2020-25. India has been providing great services to the world, but the time has now come to develop innovative products like eNlight Cloud, eMagic and the Cross-Platform Disaster Recovery solution and start competing with the world. The IT industry was expected to grow beyond the major IT hubs like Bengaluru, Pune, Mumbai and Hyderabad, but that growth has not reached the smaller cities of India; in this context, the success of ESDS is groundbreaking and pioneering. ESDS has shown the world that innovative products can be developed by small teams through dedication, innovation and hard work. The Indian IT industry should also expand outside India through acquisitions, which is what ESDS has done, another major achievement. He observed that we are in the Age of Innovation, and that a single innovation can create wealth equivalent to a nation’s. Very few IT organizations today have a major focus on R&D, and Dr. Bhatkar therefore felt that organizations like ESDS stand a very good chance of collaborating with government agencies to further increase the benefits of innovation in IT.
ESDS has been providing hosting and data center services to companies across the globe, the result of consistency, sincere effort and innovation. With a modest team of around 30 research and development professionals, ESDS has been at the forefront of innovation since its inception, through offerings like Managed Hosting services, Managed Datacenter, Virtualization and eNlight Cloud, followed by yesterday’s launch of eMagic and the Cross-Platform DR solution.
The development of eNlight was an important milestone for the cloud computing industry, and so is the launch of eMagic and the Cross-Platform DR solution. Dr. Bhatkar noted that managing the various components of a data center and its overall operations is a highly complex task: many organizations host their solutions, systems and business portals in the data center, and these must be managed extremely efficiently to ensure uptime. He congratulated ESDS for focusing on R&D to develop innovative products, which he felt is what today’s IT industry needs. Speaking about the difficulties of cross-platform DR implementations, he applauded the ESDS team for inventing such a remarkable solution.
Describing the various generations of supercomputers created by C-DAC and their computing power in petaFLOPS and exaFLOPS, he explained the importance of these supercomputers in weather forecasting and in various scientific, seismic and space calculations. He recommended that government agencies and private organizations come together to create even more powerful supercomputers capable of very high volumes of mathematical operations per second. He also mentioned that the infrastructure available at the ESDS data center could be used as an extension of these supercomputers for mission-critical problem solving: satellite image processing, launch calculations for space vehicles like Chandrayaan, locating oil reservoirs, designing new cars, reverse-engineering the brain and so on.
ESDS has shown the way and still has a long way to go in competing with global organizations through innovation and sustained effort, and Dr. Bhatkar indicated that ESDS has the resources and the zeal to do so.
Mr. Piyush Somani, MD & CEO, ESDS Software Solution Pvt. Ltd, provided an overview of the two products launched during the event. He said, “The offerings launched today are aimed at further extending services to our end customers and client organizations. eMagic includes algorithms to tackle malware and hacking attempts, which are hard to find in the popular data center management suites on the market. The cross-platform DR solution based on eNlight Cloud removes the need to have identical hardware at the DR site, which results in huge savings, further enhanced by the inherent features of eNlight such as auto-scaling, pay-as-you-consume and scaling without reboot.” The sustained effort and dedication of ESDS employees ensures that ESDS continues to enlighten the world with innovative products every year.
The teams behind the development of eMagic and Cross-Platform DR, which included Mr. Narendra Bhole, Mr. Anil Chandaliya, Mr. Vivek Kharpude, Mr. Vikash Kumar and Mr. Pravin Sonawane, were felicitated at the hands of Dr. Bhatkar. The winners of the “CSI Young IT Professional” award, Mr. Rushikesh Jadhav and Mr. Hussain Dahodwala, were also felicitated on the occasion.
This article is about the development of data centers and cloud computing in the country. In other articles we have seen the extensive use of cloud computing and big data in the latest database technologies; we have read about measures that enhance the presence of data centers in our area, the fundamental basis for developing cloud computing services; and we have made reference to new energy efficiency standards in the management of such facilities.
The realization of a data center according to the efficiency criteria outlined in the following paragraphs fits perfectly with the objectives of innovation, economic growth and competitiveness identified in the development strategy of the program.
This can help the economic growth of the territory in a smart way, because it will have a direct impact on the rapidly increasing services available to improve computer literacy, skills and inclusion in the digital world.
Four pillars of the Digital Market shape the design of a data center in accordance with identified best practices:
Digital Single Market: Brings benefits in terms of access to content, simplifies cross-border online transactions and helps improve the perceived quality of digital services.
Trust and Security: Concerns the management of data and transactions and respect for the fundamental right to privacy.
Interoperability and Standards: Promotes the adoption and diffusion of technologies and best practices reproducible in other contexts.
Research and Innovation: As with the corresponding investment objective for these technologies, provides a significant boost to innovation through the experimentation of new solutions.
Special attention is also paid to requirements for data center reliability, with four classification levels identified on the basis of the ANSI/TIA-942 standard (the downtime figures follow directly from the availability percentages, as the sketch after the list shows):
Tier I: Availability of 99.671%, i.e. up to 28.8 hours of downtime over the course of a year. At this level there is only a single distribution path for power and cooling, and the components are not redundant.
Tier II: Availability of 99.741%, or about 22 hours of downtime over the course of a year. At this level the components are redundant, but the distribution path is still single; the data center has raised floors, UPS units and generators.
Tier III: Availability of 99.982%, or about 95 minutes of downtime over the course of a year. At this level there are multiple power and cooling distribution paths, but only one is active at a time; capacity is sufficient to carry the load on a single path.
Tier IV: Availability of 99.995%, or about 26 minutes of downtime over the course of a year. At this level all components are fault-tolerant and the cooling paths are multiple, independent and simultaneously active.
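A minimal sketch of that conversion, assuming an 8,760-hour year:

```python
# Converting tier availability percentages into downtime per year.
HOURS_PER_YEAR = 8_760

tiers = {"Tier I": 99.671, "Tier II": 99.741,
         "Tier III": 99.982, "Tier IV": 99.995}

for tier, availability in tiers.items():
    downtime_h = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_h:.1f} h/year ({downtime_h * 60:.0f} min)")
```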
Our goal in building the next data center will be to reach the fourth level (Tier IV), and to achieve the best possible ratio between the total power consumed by the data center and the power actually used by the IT equipment (the so-called PUE, Power Usage Effectiveness).
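As a minimal illustration (the power readings are hypothetical; PUE is total facility power divided by IT power, with 1.0 the unreachable ideal):

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
total_facility_kw = 1_500  # IT + cooling + lighting + conversion losses
it_equipment_kw = 1_000

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")  # 1.50: half a watt of overhead per IT watt
```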
In a sense, there is no such thing as a “small” or “big” outage if the company depends heavily on its data center for its core business functions. Even a small incident (in duration or scale) can be reflected in income and reputation. You may be ready for a serious outage, but are you equally prepared for less serious downtime?
The Causes Of Data Center Disruptions
Data centers are complex, highly interconnected systems that require a large number of sub-systems to function properly before the facility can provide services. Unfortunately, this often means that one seemingly minor mistake, accident or event can bring the whole system to a sudden stop.
Take the EPO (emergency power off) button: it is enough for one employee to mistake it for the door-release button, and the entire data center is disconnected from its power supply. Wikipedia was recently cut off from the Internet by a fiber-optic cable break at its data center, and during the Olympic Games Twitter stopped working for a while as a result of a system failure (and, interestingly, the almost simultaneous failure of a backup system) in its data center.
From all this we can conclude that it takes a lot less than a hurricane, an earthquake, a mains failure or a malicious attack to stop a data center. And if the company’s basic business functions (for example, retail sales over the Internet) depend on the data center, each minute of downtime equates to lost revenue. Moreover, customers who come to your site, or otherwise attempt to access your services, and get an error message may simply turn to a competing provider or retailer; in that case you lose not just one transaction but possibly the whole client, with all the revenue they could have brought. And do not expect customers to be indifferent to a break in service: most will not long tolerate the unavailability of services at the moment they need them.
“The Service Disruption Continuum”: a destructive event is not necessarily a serious accident that will destroy your business. It may be a relatively small network card malfunction, or something as devastating as a sudden regional disaster that not only destroys your data center but also damages nearby roads, bridges and other infrastructure.
Preparations For Minor Interruptions
There is no foolproof system; things can always fail. If we follow the theory of probability, then in time any data center, regardless of its level of safety, may be damaged. Of course, you should take all possible measures to prevent outages, for example by setting up redundant systems and components so that the failure of one element cannot become the cause of an overall failure, but you also need a plan of action in case of disruption. In many cases the difference between a “big” and a “small” outage can be negligible; in others it may not be.
For example, a failure in which services are still available but load very slowly may be no better, and perhaps worse, than a full-scale outage. You probably know how unpleasant a slow-loading website is: you waste a lot of time and then close the window in a rage anyway. The procedures for a small outage may therefore be similar to the action plan for a longer one. In any case, what matters most is preparation, which will minimize the damage to the business. Here are a few tips.
Advance Planning: Perhaps the most important step for quick recovery after an outage, both large and small. If you only start making a plan of action when the power has already been cut, you will be at a disadvantage. Designate in advance the people to be contacted in case of an event; this may even depend on the extent of the failure. Develop procedures for identifying and correcting the problem. Keep a list of the service providers you would need to call if particular systems fail, such as the uninterruptible power supplies. And, most importantly, organize all this information and place it where it is easily accessible to those who may need it. Planning ahead allows you to resume operation of the data center, and of the business, quickly.
Backing Up Your Data: Most people consider insurance policies an unnecessary cost that brings no benefit, but when disaster strikes, the policy more than pays for itself. The same applies to backups of critical data: the procedure seems a waste of time and money right up until data is lost, at which point the backups are fully justified. Regular backups, however, must be created during normal operation of the system; the exercise is useless, or almost useless, once the outage has already occurred.
Deploying Data Center Infrastructure Management / Monitoring: The key to a short outage is detecting it and its causes quickly. With a torch and a multimeter you can hardly do that. You will need central access to system information and status, so that you can rapidly identify and locate problem areas.
Tracking Peak Data Center Usage: Periods of maximum or peak usage may be the best time to find potential problems before they cause an outage. It is also the time when you should be most prepared for one.
Preparing for brief disruptions is like preparing for a prolonged one. A momentary interruption may have a smaller impact on the business, but the problem still needs to be addressed so that it does not grow into something more serious; smaller outages may also signal the existence of a deeper problem that could eventually lead to real downtime. In any case, you should take steps to prepare for interruptions: they can occur in your data center, but if you prepare for them in advance you will preserve your revenues and your reputation in the eyes of customers.
Over the years I have seen many data centers, and this has helped me gain a lot of knowledge. Let me share with you three things that have always seemed really confusing to the layman, and which I have always seen greeted with surprised faces! If you have other suggestions, add them in the comments.
Security For Entry Into A Data Center
A data center is a physical place that must first of all be secure: the servers inside, along with the other devices, are in operation and cannot stop for any reason, and this implies a set of safety rules. Data centers are (or should be) monitored, often by armed guards, both outside and inside: access is regulated very precisely, each badge is dedicated to an individual, and the data center itself is compartmentalized. An employee responsible for a particular service can access only a few rooms of the DC, not all of them. All entries are logged.
In many data centers, automation plays a central role on the security front: Level3, for example, employs NOC staff to monitor the data center, everything must be authorized, and the cameras and the hand or thumb scanners at the room entrances are controlled externally.
Why this level of security? The data and information contained in a data center are very valuable; in the case of financial or government data, of course, that value increases. Therefore, like a bank, a data center maintains very high security standards.
The Data Center Is Interconnected With Several Operators And “Talks” With The Rest Of The World
Within a data center, various network operators (national and international) usually converge. This ensures that the data center is connected to the outside through multiple lines and can communicate through multiple operators: there are dedicated channels as well as channels leading to interchange points, for effective bandwidth peering.
Depending on the size of the ISP that manages the data center and the nature of the DC (whether it serves an ISP or a telecommunications provider), the amount of bandwidth can vary from a few hundred Mbit/s to several Gbit/s of traffic capacity to the outside. For a single data center run by a not-too-large operator, the figure averages around 1 or 2 Gbit/s.
The bandwidth used and delivered obviously has a cost for the ISP, so it is managed carefully, usually by a department specifically in charge of the data center network. The people in this department also design and optimize the way the data center reaches certain foreign countries; customers may have special requirements for low latency to certain destinations in the world, and this often involves a great deal of work acquiring circuits from other operators and tuning the routing of the network.
The Biggest Cost Of A Data Center Is Often Cooling
For every dollar spent to power a data center, a good proportion is spent to cool it, even while striving for the best possible efficiency. Cooling is a major item in the energy consumption of these network hubs: all servers and other systems need to stay within a controlled temperature range, which varies depending on the technology and the strategies employed by the data center operator.
If the temperature rises too high, it can cause serious damage not only to dedicated web servers but also to other equipment, in the most severe cases requiring machines to be switched off, machines which, as noted, must instead remain operational 24×7. What is the temperature in an Indian data center? As noted, it depends on many factors; the average is around 32-40 degrees Celsius in summer, and this value may change depending on the location of the DC and the strategies chosen by the provider.
I am constantly participating in cloud computing events, and one of the best things about them is the conversations over a cup of coffee, those intervals where good ideas are exchanged. A recurring theme in these discussions is SaaS (Software-as-a-Service), and some of the interesting questions raised are worth sharing here.
Let me summarize, informally, the expectations I heard in those conversations. First came cost issues: reducing capital expenditure (capex) and operating costs (opex), converting fixed costs into variable ones and simplifying application management. Then, nearly tied with the cost-reduction expectations, came speed of implementation, faster time-to-market and improvements in business processes.
Analyzing this, it became clear that the CIOs and business executives I spoke with expect SaaS not only to reduce capex by avoiding the costly purchase of licenses, but also to reduce opex, making applications cheaper to operate than their on-premise versions.
This is very much in line with the concerns of businesses today. A recent survey of 500 CEOs showed what keeps these executives awake at night: the country’s economic situation in the face of the global crisis, increasingly fierce competition, the state of the internal market and the lack of skilled labor. How is this reflected onto IT and CIOs? As pressure to reduce costs while demanding more agility and efficiency. In short, the maxim “do more with less” is more current than ever.
Thus, SaaS solutions have to demonstrate these advantages clearly for users to adopt them. In parallel I talked to many executives of software companies, and it is clear to me that although they know they must enter the SaaS world, many have no clear idea of how and when to make the transformation. Moreover, they fear cannibalizing their current business model in favor of one they do not yet adequately understand.
But gradually the suspicions and doubts are being overcome. Successful examples appear here and there, and the global software industry as a whole is already moving in this direction; perhaps within a few years most software will already be marketed under the SaaS model.
I also noticed a latent concern among the CIOs I talked to. As SaaS attracts more users, there is the threat of what we call “shadow IT”: applications that are one click (and one credit card) away, allowing users to deploy a SaaS application without the IT department’s knowledge. One CIO raised an interesting case. His company is planning to develop and offer its customers a suite of mobile applications that will run on a public cloud hosting solution, and he is unsure how to integrate these applications with the enterprise systems, how to maintain data security, and how to support customers when the possible (indeed likely) problems arise in the use of these applications. There are technological solutions for integrating mobile applications with internal systems, but it is clear that technology alone will not solve this; changes in processes, and even in the skills of IT professionals, will also be required.
“Shadow IT” is a challenge. If users begin purchasing apps without IT’s knowledge (and it is hard to argue against a business unit’s claim that a SaaS application will make more money), a time bomb is armed. Sooner or later many of these applications will require integration with others, whether they live in other clouds or in on-premise legacy systems. “Shadow IT” is not a nightmare that dissipates on waking; it is something very likely to happen if the IT department is not agile enough to set the rules of the game for the use of SaaS.
Talking with some CIOs, we sketched out a few points they should include in those rules of the game so their companies can adopt SaaS under a buy-it-yourself model. How about calling it BYOA (Buy Your Own Application)? We also agreed that users must be aware of the risks to business continuity if a SaaS application is not offered by a provider that meets minimum resilience requirements in its data center.
In the end, the practical conclusion was to balance the risk to the business against the value the application will bring to the company, and to make sure that business areas opting for a “shadow IT” solution are fully aware of the pros and cons. That way IT acts as an ally of the process rather than a barrier in its way. After all, barriers get bypassed sooner or later.
ESDS Data Center considers generators a key element in data center reliability. Alongside an in-line battery UPS, an emergency generator is a solution that should be considered by all data center operators. The question has become ever more important at a time when major natural events such as Sandy damage power lines and are followed by long periods of blackout. Below are some questions to consider before, during and after the installation of a generator.
What do we need to consider before installing a generator in a data center?
Type And Rating Of The Generator: Will the generator be classified as a required power source or as optional standby?
Size Of The Generator: When sizing a generator, it is important to consider the total power load as well as the expected growth of that load (see the sizing sketch at the end of this list).
Fuel Type: Will it be a diesel or a gas generator? There are pros and cons to both.
Installation Location: Where will you install the generator? Will the installation be indoors or outdoors?
Emissions Requirements: What are the exhaust emission limits in your area?
Required Run Time: What is the expected run time for the generator system? How much fuel must be kept on hand to achieve it?
Commissioning: What is your startup plan? What types of tests and operations are required?
Load Testing: How will load testing be carried out and logged? Do you have access to a non-linear load bank with power factors appropriate for testing the generator?
Overhaul: What will the inspection and overhaul schedule for the generator be?
Service Agreement For The Generator:
Preventive Maintenance: Preventive maintenance should be performed at least twice a year. If you regard the generator installation as business-critical, it is best to plan a maintenance program with fixed deadlines.
Monitoring: How is the generator connected to the monitoring system? Who watches the system for a possible failure of the generator and the ATS?
Regular Testing And Maintenance: How often do you have to test and maintain the generator? How should that testing be done?
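Finally, as referenced in the sizing item above, here is a back-of-the-envelope sketch of generator sizing and fuel reserve; every figure, including the consumption rate, is a hypothetical placeholder, not a specification:

```python
# Generator sizing sketch: present load, growth headroom, fuel reserve.
# All figures are hypothetical; real sizing must follow manufacturer
# ratings and local electrical code.

current_load_kw = 400
growth_factor = 1.25     # 25% expected load growth
loading_limit = 0.80     # run the set at no more than 80% of nameplate

required_kw = current_load_kw * growth_factor / loading_limit
print(f"Minimum generator rating: {required_kw:.0f} kW")  # 625 kW

# Fuel for the required run time (~0.07 gal of diesel per kWh produced
# is a rough rule of thumb, used here only for illustration).
run_time_hours = 48
gal_per_kwh = 0.07
fuel_gal = required_kw * run_time_hours * gal_per_kwh
print(f"Fuel reserve for {run_time_hours} h: ~{fuel_gal:,.0f} gal")
```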