The modern world is changing very fast. Economic centers of gravity are shifting from West to East and from North to South. Economic liberalization is a key driver of this process, as are the free movement of capital, the demographic shift of people from isolated rural areas to large cities, and the accelerating pace of life. The most important factor of all, however, is technology.
Industry and IT are developing faster than ever. Small producers, able to adopt and deploy new technologies quickly, are bypassing the large-scale "dinosaurs" of industry, which are committed to long-term investment.
Capital is unlikely to be distributed evenly. A recent long-term forecast by HSBC suggests that by 2050 the richest countries by total GDP will be China, the U.S. and India (in that order), while the richest by per capita income (that is, by the wealth of individual citizens) will be Japan, the U.S. and Germany (in that order). In other words, the rich will stay rich and the poor will stay poor, as has always been the case in our world. Natural resources, particularly water, will become sources of conflict driven by growing economic activity.
All these significant social changes will produce a new middle class, richer than before. The same forecast says that by 2050 between 1 and 1.5 billion people will identify themselves as middle class. These people will have high purchasing power and will need access to a wide range of public services such as education and health care. The resulting economic activity will demand more communications and IT services, which in turn drives the growing capacity of data processing centers (DPCs).
For the first time in human history, geography is no longer a barrier: anyone on the planet can communicate with anyone else without restriction. This has produced new communication applications such as Facebook and Twitter, and with them new and unexpected crowd behaviors. The scope and size of social media will continue to grow, and with it the required capacity of data centers.
The world continues to move toward broader access to knowledge, which increases the intensity and speed with which information spreads. It is projected that between 2013 and 2025 the number of students worldwide will double, with half of that growth coming from China and India. Student mobility and the rising popularity of online learning are changing how people think and obtain information, and people increasingly rely on the existing networks of global communication.
In the future we should expect a variety of unexpected effects from this changed way of thinking and acting. Take scientific publishing: a paper typically takes two years to appear in a journal, during which it goes through editing, peer review and so on. In a world where everyone expects the immediate release of information, why should a two-year threshold exist? Will anyone still agree to wait that long?
As a rule, the more successful people or companies are with their existing ideas, the harder it is for them to produce something new. The West and North have been the leading players in innovation and capital accumulation since the industrial revolution of the eighteenth century, and in part that dominance survives to this day. These countries could easily control the flow of ideas and information, which allowed them to innovate faster than their competitors.
That approach, however, is no longer possible in a world where we willingly share knowledge with each other. Public data now moves immediately from West to East and from North to South. In 2011, for the first time, nearly half of the world's patents were issued to Chinese applicants. China is an ancient civilization whose history and thought predate most Western civilizations. The Chinese way of thinking is neither better nor worse than the Western one; it is simply different, and that difference produces innovations that residents of the West do not expect.
The main task for the next thirty or forty years is to grasp these changes quickly and to adapt continuously to new and evolving circumstances. People and companies that can filter and use the latest information will prosper, while those unable to keep up will simply drown in the flow of information. At the core of this process will always be the transmission and processing of data, so the future of the data center looks bright.
Unlike desktop towers, rack servers traditionally use 1U, 2U, 3U, 4U, 5U, 6U or 7U enclosures, which are mounted in racks. The numbers in the format names indicate exactly how many slots the enclosure occupies in the rack: a 1U enclosure occupies a single bay, a 2U occupies two, a 4U occupies four, and a standard rack holds up to 42 such bays:
The 1U format is preferred by dedicated server hosting providers, rack space providers and data centers because it is very compact (only 4.4 cm high), allowing a large number of servers per rack. Its main drawbacks are limited ventilation (due to the small internal space), which complicates the use of high-power processors, and the need for special low-profile heatsinks and coolers, which inflates project costs. Besides the basic components, there is usually spare room for two to four 3.5″ HDDs (depending on the arrangement of the other components) and a single expansion card installed horizontally with the help of a riser.
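The rack-unit arithmetic above can be sketched in a few lines of Python (1U is defined as 1.75 inches, which gives the 4.4 cm figure quoted):

```python
# Rack-unit arithmetic: 1U is defined as 1.75 inches,
# and a standard full-height rack holds 42U.
U_INCHES = 1.75
U_CM = U_INCHES * 2.54  # ~4.45 cm, the "only 4.4 cm" quoted for 1U servers
RACK_UNITS = 42

def servers_per_rack(form_factor_u: int) -> int:
    """How many enclosures of a given height fit in a standard 42U rack."""
    return RACK_UNITS // form_factor_u

print(f"1U height: {U_CM:.2f} cm")
print("1U servers per rack:", servers_per_rack(1))  # 42
print("2U servers per rack:", servers_per_rack(2))  # 21
print("4U servers per rack:", servers_per_rack(4))  # 10
```

This is why hosting providers favor 1U: the same rack holds twice as many 1U servers as 2U ones.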
Next come 2U chassis. They use "normal" heatsinks and coolers and therefore end up somewhat cheaper. The larger internal space makes them better suited to servers with two or more processors, or processors with high power consumption. The height is still insufficient to install expansion cards vertically as in desktop computers, but a riser can be used (as with 1U) with full-height or half-height cards (the latter being half the height of normal cards).
Finally, there are the larger servers using 3U or 4U chassis. 6U servers exist but are rare; that format is typically used for disk arrays and blade server enclosures. A 3U or larger chassis eliminates space problems entirely, allowing expansion cards to be mounted vertically and a large number of hard drives to be installed in removable bays, but the server occupies more rack space, which increases hosting costs in a data center where you pay an extra fee per unit used.
Another increasingly popular format is the blade server ("blade" refers to the thin form factor), an ingenious idea for further increasing server density by sharing common components such as power supplies and optical drives.
The idea is that instead of ten 1U servers, with ten power supplies (or twenty, if redundant supplies are used), twenty network cables (each server typically uses two, one for data and one for management or redundancy), plus power cords, KVM cables and so on, you can use a single chassis holding an equivalent number of blade servers.
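The cabling and power-supply savings in this example are easy to count. In the sketch below, the per-chassis figures of four shared PSUs and four aggregated uplinks are illustrative assumptions; real blade enclosures vary:

```python
# Cable and PSU counts: ten standalone 1U servers versus one blade chassis.
def standalone_counts(servers: int, redundant_psu: bool = True) -> dict:
    """Counts for separate rack servers, as in the example in the text."""
    psus = servers * (2 if redundant_psu else 1)
    return {
        "psus": psus,
        "network_cables": servers * 2,  # one data link + one management link each
        "power_cords": psus,            # one cord per power supply
    }

def blade_counts(chassis_psus: int = 4, uplinks: int = 4) -> dict:
    """Counts for one blade chassis sharing PSUs and aggregated uplinks.
    The 4 PSU / 4 uplink figures are illustrative assumptions."""
    return {"psus": chassis_psus, "network_cables": uplinks, "power_cords": chassis_psus}

print(standalone_counts(10))  # {'psus': 20, 'network_cables': 20, 'power_cords': 20}
print(blade_counts())         # {'psus': 4, 'network_cables': 4, 'power_cords': 4}
```

Even with generous redundancy inside the chassis, the blade design cuts the component and cable count severalfold.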
Each blade is a complete server with processor, memory, disks and network card. Because of their small size, blades typically use low-power processors and 2.5″ hard drives. It was once common to use processors from Transmeta and VIA, but these have been almost completely replaced by Core 2 Duo and newer Xeon versions (in the case of Intel) or the Athlon X2, Opteron and Phenom (in the case of AMD), which are much faster yet still relatively inexpensive. As for storage, 2.5″ disks are preferred because they offer lower access times (though they lose on transfer rate), lower power consumption and smaller dimensions.
Tower servers also exist and account for nearly a quarter of servers sold, not counting home-built servers assembled from desktop PC components and enclosures. Besides being the most common choice in local networks and in companies with few servers, where space is not an issue, they are also becoming common in many data centers, breaking with the tradition of racks.
Although a tower occupies a little more space than a 2U enclosure, towers have the advantage of being cheaper, which in many cases offsets the greater use of space. Here are photos of one section of a data center where towers are used:
Everything in life requires care and maintenance, whether it is relationships, health, your vehicle or your business. The data center is no exception. If you want your data center to deliver excellent performance while maintaining high reliability and availability, you must put appropriate effort into the upkeep that sustains that level of performance.
Why Do I Need Maintenance?
The answer is obvious, and even administrators who pay little attention to maintenance will probably concede its importance. Most such administrators and data center operators will cite a lack of time or resources (money) for implementing all the required maintenance procedures. But consider your car: have you ever put off an oil change or ignored the "check engine" light simply because you did not have the time or money to deal with it immediately?
Some data center operators put maintenance on the back burner, sometimes permanently. But a data center that does not receive regular maintenance faces a growing risk of an outage or of degraded performance, just as a car does. A clogged filter in a dusty air conditioning unit, for example, forces the fans to draw more power. The productivity losses that result from poor maintenance can accumulate into serious consequences. One of the most visible is degradation (I would include load management and performance here as well): users take longer to complete tasks, or the throughput of automated systems falls, and the cumulative impact on performance can be large. A 10% degradation sustained for 10 hours, for instance, is equivalent to one hour of complete inactivity. The problem is that everyone notices and acts when a system breaks down, but very few people ever notice a 10% drop in performance.
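The equivalence between slow degradation and outright downtime is simple arithmetic, sketched here:

```python
def downtime_equivalent_hours(degradation_fraction: float, duration_hours: float) -> float:
    """Capacity lost to slow degradation, expressed as hours of full downtime."""
    return degradation_fraction * duration_hours

# The example from the text: a 10% slowdown sustained for 10 hours
print(downtime_equivalent_hours(0.10, 10))  # 1.0 -> one hour of effective downtime
```

An hour of hard downtime triggers alarms; the same hour lost to a 10% slowdown usually goes unnoticed.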
Maintenance problems, then, do not necessarily appear suddenly. They can creep up unnoticed, degrading performance across different systems at a small individual cost but, in the long run, costing the company a great deal of money. Most data centers probably do not perform rigorous analysis of downtime to determine its causes; administrators more often devote their time simply to backup and monitoring. Yet quite often the main cause of downtime is inadequate maintenance, and properly organized maintenance prevents it. We estimate that 30-40% of system outages are caused by equipment failures that appropriate preventive maintenance could have prevented.
How Much Should Data Center Maintenance Cost?
If maintenance is not yet part of your data center management strategy, one way to estimate its value is first to calculate the annual cost of downtime for your data center, then multiply that figure by 30-40%, the share of downtime attributable to inadequate maintenance. Now imagine investing that amount in maintenance instead. Most likely, the investment would increase the efficiency of the data center, not to mention reduce the likelihood of downtime.
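That back-of-the-envelope estimate looks like this in code; the $500,000 annual downtime cost is a hypothetical figure for illustration:

```python
def maintenance_budget_estimate(annual_downtime_cost: float,
                                preventable_share=(0.30, 0.40)) -> tuple:
    """Range of annual downtime cost attributable to inadequate maintenance,
    using the 30-40% share cited in the text."""
    lo, hi = preventable_share
    return annual_downtime_cost * lo, annual_downtime_cost * hi

# Hypothetical example: $500,000/year lost to downtime
low, high = maintenance_budget_estimate(500_000)
print(f"${low:,.0f} - ${high:,.0f}")  # $150,000 - $200,000
```

The resulting range is a reasonable upper bound on what a maintenance program could return in avoided downtime alone.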
Evaluation Of Data Center Downtime
Exact figures for the cost of downtime, the cost of maintenance and the return on investment in maintenance will depend on the configuration and needs of the data center. For example, a data center that uses free cooling will require less equipment maintenance than one that relies mainly on traditional methods such as powered air conditioning. But with few exceptions, over the long term maintenance costs less than a steady decline in operational efficiency. And if maintaining a piece of equipment (IT, cooling or power distribution) costs too much, it may be better to replace it than to keep servicing it.
What Requires Maintenance?
In a word: everything. Some systems, however, require less maintenance than others. Transformers, power distribution units (PDUs) and air and water distribution systems need relatively little, while equipment such as CRAC units, fire suppression systems, chillers and generators demand a high level of service. Other equipment, such as next-generation uninterruptible power supplies (UPS), may require only a medium level. But every system in the data center requires some level of maintenance, and this applies equally to dedicated servers, storage, networking and power equipment.
In some data centers, certain areas routinely suffer from a lack of maintenance attention: switching equipment, circuit breakers, ATS (automatic transfer switches) and PDUs, as well as critical systems such as UPS units, batteries, cooling equipment, air conditioning and ventilation.
There are, however, tactics for quickly and easily identifying problems before they cause downtime. One example is thermography, based on infrared scanning, which localizes the sources of a range of problems.
Infrared scanning can detect areas of unusually high temperature, which cause component deterioration, as well as bad electrical connections resulting from vibration, improper torque and other hidden faults. This allows data center administrators to find and fix an issue before it becomes an IT equipment availability problem.
The Use Of Computational Fluid Dynamics (CFD)
Computational Fluid Dynamics (CFD) allows a data center administrator, using appropriate software or a third-party service provider, to model air flow and temperature distribution in an existing data center.
With this information you can adjust the cooling system to minimize the influence of "hot spots" and other thermal problems, which over time can damage equipment and bring it to a halt. Although CFD can be costly, service and software providers usually offer various options to data center operators, and CFD does not have to be an "everyday" service; it is best thought of as a performance optimization method.
Simple Maintenance Procedures
Simple maintenance procedures can prevent problems if attention is paid to a few commonly overlooked areas. A frequent issue such as running out of free disk space, for example, is easily prevented, yet depending on where it occurs it can bring applications to a standstill. In such cases, measures as simple as regular monitoring and periodic disk space checks may be enough to head off more serious problems. In other words, not every aspect of data center maintenance has to be complex and costly; sometimes short, regular checks are quite sufficient.
Maintenance should be among the top three priorities for a data center administrator; in other words, a high priority. Data center managers of course face many challenges, from coordinating management and technical personnel to planning and monitoring equipment upgrades and daily operations. But despite all these responsibilities, maintenance is one task that should not be allowed to suffer, and with a well-thought-out maintenance strategy and schedule, it will not have to.
Here is a broad outline of the zones a data center administrator should focus on when planning and executing maintenance tasks:
Given that lack of maintenance is behind a third to a half of all data center downtime, who can afford to ignore it? Equipment maintenance may seem a tedious and sometimes pointless exercise, but carried out regularly it more than earns its keep. Regular, comprehensive maintenance also increases the effectiveness of data center systems, and its benefits go beyond stress reduction and the cost savings of running a facility with fewer unplanned outages. The benefits of investing in maintenance are well worth the cost.
This article is about the development of data centers and cloud computing in the country. In other articles we have seen the extensive use of cloud computing and big data in the latest database technologies; we have read about measures that strengthen the presence of data centers in our region, about the fundamental basis for developing cloud computing services, and about the new energy efficiency standards referenced in the management of such facilities.
Building a data center to these efficiency criteria, as set out in the following paragraphs, fits perfectly with the objectives of innovation, economic growth and competitiveness identified in the program's development strategy.
This can help the territory grow economically, and grow smartly, because it will have a direct impact on the rapidly expanding services available, improving computer literacy, skills and inclusion in the digital world.
The design of a data center in line with identified best practices rests on the four pillars of the Digital Market:
Digital Single Market: bringing benefits in terms of access to content, simplifying cross-border online transactions and improving the perceived quality of digital services.
Trust and Security: covering the management of data and transactions and respect for the fundamental right to privacy.
Interoperability and Standards: driving the adoption and diffusion of technologies and best practices that are reproducible in other contexts.
Research and Innovation: matching the investment objectives for these technologies and providing a significant boost to innovation through the experimentation of new solutions.
Special attention is also paid to data center reliability requirements, for which the ANSI/TIA-942 standard identifies four classification levels:
Tier I: availability of 99.671%, i.e. up to 28.8 hours of service downtime per year. At this level, power and cooling are delivered over a single path and components are not redundant.
Tier II: availability of 99.741%, or about 22.7 hours of downtime per year. At this level components are redundant but the delivery path is still single; such data centers include raised floors, UPS units and generators.
Tier III: availability of 99.982%, or about 95 minutes of downtime per year. At this level there are multiple power and cooling paths, but only one is active at a time; capacity is sufficient to carry the load with a single path in operation.
Tier IV: availability of 99.995%, or about 26 minutes of downtime per year. At this level all components are fault-tolerant and the cooling paths are multiple, independent and simultaneously active.
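The downtime figures quoted for each tier follow directly from the availability percentages. A quick sketch of the conversion (8,760 hours per year):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_minutes(availability_percent: float) -> float:
    """Convert an availability percentage into minutes of downtime per year."""
    return (1 - availability_percent / 100) * HOURS_PER_YEAR * 60

for tier, availability in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    minutes = annual_downtime_minutes(availability)
    print(f"Tier {tier}: {availability}% -> {minutes / 60:.1f} h ({minutes:.0f} min) per year")
```

Note that the jump from Tier II to Tier III is enormous: from roughly 22.7 hours of downtime per year to roughly an hour and a half.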
Our goal in building the next data center will be to reach this fourth level, while achieving the best possible ratio of total data center power consumption to the power actually used by the IT equipment (the so-called PUE, Power Usage Effectiveness).
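PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment, with 1.0 as the unreachable ideal. A minimal sketch, with hypothetical meter readings:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 1500 kW at the utility meter, 1000 kW at the IT load
print(pue(1500, 1000))  # 1.5
```

Everything above 1.0 is overhead, mostly cooling and power distribution losses.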
In a sense, there is no such thing as a "small" or "big" outage when a company depends heavily on its data center for its core business functions. Even a brief or limited disruption can affect revenue and reputation. You may be ready for a serious outage, but are you equally ready for a less serious one?
The Causes Of Disruptions Of The Data Center
Data centers are complex, highly interconnected systems in which a large number of subsystems must function properly for the facility to deliver its services. Unfortunately, this all too often means that one seemingly minor mistake, accident or event can bring the whole system to a sudden stop.
Take the EPO (emergency power off) button: it takes only one employee mistaking it for a door-release button to disconnect the entire data center from its power supply. Wikipedia was recently cut off from the Internet by a fiber optic cable break at its data center, and during the Olympic Games Twitter stopped working for a while because of a system failure (and, interestingly, an almost simultaneous failure of the backup system) in its data center.
From all this we can conclude that it takes far less than a hurricane, an earthquake, a grid failure or a malicious attack to stop a data center. And if a company's core business functions (e.g., retail sales over the Internet) depend on the data center, every minute of downtime equates to lost revenue. Moreover, customers who visit your site or otherwise try to reach your services and get an error message can simply turn to a competing provider or retailer; in that case you lose not just one transaction but the whole customer, along with all the revenue they might have brought. And do not expect customers to shrug off a service interruption: most will not long tolerate services being unavailable at the moment they need them.
"The Service Disruption Continuum": a destructive event need not be a serious accident that destroys your business. It may be a relatively minor network card malfunction, or something as devastating as a sudden regional disaster that not only destroys your data center but also damages nearby roads, bridges and other infrastructure.
Preparations For Minor Interruptions
No system is foolproof, and things can and do fail. Probability theory tells us that, in time, any data center, whatever its level of protection, may suffer damage. You should of course take every possible measure to prevent outages, for example by deploying redundant systems and components so that the failure of a single element cannot bring everything down, but you also need a plan of action for when a disruption does occur. In many cases the difference between a "big" and a "small" outage is negligible; in other cases it is not.
For example, a failure in which services remain available but load very slowly may be no better, and perhaps worse, than a full-scale outage. You probably know how unpleasant a slow-loading website is: you waste time and then close the window in frustration. The procedures for a small outage may therefore resemble the plan for a longer one. In any case, what matters most is the preparation that will minimize damage to the business. Here are a few tips.
Advance Planning: perhaps the most important step for quick recovery from an outage, large or small. If you only start drafting a plan once the power has gone out, you are already at a disadvantage. Designate in advance the people to be contacted when an event occurs (this may even vary with the extent of the failure). Develop procedures for identifying and correcting problems. Keep a list of the service providers you will need to call if particular systems fail, such as uninterruptible power supplies. Most importantly, organize all this information and keep it somewhere easily accessible to anyone who may need it. Planning ahead lets you quickly resume operation of the data center, and of the business.
Backing Up Your Data: most people regard insurance policies as an unnecessary cost that brings no benefit, but when disaster strikes, the policy more than pays for itself. The same applies to backups of critical data: the procedure seems a waste of time and money right up until data is lost, at which point every backup is fully justified. Regular backups, however, must be made during normal system operation; they are a useless or nearly useless exercise once the outage has already occurred.
Deploying Data Center Infrastructure Management / Monitoring: the key to keeping an outage short is detecting it and its causes quickly, and you can hardly do that with a flashlight and a multimeter. You need central access to system information and status so that you can rapidly identify and locate problem areas.
Tracking Peak Data Center Usage: periods of maximum or peak usage can be the best time to find potential problems before they cause an outage. It is also the time when you should be most prepared for one.
Preparing for brief disruptions is much like preparing for a prolonged one. A momentary interruption may have a smaller impact on the business, but it still needs to be addressed so that it does not grow into something more serious; smaller outages can also signal a deeper problem that may eventually lead to downtime. In any case, take steps to prepare for disruptions: they can occur in any data center, but preparing for them in advance will protect your revenue and your reputation in the eyes of customers.
Over the years I have seen many data centers, and that has taught me a great deal. Let me share three things that have always seemed to confuse laypeople, and that have always produced surprised faces! If you have other suggestions, add them in the comments.
Security For Entry Into A Data Center
A data center is a physical place that must first of all be secure: the servers and other devices inside are in operation and must not stop for any reason, which implies a whole set of safety rules. Data centers are monitored (or should be), often by armed guards, both outside and inside. Access is regulated very precisely: each badge is assigned to an individual, and the data center itself is compartmentalized, so an employee responsible for a particular service can access only certain rooms of the DC, not all of them. Every entry is logged.
In many data centers automation plays a central role on the security front: Level3, for example, staffs a NOC to monitor its data centers, everything requires authorization, and the cameras and the hand or thumb scanners at the room entrances are controlled externally.
Why this level of security? The data and information held in a data center are very valuable, and in the case of financial or government data that value is of course even higher; like a bank, a data center therefore maintains very high security standards.
The Data Center Is Interconnected With Several Operators And “Talk” With The Rest Of The World
Within a data center, several network operators (national and international) usually converge. This ensures that the data center is connected to the outside world over multiple lines and can communicate through multiple operators: there are dedicated channels as well as channels leading to interchange points for effective peering of bandwidth.
Depending on the size of the ISP that manages the data center and the nature of the DC (whether it belongs to an ISP or a telecommunications provider), the bandwidth can vary from a few hundred Mbit/s to several Gbit/s of traffic capacity to the outside. If you want an idea of the traffic involved, the chart below will help; for a not-too-large operator, the figure might average around 1 or 2 Gbit/s.
The bandwidth consumed obviously has a cost for the ISP and is therefore managed carefully, usually by a department specifically in charge of the data center network. This department also designs and optimizes how the data center reaches certain foreign countries: some customers require low latency to particular destinations around the world, and meeting that often involves considerable work acquiring circuits from other operators and tuning the routing on the network.
The Biggest Cost Of A Data Center Is Often Cooling
For every dollar spent to power a data center, a good proportion goes to cooling it, even for operators striving for the best possible efficiency. Cooling is a major item in the energy consumption of such a facility: all servers and other systems must stay within a controlled temperature range, which varies with the technology and strategies employed by the data center operator.
If the temperature rises too high, it can seriously damage not only dedicated web servers but other equipment as well, in severe cases forcing machines to be switched off when, as noted, they must remain operational 24×7. What temperatures does a data center in India face? As noted, it depends on many factors; summer averages run around 32-40 degrees Celsius, and the figure varies with the DC's location and the strategies chosen by the provider.
ESDS Data Center regards generators as a key element of data center reliability. Alongside a battery-backed UPS line, an emergency generator is a solution every data center operator should consider. The question has become increasingly important at a time when major natural events such as Sandy have damaged power lines and caused prolonged blackouts. Below are some questions to consider before, during and after the installation of a generator.
What should we consider before installing a generator in a data center?
Type And Rating Of The Generator: Will the generator be classified as a required power source or as optional standby?
Size Of The Generator: When sizing a generator, consider the total power load plus the expected growth of that load.
Fuel Type: Will it be a diesel or a gas generator? There are pros and cons to both.
Installation Location: Where will the generator be installed? Will the installation be indoors or outdoors?
Emissions Requirements: What are the exhaust emission limits in your area?
Required Runtime: What is the expected runtime for the generator system? How much fuel must be kept on hand to cover that required runtime?
Commissioning: What is your startup plan? What types of operations are required?
Load Testing: How will load testing be performed? Do you have access to a non-linear load bank with appropriate power factors to test the generator?
Inspection: What will the inspection schedule for the generator be?
Service Agreement Of The Generator:
Preventive Maintenance: Preventive maintenance should be performed at least twice a year. If you see the installation of the generator as a critical task for business, it is best to plan a maintenance program with fixed deadlines.
Monitoring: How is the generator connected to the monitoring system? Who monitors the system for possible failures of the generator and the ATS?
Regular Testing And Maintenance: How often must the generator be tested and maintained? How should that testing be done?
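Two items in the checklist above, generator sizing and on-hand fuel, reduce to simple arithmetic. The following is a minimal sketch; the load figures, the 25% headroom, and the 0.3 L/kWh diesel consumption rate are illustrative assumptions, not values from the checklist.

```python
def generator_size_kw(current_load_kw, expected_growth_kw, headroom=0.25):
    """Capacity needed: present load plus planned growth, with safety headroom.
    The 25% headroom is an example figure, not a standard."""
    return (current_load_kw + expected_growth_kw) * (1 + headroom)

def fuel_needed_litres(load_kw, hours, litres_per_kwh=0.3):
    """Diesel to keep on hand for the required run time.
    0.3 L/kWh is a rough assumed figure for a diesel generator set."""
    return load_kw * hours * litres_per_kwh

# Example: a 400 kW load expected to grow by 100 kW, with a 48-hour run-time target.
size = generator_size_kw(400, 100)      # 625 kW
fuel = fuel_needed_litres(500, 48)      # about 7200 litres
print(size, fuel)
```

The point of the sketch is only that both questions should be answered with numbers, not guesses, before the purchase order is written.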
Most western data center operators take the total capacity available to the data center, subtract the losses in power distribution and the power consumed by mechanical cooling systems, and then reduce the result by at least 10-20% to protect against the risk of exceeding the maximum permissible value; what remains is the energy allocated to the IT load. Such an approach can mean that more power is set aside than the IT load ever actually draws.
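The budgeting described above can be sketched as a single subtraction chain. All the numbers below are invented for illustration; only the structure (losses out, cooling out, then a 10-20% margin) comes from the text.

```python
def it_load_budget_kw(total_kw, distribution_loss_kw, cooling_kw, margin=0.15):
    """Power left for IT equipment after distribution losses, cooling,
    and a safety margin (10-20% in the text; 15% assumed here)."""
    usable = total_kw - distribution_loss_kw - cooling_kw
    return usable * (1 - margin)

# Example: a 2 MW facility with 100 kW distribution loss and 700 kW of cooling.
budget = it_load_budget_kw(total_kw=2000, distribution_loss_kw=100,
                           cooling_kw=700, margin=0.15)
print(budget)  # 1020.0 kW allocated to servers
```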
The main problem is that almost no data center runs at full capacity, and some run at barely 50% of it, since it is unlikely that all servers will run at full load simultaneously. Workloads also peak at different times: even if the load of some services reaches 100%, the peaks usually do not coincide. With this in mind, you can deploy more servers than your available electrical capacity would nominally allow.
This is the same approach airlines use when selling tickets: they may overbook, selling more tickets than there are seats on the aircraft.
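The overbooking argument rests on peaks not coinciding, which is easy to see with numbers. The three hourly load traces below are invented; the point is that the sum of the individual peaks far exceeds the worst simultaneous total.

```python
# Hourly load in kW for three hypothetical services over part of a day.
web   = [30, 20, 10, 10, 20, 60, 80, 70]
batch = [70, 80, 60, 40, 20, 10, 10, 20]
mail  = [10, 10, 20, 30, 40, 30, 20, 10]

# Provisioning for every service's peak at once would demand 200 kW...
sum_of_peaks = max(web) + max(batch) + max(mail)

# ...but the worst hour actually seen never exceeds 110 kW.
simultaneous = max(w + b + m for w, b, m in zip(web, batch, mail))

print(sum_of_peaks, simultaneous)  # 200 vs 110
```

The gap between the two figures is the capacity an operator can "oversell", at the risk, like the airlines, of an occasional hour when everyone shows up.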
Here are three ways to address the growth of power consumption in data centers:
The last option is a favorite topic of research, but it is almost never used in practice, because it is equivalent to solving the overselling problem by seating two passengers in one seat. In some ways it works, but it is unsafe and does not make customers happy. Option 3 reduces the resources available to all workloads by lowering the overall quality of service; for most commercial organizations that is not a good economic decision. Options 1 and 2 can be considered the best.
One class of applications that is hard to make energy-efficient is interactive, data-intensive workloads. Internet search, advertising and machine translation are examples of this type of workload. These workloads can be very profitable, so option 3 above, reducing the quality of service, cannot be economically justified.
The best solution for these workloads could be energy-proportional computing. In essence, the goal of energy proportionality is that a server running at 10% load should consume 10% of the power of a server running at full load. Of course, there are overheads, and this goal will never be fully achieved, but the closer we get to it, the lower the cost and the environmental impact of running standard workloads.
The good news is that some progress has been made in this direction. When energy-proportional computing was first proposed, many idle servers consumed 80% of the power they drew at full load. Today a good server can cut its consumption to 45% when idle. We have not reached the goal, but we are making good progress. In fact, the CPU is very energy-efficient by today's standards; the largest consumers of electricity are the other components of the server. Memory is a great opportunity, and mobile devices show us the limits of what is possible. I hope we will continue to make progress by borrowing ideas from the mobile phone industry and applying them to dedicated servers.
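The proportionality gap described above can be made concrete with a simple linear power model: an idle floor plus a component that scales with utilization. The 300 W peak is an assumed example; the 80% and 45% idle fractions are the figures quoted in the text.

```python
def power_draw(utilization, peak_watts, idle_fraction):
    """Linear model: an idle floor plus a share that scales with load.
    Real server power curves are not perfectly linear; this is a sketch."""
    return peak_watts * (idle_fraction + (1 - idle_fraction) * utilization)

peak  = 300.0  # watts at full load, assumed example server
old   = power_draw(0.10, peak, idle_fraction=0.80)  # older server at 10% load
new   = power_draw(0.10, peak, idle_fraction=0.45)  # modern server at 10% load
ideal = 0.10 * peak                                 # energy-proportional target

print(old, new, ideal)  # roughly 246 W vs 151.5 W vs 30 W
```

Even the modern server draws five times the energy-proportional ideal at 10% load, which is why the remaining components, memory above all, matter so much.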
In Power Management of Online Data-Intensive Services, a group of researchers from Google and the University of Michigan studied the problem of energy proportionality in online data-intensive (OLDI) systems, using workloads such as Google search, advertising and translation. These workloads are difficult because they must meet strict latency requirements, which they do by keeping very large in-memory caches; when the workload drops, the machines must stay up to meet the application's latency requirements. Concentrating the workload on a smaller number of servers is not an option: the size of the cache requires that all servers remain accessible, so when the workload drops, every server must still carry part of it, and the system as a whole cannot go into a low-power mode.
The size of the cached data set requires all the servers to be in use, so when the workload decreases, the load on each server decreases proportionally, but no server ever goes into standby. They must always be on and ready to handle requests within the required latency.
The low-power modes provided by the CPU may be the best, and currently the only, mechanism for balancing power and performance, but by themselves they cannot achieve energy proportionality.
Low-power modes for shared caches and integrated memory controllers need to improve during idle periods. There is also a great opportunity to save energy in system memory using low-power modes [mobile systems already do this well, so the techniques are available].
Even with request batching, putting the entire system into a low-power mode during idle periods cannot provide an acceptable balance between latency and power consumption. With a coordinated approach, moving the whole system into an active low-power mode is the most promising way to achieve energy-proportional consumption while keeping request latency acceptable.
To generalize about these online data-intensive (OLDI) workloads: the required latency is achieved by spreading a very large cache across the running servers. When the workload drops from its maximum to its minimum, all of these servers become less loaded, but none of them goes into standby, so the system as a whole cannot be put into a low-power mode.
I like to picture the servers supporting these workloads as a two-dimensional grid. Each row holds one complete copy of the cache, distributed across hundreds of servers. One row can serve a certain amount of this workload while meeting the application's latency requirements, but a single row cannot grow. To handle more workload than one row can carry, additional rows are required. When the system processes a query, the query is sent not to every server in the data center, but only to the servers in a single row.
This method of scaling at the row level gives almost complete proportionality at the overall data center level, apart from two problems:
If the workload is much larger than one row and varies predictably between its minimum and maximum, row-level scaling works very well. It does not work when the workload varies sharply, or when you need to scale below a single row.
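The row-level scaling idea above can be sketched in a few lines: capacity is added a whole row (a complete cache replica) at a time, and each query fans out to every server in exactly one row. All names and figures here are illustrative assumptions, not details from the paper.

```python
import math

SERVERS_PER_ROW = 100  # one complete cache copy spread across a row (assumed)

def rows_needed(offered_load_qps, row_capacity_qps):
    """Capacity grows in whole rows, never fractional ones."""
    return math.ceil(offered_load_qps / row_capacity_qps)

def route(query_id, active_rows):
    """Send a query to all servers in exactly one row (round-robin by id)."""
    row = query_id % active_rows
    return [f"row{row}-server{i}" for i in range(SERVERS_PER_ROW)]

# Example: 25,000 QPS offered against rows that each handle 10,000 QPS.
rows = rows_needed(offered_load_qps=25_000, row_capacity_qps=10_000)  # 3 rows
targets = route(query_id=42, active_rows=rows)
print(rows, len(targets))
```

The `math.ceil` is where both problems in the text live: a load of 25,000 QPS pays for three full rows, and a load smaller than one row still pays for an entire one.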
Storing files in the cloud is already common practice among web users. Google, for example, lets you save documents, pictures and other data and reach them from anywhere with an internet connection. Starting to edit a Word document at home and finishing it at work has become simpler: just send it to a server and access it there, with no need to walk around the company carrying pen drives.
In January, when Megaupload was shut down by U.S. authorities, many users felt hurt. Not because copyrighted files were taken off the network, but because the service was also used by legitimate users to store information, much like Google's services. All of a sudden, millions of documents and photos were lost when access to Megaupload was cut off, and with it a cloud storage option.
This event raised a question: how safe are files stored in the cloud? Is there any guarantee that they will not be lost, and if a cloud service is taken offline, what can users do to recover their files?
However many sites and servers are shut down, the cloud as a whole is hardly impaired. Cloud computing, like the internet itself, is not a system that depends on a single connection. The scenario in which every cloud disappears is practically impossible.
A single server can fail, but, according to experts, the big cloud computing companies plan for possible disasters. Large corporations maintain backup and disaster recovery plans for floods, earthquakes and other natural disasters. So even if more than one server is damaged, the files are also stored on other servers, preventing users from losing their data.
Another concern is the safety of the cloud itself. Not everyone feels confident leaving important files with personal information in a place they cannot locate exactly, even if they know how to access it easily. The biggest risk is not the confidentiality of the data but the possibility of losing it. If you rely on a single provider, you need evidence that they keep an offline copy of your data.
My recommendation is to keep a second copy of your files elsewhere. Saving the data on your own computer, for example, is a good way to avoid losing important files.
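The advice above can be as simple as mirroring a folder onto your own disk. A minimal sketch, using Python's standard `shutil` module; the paths here are temporary stand-ins for real document and backup folders.

```python
import pathlib
import shutil
import tempfile

def backup(src: pathlib.Path, dst: pathlib.Path) -> None:
    """Mirror src into dst, overwriting files that already exist there.
    Requires Python 3.8+ for dirs_exist_ok."""
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Demonstration with temporary directories standing in for real folders.
src = pathlib.Path(tempfile.mkdtemp()) / "documents"
src.mkdir()
(src / "report.txt").write_text("draft")

dst = pathlib.Path(tempfile.mkdtemp()) / "local-copy"
backup(src, dst)
print((dst / "report.txt").read_text())  # draft
```

Run periodically (or from a scheduler), this gives you a copy that survives even if the cloud provider disappears overnight, as Megaupload's users learned.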
As reported in an article published in the New York Times, researchers conducted an experiment to find out whether wireless devices can be used for communication between dedicated servers in the data centers of the future.
The researchers set out to discover whether wireless systems can speed up traffic between data center servers when the basic (cabled) network becomes overloaded in the near future. Several server racks in one of the centers were equipped with small directional antennas for wireless transmission and with switching devices installed on top of the racks.
The communications used extremely short radio waves: a frequency of 60 GHz and a wavelength of about 5 millimeters. According to the researchers, this wireless system significantly accelerated communication between the racks, by 45% to 95% depending on the specific experimental conditions.
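The two figures quoted above are consistent with each other, since wavelength is just the speed of light divided by frequency. A quick check:

```python
# lambda = c / f: at 60 GHz the wavelength is about 5 mm.
c = 299_792_458          # speed of light in vacuum, m/s
f = 60e9                 # 60 GHz
wavelength_mm = c / f * 1000
print(round(wavelength_mm, 2))  # ~5.0 mm
```

Such short wavelengths are what make the small, highly directional rack-top antennas practical.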
It is well known that wireless communications are often not completely reliable. Communication can be interrupted, for example, by someone switching on a microwave oven, or by poor reception conditions at a particular point. So it is not certain that this idea will be accepted by most data center operators. Nevertheless, the situation in a data center is radically different from the familiar one of mobile phones with unstable Wi-Fi connections.
The fact is that everything inside a data center is under strict control: all the processes that occur there are well predicted, and the equipment is serviced by staff to keep services uninterrupted. In addition, this system uses directional antennas, i.e. the switching devices communicate through narrow beams of radio waves.