About : ESDS Software Solution Pvt. Ltd.
Date Of Incorporation : 2005
CEO / MD : Mr. Piyush P. Somani
Recently, Mr. Piyush Somani, CEO of ESDS Software Solution Pvt. Ltd., was interviewed by a leading media channel in India and was asked a couple of questions about the incorporation of ESDS Software Solution Pvt. Ltd. and the basic idea behind setting up its much-awaited fully managed data center in India.
1. What inspired you to come up with a Datacenter in India?
Ans. Our success story started in the United States in 2003 and then extended to the UK in 2007. The American market became saturated later on, but the UK market started picking up very well for us from 2007. We were competing against some world-class datacenters and hosting companies in the US and UK, where customer service and the technical knowledge of staff members were extremely important. There was no datacenter or hosting provider in India offering that kind of service, so we decided to start our own datacenters in India based on the experience and knowledge gained from our US and UK hosting business. I am glad to say that this decision by our management team was absolutely right.
India is now going through growth in the datacenter and hosting industry similar to what the UK experienced from 2004. India will be the biggest cloud hosting center in the world by the end of 2015. We would like to make the most of this opportunity and make our customers feel proud to be associated with us.
2. How do you differentiate your datacenter from the other datacenters in India?
Ans:- We pride ourselves on our post-sales service and on the technology we have deployed for the comfort of our customers. We have done tremendous research over the last couple of years to offer automation to our customers. Though we offer fully managed services, some basic automation helps customers sort out simple things from their web-based control panel. Minor requirements like assigning more IPs to your server, rebooting it, or adding and changing its rDNS or forward DNS entries should be available through the control panel. Automation of backups and disaster recovery, real-time online upgrades of your CPU, RAM or disk space, online monitoring of your resource usage, and live monitoring of servers and all services running on them are extremely important for customers. We have done exactly that, to let our customers make the most of our product offerings. Our eMagic portal is extremely beneficial to large customers hosting multiple servers with us.
We believe in complete transparency with our customers, and our automation delivers it. Customers are notified by email as well as mobile messages in case of any technical problem with their servers. Customers can configure monitoring alerts, disk status alerts and resource usage alerts from their control panel.
3. Why should one outsource hosting requirements to you rather than host servers in one's own office or set up one's own datacenter?
Ans:- People think that colocation centers and datacenters like ESDS can eliminate the need for in-house IT staff, but this assumption is absolutely wrong. Your in-house IT staff should be focused on the advancement and automation of your processes, but most of the time they end up troubleshooting hardware and system-level problems on the servers you host within your facility. They could have used that valuable time evaluating new modules or features in your ERP / SAP / portal / online banking system. Moreover, your investment will be huge if you go for an in-house datacenter or host your servers within your office. You may not manage to get bandwidth from multiple ISPs, which is extremely important for 100% network uptime.
You may not find it feasible to go for N + N redundancy in your power and network equipment. Uptime is extremely important for every online business. Consider the loss of revenue if your ERP / SAP / email server goes down for an hour on a busy Monday afternoon. Our datacenter is a Tier III facility, and we have seen all sorts of problems experienced by datacenters over the last 7 years. We designed our first datacenter to avoid all the problems we had seen in our US and UK datacenter facilities. Our first datacenter in Nashik stands on solid basalt rock, which eliminates any risk from earthquakes, though Nashik has never had a major earthquake in its history. It is also built on higher ground to avoid any problems from floods.
All 3 ISPs connecting to our datacenter have their fibers in ring networks coming from different locations. Our power control panels have been designed to avoid any downtime due to failure of any single component at any time. We have a dual-door biometric access control system for our office and datacenter building, and at least 7 armed security personnel on campus at any point in time to further strengthen the security of our datacenter and office area.
4. I heard you say your datacenter is the first green datacenter in India. How do you define a green datacenter?
Ans:- We have taken special care to reduce our sensible heat load with the help of Panasia Engineering from Mumbai. Our datacenter floor is sandwiched between our office floor and a floor above reserved for expansion. The datacenter has dual cavity walls, with vermiculite plaster inside and out and fireproof insulation between the cavities to further reduce the sensible heat load. The walls are built with fireproof, heat-resistant fly-ash bricks, and the floor height is 14 feet. We have an STP and a rainwater harvesting system on our campus, and we plant 2 trees for every hosted server during the rainy season. Tree plantation helps us nullify our carbon footprint. We already have 30 fully grown trees on our office campus, and the local corporation will provide us with space every year during the monsoon for planting thousands of trees. ESDS will also maintain these trees to ensure they grow big enough to survive on their own.
5.Do you think Indian Datacenters can compete with the Datacenters in the West?
Ans:- To be honest, Indian datacenters won't be able to compete with those in the US or UK before the end of 2011. Bandwidth prices are too high in India; existing prices need to come down by more than 70% for Indian datacenters to compete with those in the US, UK and Netherlands. Every office in the Netherlands is connected on a 10 Gbps port and every home on a 1 Gbps port. Here in India, most large corporate offices with more than 500 staff members still run on 2 Mbps or 4 Mbps of bandwidth. India ranks 133rd among countries by quality of internet connectivity. People need to change their mindset, and the ISPs also need to stop worrying about revenue growth for another year during this transition phase. There's a major problem with the way Indian customers think: if you offer a free upgrade to 10 Mbps to a 2 Mbps user, he will consider downgrading back to 2 Mbps at the new pricing to save cost on bandwidth.
Large organizations need to calculate how much time their staff members lose when it takes 2 to 5 minutes for them to load a website or access their mailbox through webmail. TRAI should revise bandwidth pricing as soon as possible, because the biggest government organization capable of bringing a revolution in India's internet growth has backed out of the competition. If BSNL can't offer bandwidth to datacenters or large corporate offices, things will take a very long time to improve in India.
6. Where do you find ESDS in comparison with other Indian datacenters?
Ans:- We are not here to compete with any other datacenter. Our product offerings and target customer base are completely different from what other datacenters offer. We are mainly focused on fully managed services for customers who want a trouble-free hosting experience. We manage and monitor servers proactively to help our customers stay focused on their business. We have a fantastic relationship with all the other Indian datacenters: we host some of our DR sites with them and they host some of theirs with us. Demand for datacenters is growing day by day in India, and the data floor space available right now will be completely filled within a year. Countries like the US and UK have more than 100 times the datacenter space of India. We are nowhere in comparison with the developed countries, and the internet revolution is yet to come in India. Companies like ThePlanet, TelecityGroup, Phoenix One, NAP, the NGD group and Google are yet to set up their datacenters in India.
India ranks second in Google traffic and we generate the maximum unique hits to Google websites, yet Google has not set up datacenters in India, as there is no special provision here to differentiate local traffic from international traffic. The UK has LINX (the London Internet Exchange), which helps local ISPs and datacenters exchange their local traffic; unfortunately, our concept based on a similar model hasn't clicked so far. Private companies should be allowed to set up internet exchange hubs to promote the exchange of domestic traffic at the nearest point.
No internet exchange center in India can succeed if domestic datacenters are not allowed to become members. The London Internet Exchange has been the most successful internet exchange hub in the world, and there's no harm in following the same model as-is within India. We will need dozens of internet exchange hubs in India, as our internet network is far more complex than any other country's.
7. Where do you see ESDS in next 5 years?
Ans:- ESDS will become the biggest datacenter service provider in India by the end of 2015. We are mainly focused on offering managed services, and we would not like to turn our company into a power supply company charging customers for their power usage. We will set up datacenters in many different cities in India; Indore and Delhi top the list of probable cities. We also plan a datacenter in the UK, but actual work will start only in 2011. Our in-house R&D team is coming up with many unique ideas that will bring revolutionary growth for our company. No matter where we reach in the next 5 years, we will always remain a customer-service-oriented company, and our customers will always feel proud to be associated with us. Our list of first customers includes the suppliers of the generators, UPS units and precision air conditioners that went into the making of our datacenter. They chose ESDS ahead of all the other datacenters they have worked with, which shows how ESDS is different from any other datacenter in India.
8. Why did you opt for Nashik?
Ans:- Nashik is the place where I started my outsourced hosting support business in 2003. Staff availability was fantastic, and the number of engineering and MBA colleges kept increasing every year. The best thing about Nashik is staff retention; it is extremely important to retain good people in a hosting business. Datacenters in other cities have high churn, which affects their service. I always believed in improving service levels to stay ahead of the competition, and this was possible only in Nashik. Another good thing about Nashik is the moderate climate. Nashik has never had any history of major earthquakes, and we've never had flood problems like some other metros in India.
9. Security is vital in this industry. Is ESDS geared for it?
Ans:- Yes. ESDS is an ultra-modern datacenter with the best technology in India deployed for internal as well as external security. Our team of system administrators and network administrators has expertise in securing servers, the add-on software installed on them, and the network. We have deployed Cisco firewalls and the Cisco Anomaly Detection Guard for DDoS prevention and suppression of malicious traffic. Our datacenter has three-layer security to bar access by any unwanted person or group. We host some large organizations from India as well as other countries, and ESDS understands the importance of security for such customers' businesses.
10. What are the major services that you offer?
Ans:- We have expertise in managed hosting and datacenter services. We offer dedicated server hosting, hosted ERP and hosted SAP services, cloud hosting on the VMware and Microsoft Hyper-V platforms, disaster recovery solutions on IBM Tivoli and R1Soft (backup and DR), hosted email on Zimbra and MS Exchange, global load balancing, DNS load balancing, mirroring, large data storage hosting up to 5 PB capacity with a 100% data protection guarantee, hosted banking services and colocation. Our 6 years of experience in this field have taught us every kind of problem that can arise in a datacenter.
We have seen many different technical problems in our datacenters in the UK as well as the US, and we designed our Indian datacenter to avoid all such problems. Uptime and world-class customer service differentiate ESDS from other datacenters in India.
11. Cloud computing is the buzzword. What is ESDS's role in this segment?
Ans:- ESDS introduced its mirroring and cloud hosting service in 2006. We have expertise in cloud computing technology, and we are the only company to offer a 100% uptime guarantee on cloud hosting. Our cloud offerings have become extremely popular in the last 12 months, and today we lead in this technology on the VMware and Microsoft Hyper-V platforms. The CEOs of all the big companies, including Microsoft, have predicted that India will become the biggest cloud hosting center in the world by the end of 2015. India has a huge opportunity to lead the hosting world, and ESDS would love to play a lead role in promoting cloud hosting services.
12. What about your technology expertise?
Ans. Most of our senior system administrators and network administrators have been working with us for the last 4 to 5 years. Our staff has improved a lot in the last couple of years, and we understand the importance of adapting to new technology. Some of the staff members who joined recently have come up with many new concepts, which has helped growth and increased internal competition. I am very impressed with the technical expertise of some of the new engineers we hired recently.
We are all improving our technical knowledge to stay ahead of the new generation looking to overtake us. You must have seen kids at home operating computers better than you; the same thing is happening in the IT industry. The coming generation is blessed with technology, and the only way for us to meet their challenge is to be more disciplined and smart.
13. Who are your major Technology Partners?
Ans. Microsoft, Intel and IBM are our major technology partners in India. Dell, cPanel, VMware and Parallels are our major technology partners in the UK and US. Cisco is our sole partner for network hardware and technology, while TCL, Reliance and Airtel are the premium partners for our internet backbone. We have started relationships with some new partners, including Zimbra and R1Soft (backup and disaster recovery). I would like to acknowledge the support of all our partners in our growth; they have helped us a lot to grow at this rate, even though we compete with most of them in the hosting market.
14. A lot of greenery is visible around the office including many ponds and a swimming pool. Any particular reason?
Ans. We are committed to making our datacenter carbon neutral. All our servers in the UK, US and India together use more than 4 MVA of power, which has been a major concern for us. We have planted only 1,500 trees in Scotland and 500 trees so far in India; we need to plant 10,000 more to offset our carbon emissions. We are committed to planting 2 trees for every server we host in India. We also plan to implement a unique cooling solution in the next phase of our datacenter in India: heat generated in that phase will be passed on to the 6 lakh (600,000) liters of water we have on our campus. Water has a useful property here: it can hold roughly twice as much heat as an equal mass of typical solid material, and it takes a long time to heat 6 lakh liters of it. Evaporation starts once the water surface reaches 28°C, and the ambient temperature in Nashik always stays below 24°C at night, which helps cool the water without losing much of it to evaporation. It took 2 months in summer for the temperature of the 6 lakh liters to rise to 27°C. I am sure this unique technology, which we will deploy in our next phase, will start a revolution in the datacenter industry. Datacenters near rivers or the sea can easily deploy this technology and help our country by reducing the power they consume for cooling.
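As a rough sanity check on the thermal-mass idea above, the energy needed to warm that volume of water can be estimated with Q = m·c·ΔT. The figures below combine the interview's numbers (600,000 liters, a 24°C to 27°C rise) with standard physical constants; the result is purely illustrative, not ESDS's actual engineering data.

```python
# Estimate the heat required to warm the campus water reservoir.
# Assumptions: 6 lakh litres of water at ~1 kg per litre,
# specific heat of water c = 4186 J/(kg*K), rise of 24 -> 27 C.

mass_kg = 600_000            # 600,000 L of water, ~1 kg per litre
c_water = 4186               # J/(kg*K), specific heat capacity of water
delta_t = 27 - 24            # K, the rise mentioned in the interview

energy_joules = mass_kg * c_water * delta_t
energy_kwh = energy_joules / 3.6e6     # 1 kWh = 3.6e6 J
print(f"{energy_joules:.3e} J ~= {energy_kwh:.0f} kWh")

# How long a steady 1 MW heat load would take to deliver that energy:
hours_at_1mw = energy_joules / 1e6 / 3600
print(f"about {hours_at_1mw:.1f} hours at a steady 1 MW")
```

The ~2 hours at 1 MW suggests the pond works as a thermal buffer that is recharged by nightly cooling and evaporation, rather than as a heat sink on its own.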
15. Does participation in conferences and expos help?
Ans. Yes. Business in India is done in a completely different manner from the UK and US. Indian customers need a personal approach, while English and American customers rarely give time for meetings or come for visits. I was reluctant to send teams to expos, but our BDO insisted, and we are proud to have participated in expos in Mumbai, Delhi, Pune and Bangalore. Most of our business this year has come from that participation. A personal approach is the key to success in India. ESDS has started branch offices in Delhi, Mumbai and Pune to have a local presence in these important cities, and we are opening offices in Bangalore and Cochin to focus on the hosting market in southern India. Our approach to new customers is very simple: we guarantee the best hosting service, and most of our customers never compare us with other hosting providers.
16. What are your expansion plans? Do acquisitions feature in them?
Ans. We plan to start 2 new datacenters in India and one in the UK; Delhi and Indore are the preferred locations in India. Our UK datacenter is coming up in partnership with “DADA.Pro” and “Bluesquare Group”. We have no plans for any large acquisitions as of now. Our source of funding is our profit, and we have not considered any bank loans, FIIs or equity funding so far. Our R&D team is working on the launch of a unique product that will make ESDS the No. 1 datacenter in India.
Novice users rarely stop to think about an important topic: internet traffic. On dial-up they hardly need to, because they pay only for time spent online, not for the volume downloaded.
That is understandable: there is only so much you can download over a speed-limited dial-up line. A telephone line is not meant for downloading movies or music albums; even short songs are a struggle over dial-up.
A dedicated line or a local district network is another matter.
Once a home user gets a dedicated channel, he immediately moves into a different consumer category, because his web consumption grows by orders of magnitude. How much do you think an ordinary internet user downloads to his computer when the internet arrives over a fast local area network? About two hundred megabytes.
And that is without downloading movies, without a fondness for MP3 music (more precisely, without downloading it from the network), and without being a big fan of graphics and video. Two hundred megabytes is the usual thing, spent on the most common sites in the most common quantities.
At that point the internet user has to think about traffic, and most importantly about how much of it is being consumed, because the provider's monthly fee usually includes only a limited number of prepaid, or rather free, megabytes; it all depends on the tariff plan.
Very often the user does not merely exceed the limit but goes far beyond it. Then he begins to treat traffic carefully, especially once he figures out how much it costs.
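A hedged sketch of the arithmetic involved: the tariff figures below (a 200 MB free allowance, a flat fee, and a per-megabyte overage price) are hypothetical, invented only to show how quickly an exceeded limit turns into money.

```python
def monthly_bill(used_mb: float,
                 base_fee: float = 10.0,       # hypothetical flat monthly fee
                 free_mb: float = 200.0,       # hypothetical included traffic
                 price_per_mb: float = 0.05):  # hypothetical overage rate
    """Return the month's total: flat fee plus any overage charge."""
    overage = max(0.0, used_mb - free_mb)
    return base_fee + overage * price_per_mb

print(monthly_bill(180))   # within the allowance: just the flat fee -> 10.0
print(monthly_bill(1200))  # far beyond the limit: overage dominates -> 60.0
```

With these made-up rates, going five times over the allowance already sextuples the bill, which is exactly why users start watching their traffic.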
Do you think that once you put your own site on the internet, you can stop thinking about traffic? Alas, here too you have to keep traffic in mind. First you need a finished website and a hosting package for it.
Even if you do not expect massive traffic at first, and paying for a large hosting package would bring no benefit, you will still want to develop your website, right?
So you start working on it: first you work hard creating new content, and then you fill the site with various downloadable files.
Gradually the site grows; it gains popularity and respectability, and its visits and downloads increase. And then the provider tells you: “Stop!”
That means the hosted website is generating a heavy load: your page traffic no longer fits within the limits of the hosting package you pay two or three hundred bucks a month for. You then have to either rent a server from your ISP or install your own server at the ISP's facility (called colocation hosting). And the term TRAFFIC confronts you in all its inevitability.
Here the problems are the same as with a dedicated home connection: the monthly fee includes a certain amount of traffic (measured in gigabytes now, since it runs through a server rather than your local computer), and as a rule, if the free monthly traffic is exceeded, you pay for the excess.
Some hosting providers prefer to negotiate such conditions in advance, or simply to charge for all traffic, to avoid disagreements and recriminations.
Then a question arises: why, exactly, should you pay for the traffic your site uses? The users who visit your resource already pay their own internet providers, yet you also pay for the traffic that flows between their computers and your site.
So why must you pay for it? It turns out that providers are paid twice: users pay them to download the information on your website, and you pay them because users download data from your page. Taking money twice for the same thing: is that fair? you ask, and in this righteous indignation there is almost one hundred percent truth.
Interestingly, this paradoxical situation has actually come about: providers collect money for traffic from both users and site owners. Of course, one could argue that users pay only for the time they spend on the web, while the website owner pays to keep the site up and running 24/7/365. So perhaps site creators should pay for traffic after all; of that I am deeply convinced.
Setting up a data center is a complicated and lengthy procedure. Would a standard help to streamline the process, reduce the risk of unreliable equipment, and eliminate the need for additional investment in fixing defects?
According to research, the average volume of user data per company worldwide is 120 terabytes. In turn, IDC estimates that last year 35 billion messages were dispatched worldwide every hour. Dividing those figures, one message carries roughly 3.4 KB of information. Of course, most e-mails carry no voluminous attachments, but quite a few do contain samples from corporate databases or presentations with graphic slides and video; such attachments run to tens of megabytes. Preparing those reports consumes powerful computing resources, and the databases themselves are handled by dozens of different applications covering every aspect of the company's activity.
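The per-message figure follows from a simple division of the two numbers quoted above (treating them as directly comparable, as the original text does); note the result comes out in kilobytes, not megabytes.

```python
total_bytes = 120e12          # 120 TB, the quoted average data volume
messages_per_hour = 35e9      # 35 billion messages, the IDC estimate

bytes_per_message = total_bytes / messages_per_hour
print(f"{bytes_per_message:.0f} bytes ~= {bytes_per_message / 1e3:.1f} KB")
# about 3429 bytes, i.e. roughly 3.4 KB per message
```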
Intensive data streams circulating in the information systems of many companies require a particular organization of the IT infrastructure. It must adapt to changing business requirements and, in particular, ensure steadily increasing performance of the deployed solutions while maximizing the effectiveness of their operation.
Concentration of resources
A solution to this problem may be the concentration of computing resources and the distribution of their capacity among applications. This approach is known as “virtualization”. Virtualization of resources has rekindled interest in the concept of centralized computing, and the result of implementing that concept is the data processing center (DPC): a group of premises whose main function is to house processing and storage equipment.
Data center solutions are offered by all the leading manufacturers of computing and communications equipment, as well as of the software that organizes the operation and management of information systems. Typically these solutions incorporate the vendors' latest developments. When computing resources are centralized, special attention should be paid to the IT infrastructure.
The need for a universal and productive data center infrastructure can be illustrated by an analogy with the automotive industry.
The first cars copied the design of horse-drawn carriages, and only after some time were universal principles of component layout worked out, principles which have remained unchanged since. On their basis, car design was continuously improved, the goal being models that function optimally and run more economically.
At the same time, a vehicle consisting of a cart with the most modern car engine mounted on it would obviously fall short of today's performance and economic standards. So it is with data centers: the approaches used to build the computing centers of the sixties and seventies, or even ordinary server rooms, do not suit the equipment of a modern data center.
Data center standards adhere to the basic principle of building the cable infrastructure as a “hierarchical star”
Thus there was a need for a special standard defining the design and construction of data center infrastructure. The modern concept of the DPC is based on defining its functionality through the services implemented by the enterprise's information system, or the services provided to users.
By purpose, data centers can be divided into two groups. The first is private (corporate) data centers, which operate exclusively within a particular company. The second group comprises shared data centers, often owned by internet service providers and used to deliver services such as web hosting, colocation, application rental, e-commerce deployment, and so on.
Both kinds of center are, first of all, premises in which computers and network equipment are installed and in which conditions are created for the continuous, reliable operation of this equipment and of the storage systems. Mandatory prerequisites for the normal functioning of a DPC include uninterrupted power supply, temperature control, network connections within the data center, and connections to the networks of telecommunications service providers.
Currently there are two standards defining the principles of data center infrastructure: the U.S. standard TIA/EIA-942 and the European standard EN 50173-5. Both contain many similar provisions, but the scope of the American standard is much broader, since it covers far more than the organization of cabling.
The number of the European standard indicates that it belongs to the family of cabling standards, and the number after the hyphen indicates the application area. Thus EN 50173-5 is a European standard that chiefly defines cabling solutions for data centers.
The TIA/EIA-942 standard considers several options for building a data center
The American standard treats the structure as a whole. It contains not only general guidance on organizing the cabling infrastructure, on installation, on mounting hardware and on identifying cable routes; it also covers network design, access provision, rules for siting the data center, the architectural features of the building, power supply, lighting, the climatic conditions needed for smooth equipment operation, fire safety and protection from moisture.
An important component of the standard is the requirement to ensure high operational readiness of the data center equipment, which must service requests coming from a large number of users. Given this broad coverage of infrastructure issues, the discussion that follows deals primarily with the American standard.
Approved by the U.S. standards bodies in April 2005, TIA/EIA-942, the Telecommunications Infrastructure Standard for Data Centers, defines the requirements and basic rules for the design and implementation of data centers and server rooms.
The standard's original “starting point” is the beginning of design work, before construction or reconstruction. Only at this stage can all the architectural features of the data center premises be fully assessed and the coordination of all technical systems be ensured. The standard should therefore be used primarily by designers, since it is they who must plan how the building's architecture, its technical systems and the cabling infrastructure fit together to support a large amount of computer equipment at high packing density.
List of main components
The TIA/EIA-942 standard mandates the provision of specialized rooms and work areas: space for terminating external telecommunication systems (Entrance Room), for computer hardware (Computer Room), for telecommunications equipment (Telecommunications Room), and for engineering systems, e.g. electrical plant and industrial air-conditioning and ventilation systems.
To monitor and manage the data center (especially one that is mission-critical), a Network Operations Center (NOC) is organized. Its function is to identify faults and take action to forestall consequences such as downtime of the computer equipment. The NOC houses the equipment that monitors the thermal regime, detects stoppages and equipment malfunctions, and supports the subsequent diagnosis of failed modules and blocks.
Besides the space for computer equipment, the data center building may allocate space for offices and support services, such as customer service centers or data-entry training rooms. These facilities include the horizontal-cabling distribution points for the office and support areas.
Within the computer room, the standard distinguishes the Main Distribution Area (MDA), the Horizontal Distribution Area (HDA), the Zone Distribution Area (ZDA) and the Equipment Distribution Area (EDA).
The Entrance Room (ER), MDA, HDA, ZDA and EDA correspond in many respects to, but are not identical with, the premises and areas defined in TIA/EIA-568-B.1 (Entrance Facility, Equipment Room, Telecom Room, Consolidation Point and Work Area). The European standard EN 50173-5 uses different names for the elements of the cabling infrastructure: external telecommunications services are connected through an External Network Interface (ENI), which connects to the Main Distributor (MD) via the network access cabling subsystem; in the zone subsystem, cabling runs to Equipment Outlets (EO), either directly or through Local Distribution Points (LDP). The LDP, however, is an optional element.
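The hierarchical-star relationship between these areas can be sketched as a simple tree. The structure below is a schematic reading of the TIA-942 areas described above (entrance room feeding the MDA, the MDA feeding HDAs, and an optional ZDA between an HDA and its EDAs); the specific area counts and names are illustrative, not taken from the standard.

```python
# Schematic TIA-942 "hierarchical star": each area fans out to the next tier.
# The ZDA tier is optional, mirroring the standard's optional zone outlets.
topology = {
    "ER": {                      # Entrance Room: carrier hand-off
        "MDA": {                 # Main Distribution Area: main cross-connect
            "HDA-1": {"EDA-1": {}, "EDA-2": {}},             # direct to equipment
            "HDA-2": {"ZDA-1": {"EDA-3": {}, "EDA-4": {}}},  # via an optional ZDA
        }
    }
}

def paths(tree, prefix=()):
    """Yield every cabling path from the root down to an equipment area."""
    for area, children in tree.items():
        here = prefix + (area,)
        if children:
            yield from paths(children, here)
        else:
            yield " -> ".join(here)

for p in paths(topology):
    print(p)
```

Each printed path is one star branch; adding an EDA or a whole HDA extends the tree without touching the other branches, which is the flexibility the hierarchical star is meant to provide.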
Location of premises and areas determined by the size of the data center, as well as the ability to install additional equipment and transition to more advanced communication technologies.
The entrance room for external telecommunications systems houses the interfaces that connect the structured cabling of the data center with the campus cabling and with the cable plant of telecommunications service providers. This may be a separate room (the standard recommends a separate room for security reasons), but it may also be combined with the computer room. In general, the equipment that terminates external telecommunications services is consolidated in the MDA.
A data center may have several entrance rooms for external telecommunications systems, which makes it possible to observe restrictions on line length and to accommodate the facilities of different service providers. The Main Distribution Area (MDA) is the location of the main cross-connect of the data center cabling. The MDA is the most suitable place to install the core routers and switches of the data center LAN and the storage network. In addition, this area can incorporate the horizontal distribution points serving equipment in the immediate vicinity of the MDA.
Horizontal Distribution Areas are allocated for the distribution points of the horizontal subsystem, whose cable lines run to the Equipment Distribution Areas. The HDA is therefore the natural location for the LAN and storage-network switches, as well as for the KVM switches (which allow multiple servers to be controlled through one keyboard-video-mouse set) that serve the equipment in the respective EDAs.
Zone Distribution Areas, with their additional zone cross-connects, are an optional element. They are placed between the HDA and the EDA where frequent reconfiguration of the cabling is required, or are used to provide additional flexibility in the horizontal subsystem. The horizontal cables that reach the ZDA terminate in a zone outlet or a consolidation point; onward connections are made with patch cords.
It is not recommended to install active equipment in the zone wiring, apart from solutions that deliver power over the twisted pair.
In the EDA, the network connections needed by the computer equipment are implemented.
Additional connections between HDAs are allowed (including for redundancy), as are connections to the cable plant of the entrance rooms (when several such rooms are provided).
Standard TIA/EIA-942 considers several options for constructing the cable infrastructure. The basic topology suits any data center: a corporate center located on a single site as well as a shared-use center dispersed across multiple sites.
In distributed data centers, several entrance rooms for external telecommunications systems are allocated. This is done for security reasons, or when the distances to be covered exceed the maximum communication range.
The simplified topology of the data center combines the HDA with the MDA. In smaller data centers, these cross-connects can also be combined with the TR and ER equipment. This arrangement allows the horizontal optical-fiber line to be extended to 300 m.
A data center with a centralized topology implements an optical system with central administration: all the active electronics are concentrated in the MDA and the EDA, and the horizontal subsystem is absent.
The cable plant of a data center consists of the horizontal wiring, the backbone wiring, and the switching equipment in the corresponding areas.
The backbone wiring connects the Entrance Room, the MDA and the HDAs. The horizontal wiring is the part of the SCS from the termination point in the EDA to the horizontal distribution point in an HDA. Because the backbone wiring aggregates the traffic coming from the horizontal lines, it must have the appropriate bandwidth.
According to the standards, the horizontal and backbone wiring of a data center is implemented on a permanent basis as a hierarchical-star SCS. Only one level of hierarchy is allowed in the backbone wiring, which implies a single main distribution point.
However, the standards do provide for redundant wiring. For this purpose a secondary wiring area is introduced, whose cross-connect is linked to the horizontal distribution points, duplicating their connections to the main wiring.
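A minimal sketch of that redundancy rule, using hypothetical area names ("SDA" for the secondary wiring area): each horizontal cross-connect should keep its primary uplink to the MDA plus a duplicate link to the secondary wiring, and any HDA missing the duplicate is a single point of failure.

```python
# Hypothetical uplink table: each horizontal distribution area lists the
# distribution areas it is cabled to ("SDA" = the secondary wiring area).
uplinks = {
    "HDA-1": {"MDA", "SDA"},
    "HDA-2": {"MDA", "SDA"},
    "HDA-3": {"MDA"},        # missing the duplicate link
}

def unprotected(uplinks):
    """Return the HDAs whose connection to the main wiring is not duplicated."""
    return sorted(h for h, ups in uplinks.items() if "SDA" not in ups)
```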
Standard EN 50173-5 allows cable plant based on twisted pair and on optical fiber. The minimum performance of copper systems is Class E. The minimum for optics is Class OF-300, implemented with OM2 or OM3 fiber. The standard specifies the permissible lengths of optical channels for OM1, OM2 and OM3 fiber.
Standard TIA/EIA-942 provides for the use of cable plant, switching equipment and patch cords in accordance with ANSI/TIA/EIA-568-B.2 and B.3: twisted pair with a characteristic impedance of 100 ohms, multimode fiber (62.5/125 or 50/125) and single-mode fiber.
In addition, a number of applications allow the use of coaxial cable with an impedance of 75 ohms.
To ensure the longevity of solutions implemented in newly developed projects, certain cable types are singled out. TIA/EIA-942 recommends Category 6 twisted pair and laser-optimized fiber; given that a Category 6 twisted-pair standard was still absent when TIA/EIA-942 was drafted, this can be read as an indication of the cable type preferred for use in data centers.
The length of a cable channel (patch cords included) in the horizontal wiring must not exceed 100 m for cable of any type. For solutions with a single line, in which the HDA is combined with the MDA, the length of an optical channel can reach 300 m.
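These length limits lend themselves to a simple validation helper. The function below is an illustrative sketch of the two limits quoted above (100 m generally, 300 m for a single optical link when the HDA is merged with the MDA); the function names are my own:

```python
def max_channel_m(media, hda_merged_with_mda=False):
    """Length limit for a horizontal channel, patch cords included."""
    if media == "fiber" and hda_merged_with_mda:
        return 300   # single optical link when the HDA is combined with the MDA
    return 100       # general limit for any cable type

def channel_ok(length_m, media, hda_merged_with_mda=False):
    """True when the planned channel fits within the limit."""
    return length_m <= max_channel_m(media, hda_merged_with_mda)
```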
Standard TIA/EIA-942 puts forward a number of requirements and recommendations concerning the organization of cabling. All computer and switching equipment is housed in enclosures and racks.
The raised floor has become an architectural foundation of the modern data center (DC), but its use is not limited to the machine halls of data centers. Easy upgrades, and the ability to quickly add or remove cables provided by modern modular systems, make raised floors attractive in other premises as well.
Today, designers seek to install raised floors in many kinds of rooms. A raised floor makes it easy to upgrade installed equipment, to lay new or redundant cables, and to repurpose premises for other needs, which contributes to its popularity among end users and customers and gives it a broad consumer market. Over the past few years SCS have been upgraded quite significantly, while the construction of raised floors has remained unchanged. A raised floor makes sense when a large number of cables must be run: it is more efficient and much less expensive than systems mounted near the ceiling, with which one must remember that the temperature near the ceiling is higher than near the floor.
Instead of pulling cables through the ceiling, installers can easily run them under the raised floor. Existing solutions in this area provide efficient cooling, reduce the number and length of hidden cables, consolidate physical ports, and require fewer cords to connect the equipment.
A raised floor consists of removable tiles installed on a supporting structure, so that the space between the flooring and the base floor can serve different purposes.
Raised floor space used for cabling
Among other things, the space under a raised floor forms a ventilation plenum that directs cooling air to where it is needed. A raised floor is ideal for data centers: whenever new equipment arrives or existing equipment is moved from one place to another, it is easy to re-install. It also ensures proper cable management: cables pulled under the floor can be easily accessed, upgraded or shifted.
A raised floor is worth installing if the customer is implementing, or plans to implement, technologies that require reconfiguration of the existing equipment, now or in the future. One of the main characteristics of raised floors is quick and easy access to the cabling: you can quickly lift a tile and reach the underfloor space.
The design and types of raised floors:
• Pedestal raised floor construction;
• Floating (movable) raised floor.
In the pedestal design, the removable tiles rest on fixed-height column pedestals. Raised-floor tiles are usually made of steel, aluminum, or wood treated with flame retardant.
The flooring should sit at a height of no less than 150-300 mm above the base (concrete) floor.
A "floating" design is built at a height of 460 mm above the floor. Vertical seismic load is absorbed by dampers and springs installed inside the support cylinders. Horizontal displacements are neutralized by Teflon-coated steel shoes worn at the base of the cylinders; the shoes can slide over the surface or be clamped to the floor. Typical raised-floor systems use slab tiles of 0.6 x 0.6 m, i.e. 0.36 m² in area.
To supply cooling air to the places of high heat in front of the enclosures, perforated tiles are installed; for service, tiles can simply be lifted out of the floor and shifted to other places.
Close to particularly hot equipment, devices that guide the air flow into the holes of the perforated tiles can be installed. When something in the room has to change, tiles are removed and relocated.
In the initial arrangement of a room, everyone tries to achieve what seems to be the optimal balance of computer and communications equipment. But people often do not know what equipment they will need in the future, and therefore what changes its deployment scheme will require. It is therefore desirable to have a solution that allows you to shift equipment quickly and cheaply and to introduce new technologies as equipment is replaced. Only if you are completely sure that you will never make any changes and will never need new technologies is a raised floor not for you.
A raised floor is usually associated with data centers, since with a large amount of equipment it is much easier to organize the flow of cooling air under the floor than from above. Yet the most difficult problems in data centers remain the organization of cooling and access to cables. Where equipment is concentrated, zones of overheating often arise, so-called "hot spots".
This problem is only exacerbated by the extensive use of blade servers. Typical data centers built before 2000 were designed for a heat load of about 5 kW per rack. In today's data centers this figure can range from 7 kW to 35 kW per server rack, or even higher: many data centers pack more servers into a small area, which gives correspondingly more heat, so hot spots are frequent.
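As a toy illustration of hot-spot hunting, the sketch below flags racks whose heat output exceeds the legacy 5 kW design limit mentioned above; the rack names and loads are invented:

```python
PRE_2000_DESIGN_KW = 5  # typical per-rack heat load older rooms were built for

def hot_spots(rack_heat_kw, design_limit_kw=PRE_2000_DESIGN_KW):
    """Return the racks whose heat output exceeds the room's design limit."""
    return sorted(rack for rack, kw in rack_heat_kw.items()
                  if kw > design_limit_kw)
```

With a modern mix of loads, e.g. `hot_spots({"A1": 4.0, "A2": 12.0, "B1": 35.0})`, only the blade-dense racks are flagged.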
Design Problems with Raised Floor
When installing a raised-floor system, designers face a number of standard tasks, including setting the tiles and ensuring reliable grounding and protection from electrical interference. Perforated tiles are laid in the areas in front of the racks. These metal surfaces should not be left exposed; direct contact with the installed equipment must be excluded. The metal tiles and the raised floor's supporting structure must be well grounded. Openings for pulling cables should not have sharp edges, to avoid mechanical damage to the cables.
When designing the raised floor, you have to make sure that:
* the air-conditioning system is powerful enough to cool all the equipment;
* the limits on mechanical load on the raised floor and the concrete slab are observed;
* there is enough free space to install the available equipment;
* the needs of areas for future development are addressed.
When the equipment layout on the floor is ready, prepare appropriate stencils for the mounting surfaces and mark up the cable entry and exit points.
With the development of cable technology and the expanding scope of raised floors, designers are facing new challenges. Twenty years ago, data-center rooms were the largest premises that used raised floors.
Now raised floors are installed in huge halls such as casinos and libraries, and in "clean rooms" for various industries. The use of raised floors allows customers to abandon expensive ceiling-mounted cable systems; moreover, it does not require punching holes in the building structure to run cables.
That is what made the raised floor an attractive technical solution and brought it outside the computer rooms of data centers and server rooms. The technology of raised floors, developed mainly for the latter, is now in growing demand from owners of general-purpose offices.
Flexibility of installation
The modern office environment is characterized by frequent relocations; equipment must be updated and new technologies introduced quickly and conveniently.
In the 1990s, the main motivation for a raised floor was to access the power, telephone and computer cabling as fast as possible. Today more and more users turn to such solutions when arranging other facilities. They clearly understand the benefits promised by improved ventilation and air conditioning, as well as by the ability to quickly rebuild the cable infrastructure, which in turn reduces operating costs.
The increasingly widespread use of raised floors follows logically from how enterprises evolve. If you need to update equipment or rearrange workplaces in an office without a raised floor, you face the time-consuming and expensive process of relocating cable lines. It is often necessary to drill floors and walls to pull cables to new places, and to remove suspended ceilings to shift ventilation and air-conditioning ducts.
But if you have a raised floor, you only raise the tiles, change the routing of the telephone, data-network and power cables, put the tiles back, bring in new office furniture or rearrange the old, plug everything in, and you are done. It would be poor design to plan a raised floor without paying attention to the other systems whose organic combination provides cost-effective operation of the building.
There are many different systems, so to control costs it is important to choose the right product. To create the conditions that make a building cost-effective, flexible systems that can be reused or reconfigured are needed.
Complex design issues with raised floors
As the scope of raised floors extends beyond the data center, their design and installation become more complicated. If reorganization or modernization may be needed, the system should be reconfigurable. Changing technologies and cabling preferences add further difficulties.
In data centers, a raised floor can be installed without problems. But when such floors are installed throughout a building, doors and elevators have to be shifted: the entire building must be reconstructed to install the raised floor, and sometimes this can be done, and sometimes it cannot.
The challenges of providing customer service and the restructuring of structured cabling systems (SCS).
As raised floors come into use in other areas of a building, installers face the challenge of providing technical services and reorganizing the SCS in office premises. One possible solution is to combine raised floors with quick-connect cable systems, which reduces the time needed for the initial arrangement of office space and enables rapid reshuffling of workplaces.
We need to know how long a rearrangement, the introduction of a new technology, or a reconfiguration of the equipment will take.
A contemporary raised floor must be equipped with removable grilles that can be moved during rearrangements, extensions or changes. These systems must have strong support structures that carry the floor tiles and provide easy access to the cable plant.
When designing the cable channels of office premises, it is important to consider the different outer diameters and physical characteristics of the cables to be pulled. Cables must be laid in a certain way; we cannot just throw them on the floor. The outer diameter of a standard Category 6 cable is much larger than that of a Category 5e cable: shielded Category 6 cable can have a diameter of 0.251 inches, and unshielded augmented Category 6A cable a diameter of 0.315 inches.
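When budgeting an underfloor cable channel, those outer diameters translate directly into cross-sectional area. The sketch below computes a conduit fill ratio from the diameters quoted above; the 40% fill ceiling in the comment is a common industry rule of thumb, not something stated in this article:

```python
import math

# Outer diameters quoted in the text (inches).
CABLE_OD_IN = {"cat6_shielded": 0.251, "cat6a_utp": 0.315}

def fill_ratio(conduit_id_in, cables):
    """Fraction of the conduit cross-section occupied by the given cables.

    `cables` maps a cable kind to a count. A ratio above ~0.40 is commonly
    treated as overfilled (rule of thumb, assumption)."""
    conduit_area = math.pi * (conduit_id_in / 2.0) ** 2
    cable_area = sum(n * math.pi * (CABLE_OD_IN[kind] / 2.0) ** 2
                     for kind, n in cables.items())
    return cable_area / conduit_area
```

For example, ten Category 6A cables in a 2-inch channel occupy roughly a quarter of its cross-section.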
Interference under raised floors and static electricity
Another problem is interference picked up when twisted-pair cables are laid under the floor. Static electricity is also generated in underfloor systems. These problems can be aggravated by an abundance of microscopic zinc whiskers: filamentary crystals that form as a side effect of stress in zinc-plated surfaces and are released by friction on metal parts such as raised-floor frames.
The whiskers conduct electricity and can disrupt electronic devices by getting into electrical components, breaking contacts and causing short circuits. If the standard protection system cannot cope with these electrical problems, appropriate measures must be taken to ground the raised floor.
A data center is a complex engineering structure intended for the centralized deployment and maintenance of computing and telecommunications equipment.
Many external factors affect the quality of its resources; particularly noteworthy are the negative impact of the environment and the human factor. To avoid these problems, a wide range of high-tech engineering solutions is used.
There are various solutions and methods of designing data centers, each correct in its own way and usually conforming to the norms and rules adopted in construction and telecommunications. In this article I want to cover the operation of data centers, including power supply, cooling and security.
Uninterruptible Power Supply
A data center is one of the most energy-intensive facilities. Modern equipment mounted in a 42U 19-inch rack can consume 3-4 kW of electricity and give off a correspondingly large amount of heat. To remove it, air conditioning must be installed, which itself consumes up to 50% of all power. As a result, equipment in data centers typically consumes many kilowatts of electricity per square meter.
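A quick sanity check of that power arithmetic: if air conditioning takes up to half of all power, the electrical feed must be roughly double the IT load. The sketch below uses hypothetical rack counts and per-rack draw:

```python
def total_feed_kw(racks, kw_per_rack=4.0, cooling_share=0.5):
    """Required electrical feed when cooling takes `cooling_share` of the total.

    With cooling at 50% of all power, the feed is double the IT load.
    The default per-rack draw of 4 kW matches the figure quoted above;
    the rack count is an assumed example."""
    it_load = racks * kw_per_rack
    return it_load / (1.0 - cooling_share)
```

So ten fully loaded racks drawing 4 kW each imply an 80 kW feed once cooling is included.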
The power supply should be organized from two geographically separated transformer substations. The cable lines should be routed independently, each connected to "its own" transformer. An automatic transfer switch (ATS) selects and switches between the main and backup lines.
Despite the redundant mains supply, it is desirable to use diesel generator sets as well. In the power scheme they come next: in case of a complete power outage, or when the mains fails to meet the required parameters (voltage, frequency, "purity"), the diesel generators start automatically and the load is transferred to them.
A diesel generator set, as a rule, carries a fuel supply for up to 8 hours of continuous work and can be refueled without stopping; with an established fuel supply, a diesel power module can run continuously for 3-4 months.
Next, an uninterruptible power supply (UPS) is installed. It is absolutely necessary equipment, as the most dangerous power disturbances are the short ones, lasting 2-3 s. There are other damaging factors as well: overvoltage, frequency variation, grounding violations, interphase potential, etc. A UPS operating on-line is an ideal isolator: it consumes "dirty" power from the grid or the diesel set and delivers absolutely clean power at 50 Hz and 220-380 V, without extra harmonics.
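The failover order described above (two independent mains feeds, then the diesel set, with the on-line UPS bridging any gap) can be sketched as simple ATS selection logic; the source names and priority order are illustrative assumptions:

```python
# Assumed priority order: two independent mains feeds first, then the diesel
# set; the on-line UPS conditions whichever source is currently live.
PRIORITY = ["mains_A", "mains_B", "diesel_genset"]

def select_source(available):
    """ATS logic: pick the first healthy source in priority order."""
    for src in PRIORITY:
        if src in available:
            return src
    return None  # nothing healthy: the UPS batteries carry the load
```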
Creating such a serious power supply system for a data center requires a thorough approach. Design and construction are performed by specialized design-and-installation organizations; operation is carried out, in accordance with the rules and regulations, by the engineering service of the company that owns the data center.
The task of the cooling system is to maintain inside the data center an operating temperature of 19-24 °C and a humidity of 40-80%. Typically, data centers of medium size (100-200 m²) use precision cabinet-type freon air conditioners, which take warm air from the top of the room and force cold air under the raised floor. The cooling capacity is calculated with a factor of 1, i.e. 1 kW of equipment power requires 1 kW of refrigeration. Redundancy follows the N+1 scheme. What does this mean? If the refrigeration system must deliver a total of 100 kW of cooling and the available cabinet air conditioners provide 25 kW of cooling each, you should install five cabinets: four primary units producing 100 kW, and one backup in case one of them fails.
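The N+1 arithmetic above generalizes to a one-line sizing rule; this sketch simply rounds the load up to whole cooling cabinets and adds a spare:

```python
import math

def cooling_cabinets(load_kw, unit_kw, spare=1):
    """N+1 sizing: enough cabinets to carry the load, plus `spare` for failures."""
    return math.ceil(load_kw / unit_kw) + spare
```

The worked example from the text, `cooling_cabinets(100, 25)`, gives the five cabinets described: four primary plus one backup.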
Experience shows that this is a very effective method, especially if all five cabinets are connected into a single management system. In that case the software rotates the role of the spare cabinet, which lets the cooling system use its resources more efficiently. If the total heat output of the data center equipment exceeds 150-200 kW and further step-by-step capacity growth is expected, it is advisable to install a liquid-coolant system.
The following scheme is used. A powerful chiller installed outside cools a water-based coolant to 16 °C. The coolant circulates through pipes to the data center premises, where cabinet-type air conditioners circulate the air. The heated coolant returns to the chiller for cooling, and the circle closes. The capacity of the chillers is limited mainly by financial capacity.
A 19-inch cabinet fully filled with modern server equipment is capable of giving off up to 20 kW of heat. It should be remembered that, due to structural and physical constraints, it is almost impossible to remove more than about 5 kW of heat the usual way, i.e. by blowing cold air from under the raised floor.
To resolve this problem there are several approaches, in particular the organization of "hot" and "cold" aisles. An aisle here means a passage between rows of cabinets. The fans blow the hot air from the servers into the "hot" aisle, while the cold air blown up through the floor grilles is taken in from the "cold" aisle. This scheme significantly raises the efficiency of refrigeration.
A flow of fresh air from outside must also be organized: the air constantly circulating through the computer cabinets and air conditioners "fades" and needs refreshing. The intake installation heats and dries the street air. In addition, it creates excess pressure inside the data center, which prevents the penetration of dust.
Steam generators are used for humidification. Dry air is not very effective for cooling because of the physical principles of air conditioning, and as humidity decreases the electrostatic potential increases, which may cause equipment failure. The cooling system is a complex and delicate mechanism; as practice shows, it is the most critical and least reliable component of a data center. If it stops for 30 minutes, the rooms can heat up to 60-70 °C, which entails the failure of equipment.
A raised floor is a necessary component of the data center. Under it run pressurized cold air, power cables and low-current infrastructure. Typically, a raised floor is made of MDF tiles on a metal base with a laminated cover, 600 x 600 mm in size. The height above floor level ranges from 100 to 800 mm; 350-500 mm is optimal for most data centers.
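Tile counts for such a floor follow directly from the 600 x 600 mm module; the helper below is an illustrative sketch that rounds each room dimension up to whole tiles (with a small tolerance for floating-point error):

```python
import math

TILE_SIDE_M = 0.6  # the standard 600 x 600 mm tile described above

def tiles_needed(width_m, length_m, tile_side_m=TILE_SIDE_M):
    """Whole tiles per axis, rounded up so the floor is fully covered."""
    per_w = math.ceil(width_m / tile_side_m - 1e-9)
    per_l = math.ceil(length_m / tile_side_m - 1e-9)
    return per_w * per_l
```

A 6 x 12 m machine hall, for instance, needs 10 x 20 = 200 tiles before cut pieces at the walls are considered.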
Early Fire Detection and Gas Fire Suppression
For maximum efficiency, a gas fire-suppression system should act at the earliest stage of a fire, i.e. when elements begin to overheat or at the moment of initial ignition, and extinguish the heat and fire in less than one minute. The fire warning and fire-fighting complex should report a potential ignition long before the extinguishing system has to be used.
This is achieved by installing a large number of highly sensitive smoke, optical, chemical, spectral and other fire detectors tied into a single intelligent fire alarm and suppression system, together with a set of organizational measures: constant visual inspection of the equipment, compliance with fire regulations, and observance of the rules for operating electrical installations. Very effective is the very-early-warning system VESDA (Very Early Smoke Detection Apparatus), which can detect a potential source of fire long before ignition.
There are many extinguishing agents for automatic fire-suppression systems suitable for data centers: various gases (halons, inert gases, carbon dioxide), as well as fine water mist and powders. Here a simple principle applies: the more expensive the solution, the more equipment will remain serviceable after the system is activated. The most expensive, but the gentlest to the equipment, are extinguishing mixtures based on freons or inert gases.
CCTV and physical access control are among the most important attributes of a modern data center.
The most effective access-control system is based on proximity cards. It consists of a management server, controllers, readers, and the cards (keys) themselves. This solution is an inexpensive but effective measure of protection; in conjunction with CCTV, it can provide a reasonable level of security for a data center.
Each door is equipped with a card reader, a lock and a video camera. On official request, employees and customers are issued personal keys, which serve as the entry ticket into the physical perimeter of the data center. Typical attributes of a key are a picture of the owner, his personal details and the name of the company where he works. The key stays with the staff member and provides easy access to the necessary facilities; the list of permitted rooms and access times is prescribed in the management system for the account and linked to the particular key.
Along with physical access control, a video surveillance system is a mandatory part of the organizational and technical measures of a data center. It consists of a number of video cameras installed so as to monitor virtually all technical premises, entrances, exits, passages and hidden corners of the data center. The video feed goes to the security monitors and is archived on digital media, which creates another "virtual" level of security for data center services.
Data centers are a form of value-added service that offers resources for processing and storing data on a large scale. Organizations of any size, and even individual professionals, can have at hand a structure of great power and flexibility, high security, and qualified support in terms of hardware and software for processing and storing information.
Currently we can define two main categories of data centers:
A. Private Data Center (PDC)
B. Internet Data Center (IDC)
A PDC is owned and operated by a private corporation, institution or government agency, with the primary purpose of storing data resulting from its own processing operations, procedures and Internet-related applications. An IDC, by contrast, is usually owned and operated by a provider of telecommunications services, a commercial telephony operator or another type of telecommunications service provider. Its main objective is to provide various types of connection services, web hosting and equipment hosting to users. Services can range from long-distance communications and Internet access to content storage, etc.
Services Offered in a Data Center :
Co-location Services: The client rents the physical rack space and the power and telecommunications infrastructure, while the servers, systems, management, monitoring and technical support are provided by the client. This relationship can be flexible, and it is customary to establish a contract whose terms and conditions clearly define the scope of services on each side.
Benefits For Your Company:
• Speed of service;
• Expert advice.
A basic package of services is included in colocation: the basic services needed to operate the equipment, at no extra cost, maintained to the standard in force throughout the Data Center.
The Services Offered Are:
• Proactive monitoring and notification;
• Primary and secondary DNS (Domain Name Server);
• Technical support 24 x 7 x 365;
• Building security;
• Reset service (powering equipment on/off);
• Network monitoring;
• Redundant infrastructure;
• Incubation hall (unpacking and setup).
By hiring server colocation, customers receive a range of services. They can also supplement the purchase with options that provide the most comprehensive range of services a company can receive in colocation.
The client will have:
• Provision of access and bandwidth for the Internet connection and the external network;
• Shared customer room;
• Exclusive customer café;
• IP addressing.
This service is dedicated to businesses that need high-quality infrastructure and connectivity between offices and/or to the Internet. The service is sold in square meters, cages or half-racks, plus connections (IP, Internet, frame relay, ATM, etc.) starting from 64 Kbps.
Hosting offers a range of services suitable for companies wishing to leverage their investment in hardware and software. The hosting service allows the customer to use the data center's infrastructure and servers, and to rely on highly qualified professionals who provide ongoing support.
The customer has the option of choosing equipment and software packages tailored to the needs of the business. Everything is custom designed and built to offer the best solution for each client. This guarantees that the company purchases exactly the products it needs, allowing the client to focus fully on the core business.
The physical space allocated in a rack, and the amount available for equipment, are calculated according to the defined configuration of the hosted servers and equipment, all with the advantage that your company can set the bandwidth.
• Savings on investment in fixed assets;
• State-of-the-art servers;
• Constant updating of software/hardware;
• Technological know-how;
• Speed of service;
• Reliability of services rendered;
• High-standard installation.
Using a hosting service, the customer optimizes investments in hardware and software, with exclusive use of the latest generation of dedicated servers. Services essential for the functioning of the equipment are made available at no additional cost and to the high standards of our Data Center.
• Server and network capacity planning;
• IDS (Intrusion Detection System);
• Proactive monitoring and notification;
• Availability of IP addresses;
• Online report issuing;
• E-mail relay server;
• Primary and secondary DNS (Domain Name Server);
• Technical support 24 x 7 x 365;
• Help Desk;
• Building security;
• Reset service (powering equipment on/off);
• Maintenance of the logical security of the operating system;
• Full operation of the server up to the operating-system level;
• Incremental backup.
• Additional space in the SAN (Storage Area Network);
• Additional Gbytes of traffic per month;
• Additional space on internal disks;
• Additional memory;
• RAID 1/5, with optional protection of the internal hard drive through data replication between disks;
• Additional e-mail accounts.
One aspect that must be observed when hiring a Data Center service is the type of access (co-location) the user will have to the server at the service provider, since it defines how the server will be accessed when necessary.
If local co-location is hired, access is performed locally by the provider's employees. If the co-location is remote, access is done through remote-control software chosen by the user; in this case the remote-access application is installed on the server by the service provider's staff. Eventually one or more tools may need maintenance, or new applications may need to be installed; in such cases the user must ask the service provider to arrange whatever is necessary. When hosting the server, the user signs a statement attesting to the legality of all software installed on it.
Through co-location (a server unique to the user, installed and operated within the provider's infrastructure), the user benefits from a range of resources: high scalability (a needed expansion of services or equipment can be carried out immediately), monitoring 24 hours a day, 7 days a week (24x7), backup, optimized operation and maintenance costs, and a network with high availability and load balancing.
The choice of location for deploying the DC should take into account the region (consistent with the city's zoning code), land size, easy access for equipment delivery, elevated areas free of flooding, and the existence of basic infrastructure: sanitation, water, telephone and electricity.
Criteria For Site Selection:
• Proximity to points of presence of fiber-optic access networks, enabling connection to two different trunks.
• Availability of energy, with the possibility of obtaining two power feeds.
• Scalability, to allow increased building area over time.
The Main Components Of A DC Are:
• Social hall and meeting rooms to receive visitors.
• Operation, maintenance and equipment storage areas.
• Equipment rooms, including the server rooms for hosting and co-location and the telecommunications room.
• Equipment rooms for the electricity and air-conditioning segments.
• Motor-generator group and fuel tank, typically located in an area outside the DC.
The Goals Of Space Planning Are:
• Facilities with 60% of the total area devoted to the Data Center equipment room.
• "State-of-the-art" premises, from the operating system up to the database-management level.
• A facility that reflects the image of a high-tech enterprise: high-risk, high-yield business investment, functionality and control.
A DC is usually divided into three physical-security zones, in increasing order of access restriction:
Zone I: Public areas, including the lobby, the visitors' area and the administrative areas.
Zone II: The DC operation areas.
Zone III: The equipment rooms, the heart of the DC, where the servers, the cable shafts, the power distribution units (PDUs), the batteries and the air-conditioning machines are located.
The construction should provide a solid structure comprising secure facilities that complement and protect the equipment and information residing in the DC.
Electricity: The electrical segment consists of the Uninterruptible Power System (UPS), the Emergency Power System and the Power Distribution Units (PDU).
The uninterruptible power system (UPS) provides energy for all data center equipment, including the safety, fire-detection and alarm equipment. It consists of sets of UPS units with batteries, rectifiers and inverters. These redundant UPSs, connected in parallel, ensure a continuous supply of power even if a power transformer, a power feed or a UPS set fails.
The battery banks are sized to feed the loads for a period of 15 minutes, which is sufficient time to start and connect the diesel generators in case of a utility power failure.
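As a back-of-the-envelope illustration of how such a battery bank might be sized for a 15-minute hold-up (all numbers below are assumed examples, not ESDS figures), Python can be used as follows:

```python
def battery_capacity_ah(load_w, bus_voltage_v, runtime_min,
                        inverter_efficiency=0.9, depth_of_discharge=0.8):
    """Estimate the amp-hour capacity a UPS battery bank needs to
    carry `load_w` watts for `runtime_min` minutes.

    Simplified sizing: ignores the Peukert effect and temperature derating.
    """
    runtime_h = runtime_min / 60.0
    # Energy drawn from the batteries, accounting for inverter losses.
    energy_wh = load_w * runtime_h / inverter_efficiency
    # Only a fraction of nominal capacity is usable per discharge cycle.
    return energy_wh / (bus_voltage_v * depth_of_discharge)

# Example: 50 kW of IT load on a 480 V DC bus for 15 minutes.
print(round(battery_capacity_ah(50_000, 480, 15), 1))
```

Real installations would add margin for battery aging and end-of-discharge voltage, which this sketch omits.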
The emergency power system consists of a diesel generator group that starts up and connects to the DC's electrical system automatically.
The generators are rated to carry all the loads needed to operate the Data Center equipment during a utility power failure. The goal is to sustain 24 x 7 operation, allowing for preventive maintenance, the addition of new components and the return to service after unplanned outages.
The power distribution units (PDU) are responsible for conditioning the power fed to the various devices in the DC.
4. Air conditioning
The air-conditioning segment maintains controlled temperature and humidity on the DC premises. It includes the cooling system, with its cooling units, and the air-conditioning distribution and air-handling system. It should be connected to the emergency power generators.
The cooling system provides heating, cooling, humidification and dehumidification for the building.
The air-treatment system must be separated into three types of area: the Data Center room, the office area, and the air-conditioning and electrical equipment rooms. The separation is due to the differences in sensible heat, latent heat, temperature and humidity conditions of each area.
The air-conditioning distribution system for the Data Center equipment room supplies air through the plenum beneath the raised floor. This requires a raised floor at least 60 cm high which, depending on the amount of conduit, tubing, cable trays and so on, should have its height adjusted so that air can circulate throughout the Data Center room. The goal is to operate 24 hours a day, 7 days a week.
Adequate cooling is essential to maintaining the performance and operational safety of data center services.
A Data Center must ensure that the internal temperature in the production areas varies by at most 1 °C. For this, it relies on:
• N+1 cooling structures, i.e. for each unit in operation there is another in reserve (ready for use);
• Modular refrigeration and air exchange;
• Scalability according to demand.
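The N+1 criterion above can be expressed as a one-line check; the unit sizes in the example are hypothetical:

```python
def meets_n_plus_1(unit_capacity_kw, units_installed, heat_load_kw):
    """True if the cooling plant still covers the heat load with any
    single unit out of service (the N+1 criterion)."""
    return (units_installed - 1) * unit_capacity_kw >= heat_load_kw

# Four 100 kW CRAC units against a 280 kW heat load:
# with one unit failed, 3 x 100 kW = 300 kW >= 280 kW, so N+1 holds.
print(meets_n_plus_1(100, 4, 280))  # True
print(meets_n_plus_1(100, 3, 280))  # False: a single failure leaves a deficit
```

The same check applies to UPS modules or generator sets, which are typically deployed with the same redundancy rule.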
5. Fire Protection System
The Data Center is a facility full of essential electronics, such as servers, other computers and telecommunications equipment. Besides meeting the standards of the local fire department, the fire protection system should seek to avoid damage to the equipment in case of fire.
One of the best solutions for firefighting in equipment rooms is a combination of a pre-action sprinkler system (with dry pipe) above the raised floor and an FM-200 gas fire-suppression system below it.
The gas system is connected to a high-sensitivity detection system and is the first to be triggered. The gas spreads throughout the area, leaving no residue to damage sensitive equipment or to require a costly cleanup.
The pre-action system, when triggered, discharges water only through the sprinklers that have been activated by the heat of the fire.
6. Supervision and Control System
The control and supervision system continuously monitors the various segments of the DC, tracking items such as:
• Load control and parallelism of the generator sets
• Supervision and control of the medium-voltage panels
• Supervision and control of the low-voltage panels
• Integration with the generator system
• Integration with the rectifier system
The system consists of state-of-the-art computers capable of withstanding continuous use, running appropriate supervision and control software. The machines are mutually redundant, giving the system high flexibility and performance.
The DC also has a closed-circuit television system and an access-control system that governs entry to and exit from the various rooms and physical-security areas of the DC.
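A sketch of the polling idea behind such supervision, reduced to plain TCP reachability checks (real installations use SNMP or Modbus gateways for switchgear, rectifiers and generators; the hosts and ports below are placeholders):

```python
import socket

def check_tcp(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical watch list; a production system would read this from
# configuration and raise notifications instead of printing.
targets = [("127.0.0.1", 22), ("127.0.0.1", 80)]

for host, port in targets:
    status = "UP" if check_tcp(host, port) else "DOWN"
    print(f"{host}:{port} {status}")
```

In practice the loop would run on a schedule, keep state between polls, and escalate only on status changes.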
7. Data Center Building Standards
An important factor in a Data Center is implementing and maintaining standard methods of structured cabling, providing for possible expansion and certification and ensuring safety and maximum use of the network.
Among the standards used, we can highlight the norms created by the EIA/TIA (Electronic Industries Association / Telecommunications Industry Association) and by the ISO/IEC (International Organization for Standardization / International Electrotechnical Commission), namely ISO/IEC 11801, which is equivalent to the EIA/TIA 568-A standard.
7.1: TIA / EIA TSB 67 Standards
Transmission Performance Specifications for Field Testing of UTP Cabling (end-to-end performance testing of Cat5 UTP systems). This Telecommunications System Bulletin (TSB) is aimed at post-installation performance testing; its specifications include the characteristics of field testers, test methods and minimum transmission requirements for UTP cabling systems. It cites the factors that affect performance: the characteristics of the cable, the connecting hardware, the patch cords and the cross-connects, as well as the total number of connections and the quality of the installation. TIA/EIA TSB-67 defines two test configurations:
a) Basic Link test configuration: the basic link test is used to check the performance of the permanently installed cable.
This Test Includes The Following Components:
• Up to 90 m of horizontal cabling, including the cable from the telecommunications closet (TC) to an optional consolidation point and from the consolidation point to the telecommunications outlet: one end-to-end horizontal cable connection.
• Up to 2 m of test cord to connect the main unit of the field tester.
• Up to 2 m of test cord to connect the remote unit of the field tester.
There Are Four Test Parameters In Each Link:
• Wire Map: confirms the end-to-end continuity of the 8 conductors, indicating possible shorts between pairs, crossed pairs, reversed pairs and split pairs.
• Length: measures the length of the cable by electrical means.
• Attenuation: measures the signal loss in the channel or link.
• NEXT (near-end crosstalk): measures the amount of signal one pair induces in another. It is tested at both endpoints of the link (local and remote).
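The "length by electrical means" parameter is typically measured by time-domain reflectometry: a pulse is sent down the cable and the round-trip time of its reflection is converted to distance. A minimal sketch of that calculation, assuming a typical nominal velocity of propagation (NVP) of 0.69 for Cat5 UTP:

```python
C_M_PER_NS = 0.299792458  # speed of light in metres per nanosecond

def cable_length_m(round_trip_ns, nvp=0.69):
    """Estimate cable length from a TDR round-trip reflection time.

    nvp (nominal velocity of propagation) is cable-specific; 0.69 is
    a typical value for Cat5 UTP, assumed here for illustration.
    """
    # The pulse travels down and back, hence the division by 2.
    return round_trip_ns * C_M_PER_NS * nvp / 2

# A reflection returning after ~870 ns corresponds to roughly 90 m.
print(round(cable_length_m(870), 1))
```

Field testers calibrate the NVP per cable batch, since an NVP error translates directly into a length error.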
b) Channel Configuration Test
The channel test is used to verify the performance of the channel in its entirety. The channel has the following components:
• Up to a maximum of 90 m of horizontal cable, including the cable between the TC and an optional consolidation point and from the consolidation point to the telecommunications outlet.
• The equipment cord at the work area.
• Cross-connections in the telecommunications closet, made with patch cables or jumper cords.
• The total length of the equipment cords, patch cords and work-area cords.
7.2: TIA / EIA TSB 72 Standard
Guidelines for Centralized Optical Fiber Cabling. TSB-72 was created to help plan a fiber-to-the-desk (FTTD) cabling system over 62.5/125 µm fiber, using centralized electronics instead of the traditional method of distributing equipment to individual floors, extending the connections from the work area to a centralized cross-connect. Using an interconnection between the horizontal and backbone cabling allows better flexibility and ease of management, and permits an easy migration to a cross-connect implementation.
The horizontal cabling, however, must not exceed 90 m, and the combined distance of the horizontal cabling and backbone, together with the work-area cords, patch cords and equipment cords, shall not exceed 300 m.
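Those two limits can be checked with a small helper (a sketch of the rule as stated, not part of any standard tooling):

```python
def fttd_lengths_ok(horizontal_m, backbone_m, cords_m):
    """Check the TSB-72 centralized-cabling length limits:
    the horizontal run must not exceed 90 m, and the combined
    horizontal + backbone + cord lengths must not exceed 300 m."""
    return horizontal_m <= 90 and (horizontal_m + backbone_m + cords_m) <= 300

print(fttd_lengths_ok(85, 200, 10))   # True: within both limits
print(fttd_lengths_ok(95, 100, 10))   # False: horizontal run too long
```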
The centralized cabling system shall be located within the same building as the work areas it serves. All moves and changes should be performed at the main cross-connect; horizontal links should be added and removed in the TR (telecommunications room). The cabling-system design must enable migration from the centralized pull-through or interconnect arrangement to a cross-connect implementation. To facilitate this migration, the design must leave enough space in the TR for future growth and additional patch panels, along with appropriate cable slack so that the cables can be shifted to a local cross-connect.
This slack can be stored as cable or as unconnectorized fiber. When storing slack, the minimum bend radius of the cables must not be violated, avoiding damage to the optical fibers. Cable slack may be stored in enclosures or on the walls of the telecommunications room, but fiber slack should be kept in protective boxes, given its limitations and specifications.
The backbone fibers should provide for future horizontal links, minimizing the need to place additional backbone cables. The fiber backbone must be capable of supporting current and future networking technologies, and two fibers are required for each connection at the work area.
The standard requires the ANSI/TIA/EIA-606 labeling rules for the centralized cabling system, and the ANSI/TIA/EIA-568-A fiber-polarity and connector specifications, with guidance on connectorization orientation (A-B toward the work area and B-A at the central cross-connect).
7.3: TIA / EIA TSB 75 Standard
Additional Horizontal Cabling Practices for Open Offices: a methodology for modular office environments that increases flexibility and reduces costs, broken down as follows:
1. Horizontal Cabling for Open Offices. A horizontal termination point (the multi-user telecommunications outlet) and a horizontal interconnection or intermediate point (the consolidation point) create greater flexibility in open-office layouts with modular furniture, where changes are frequent. The multi-user telecommunications outlet (MUTO) and the consolidation point should be in a fully accessible, permanent location.
2. Multi-User Telecommunications Outlet Assembly (MUTO). The MUTO serves as the termination point for horizontal cabling, consolidating multiple telecommunications outlets in the same location. The modular cord runs from the MUTO to the terminal equipment without additional intermediate connections. This configuration allows the office floor plan to be rearranged without affecting the horizontal cabling, subject to the following criteria:
• It cannot be installed in the ceiling.
• The maximum length of the modular cords is 20 m.
• The modular cord connecting the MUTO to the terminal equipment must be labeled at both ends with a unique identifier.
The work-area (modular) cord of greatest length should be identified; its length is calculated by the formula:
C = (102 – H) / 1.2
W = C – 7 (the length of the work-area cords may not exceed 20 m)
where:
C = the maximum combined length of the work-area cord, the equipment cord in the telecommunications closet and the patch cords;
W = the length of the work-area cord;
H = the length of the horizontal cable.
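The formula can be wrapped in a small helper for illustration:

```python
def muto_cord_lengths(horizontal_m):
    """Apply the TSB-75 open-office formula: given the horizontal
    cable length H, the maximum combined cord length is
    C = (102 - H) / 1.2 and the work-area cord is W = C - 7,
    with W capped at the 20 m maximum."""
    c = (102 - horizontal_m) / 1.2
    w = min(c - 7, 20)  # work-area cords may not exceed 20 m
    return round(c, 1), round(w, 1)

# A 70 m horizontal run leaves C = 26.7 m combined, W = 19.7 m work-area cord.
print(muto_cord_lengths(70))
```

Note how a longer horizontal run directly shrinks the cord allowance: the formula derates the cord budget to compensate for the higher attenuation of stranded cord cable.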
If fiber-optic cables are used, the footage may be split freely among the horizontal cable, the work-area cord and the telecommunications outlet, bearing in mind that the total length should not exceed 100 m. When the optical cabling is centralized, the guidance of TIA/EIA TSB-72 should be followed.
3. Consolidation Point: an interconnection point within the horizontal cabling that makes a direct (straight-through) intermediate connection in the horizontal cabling running from the telecommunications room to the MUTO or to the work area. There should be no cross-connections at this point, and the following guidelines apply:
• Ensure that the total channel distance never exceeds 100 meters.
• Ensure that the cables are fastened without violating the specifications of each material, complying with the minimum bend-radius requirements.
• Ensure that the consolidation point is at least 15 m from the telecommunications room, avoiding the additional NEXT caused by the resonance of multiple closely spaced connections near the closet.
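These consolidation-point rules can be encoded as a simple validation (a sketch based only on the figures quoted above):

```python
def consolidation_point_ok(channel_m, cp_to_tr_m):
    """Validate two consolidation-point rules: total channel
    distance <= 100 m, and the consolidation point at least 15 m
    from the telecommunications room (to limit the extra NEXT from
    closely spaced connections)."""
    return channel_m <= 100 and cp_to_tr_m >= 15

print(consolidation_point_ok(98, 20))  # True
print(consolidation_point_ok(98, 10))  # False: CP too close to the TR
```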
7.4: ANSI/TIA/EIA-568-A Standard
A norm that sets the minimum specifications for structured cabling, classifying the components of the installation as follows:
1. Entrance facility: covers the entry of cables into the building, the connecting hardware, protection devices and other equipment required. The equipment in this room can be used for connections to public or private networks.
2. Main cross-connect: the telecommunications equipment room may share the location of the main cross-connect. The cabling techniques that apply to telecommunications closets (TC) also apply to equipment rooms.
3. Backbone distribution: the interconnection between telecommunications closets and equipment rooms, which may involve cables, main and intermediate cross-connects, terminations, and jumpers or patch cords for connections:
• Guidelines call for the backbone cabling to support a useful life of at least 10 years.
• A star-topology distribution system is assumed; a cross-connect may be connected to a main or an intermediate cross-connect, but care must be taken not to have more than two hierarchical levels of cross-connect.
• The recommended approach is to use one of the following recognized media: 100 Ω UTP, 150 Ω STP-A, 62.5/125 µm optical fiber cable, single-mode optical fiber cable, or 50 Ω coaxial cable (recognized but not recommended for new installations).
• When choosing the cabling, media-selection criteria should weigh characteristics such as flexibility (considering the services supported), useful life, and the location and quantity to be served.
• The recommended maximum backbone distances are also a very important factor in the choice of cabling.
4. Horizontal cross-connect: the term refers to the telecommunications closet functions, i.e. the hardware that connects all the horizontal cabling to intermediate cross-connects or to the backbone cable.
Cross-connections and interconnections are the connections between the horizontal and backbone cabling, or between the cabling and equipment, made through connecting hardware.
5. Horizontal distribution: the part of the cabling system that connects the work-area wiring to the horizontal cross-connect in the TR. The horizontal cabling comprises the telecommunications outlets in the work area, the terminations, and the patch cords and jumpers in the TR. Horizontal distribution also involves some important factors:
• General design guidelines: the target is to meet current specifications while facilitating maintenance and replacement, and to allow for future equipment installations and changes in service, since the horizontal cabling is the least accessible part of the system yet is subject to most of the activity of an installation.
• Topology: the horizontal distribution system must use the standard star topology, with the work-area wiring points connected to a horizontal cross-connect in a telecommunications room located on the same floor as the work area.
• Distances: the system must respect the maximum distance of 90 m for the total length of the permanent cable (from the work area to the cross-connect in the TR). Up to 10 m in total is allowed for the work-area cords, jumpers and equipment cords, and the patch cords and jumpers used to connect equipment or the horizontal cabling to the backbone should not exceed 3 m.
• Recognized horizontal media: 4-pair 100 Ω UTP cable, 2-pair 150 Ω STP-A cable or 62.5/125 µm optical fiber (two fibers) are the cabling types recommended by the standard for horizontal distribution, with some observations:
- Coaxial cable, although recognized, is not recommended for new installations.
- Hybrid cables (multiple media types bundled in the same sheath) can be used if each media type meets the transmission requirements and the color specifications for that cable; they must also be distinguishable from multipair UTP.
• Media selection criteria: each work area should be equipped with at least two telecommunications outlets, one typically associated with voice and the other with data. For the first, 4-pair 100 Ω UTP cable of Category 3 or greater should be used; for the second, 4-pair Category 5 100 Ω UTP cable, 2-pair 150 Ω STP-A cable, or two-fiber 62.5/125 µm optical cable.
6. Work area: the components running from the telecommunications outlet to the user equipment, using cords of up to 3 m (e.g. 4-pair UTP cords). The work area is only a reference in the standard, since it is quite variable and its arrangements are rarely permanent.
7.5: ANSI/TIA/EIA-569-A Standard
A major cabling standard focused on the infrastructure specifications for structured cabling, providing design and project-management specifications for all building facilities. It identifies six infrastructure components: entrance facility, equipment room, backbone pathways, telecommunications closets, horizontal pathways and work areas.
1. Entrance Facility: defined as the point of entry of telecommunications services or backbone into the building, which may contain interface devices to public networks. The space must be dry and close to the vertical backbone pathways.
2. Equipment Room: the room whose space is intended for the centralized location of equipment shared by the building's occupants. Its location and design must allow for growth in equipment and for accessibility.
3. General Design Considerations: the equipment room tends to be a centralized space housing the telecommunications equipment (PBXs, servers, routers, among others) of a building, located near the backbone pathway. Its minimum size is 14 m², but to suit the characteristics of specific equipment the design should allow a non-uniform occupation of the building, providing 0.07 m² of equipment-room space for every 10 m² of usable floor space. If the equipment room is designed to be movable, verify that the floor will bear the weight of the equipment to be installed, and check for interference, vibration, altitude, HVAC (a dedicated system for the equipment room), lighting, energy and fire prevention.
4. Inter-Building Pathways: in a campus environment, inter-building pathways are needed to connect separate buildings. ANSI/TIA/EIA-569-A lists underground, buried, aerial and tunnel pathways as the main types used.
5. Underground Inter-Building Backbone Pathways: an underground pathway is considered a component of the entrance facility. Pathway planning should consider the limitations of the topology, ventilation to prevent the accumulation of gas, and vehicle traffic (which determines the thickness of the covering layer and whether or not it should be concrete); where conduits run below the water table, ducts and troughs, including manholes, should be provided.
Cable Distribution System For Servers
The power cords for the servers are installed under the raised floor and arranged in layers or channels. The fiber and coaxial cables that interconnect the routers and switches of the data room to the server room have redundant runs, with one circuit passing under the raised floor and another alongside the server racks. The panels for the distribution of data cables are distributed throughout the server room.
The design of the cabling follows structured cabling standards.
Distribution System Via Cable To WAN:
Fiber Optic :
The composition of an optical fiber allows light energy to propagate through its core by successive reflections.
Optical fibers have several advantages over traditional physical media such as coaxial cable and twisted pair. For example:
• Low transmission loss: fewer repeaters are needed.
• High transmission capacity: more information can be carried.
• Immunity to interference and electrical isolation: the data is not corrupted during transmission.
• Signal security: the fiber does not radiate the propagated light significantly, giving a high degree of security to the information carried.
Modern optical fiber has a very large bandwidth (multi-gigahertz × km) with low attenuation and low dispersion of the transmitted pulses. For these systems, the preferred fibers are those installed at the lowest cost per km per channel.
The Use Of Optical Fiber Also Has Some Disadvantages Such As:
• Fragility of unencapsulated optical fibers
• Difficulty in making fiber-optic connections
• T-type couplers with very large losses
• Lack of standardization of optical components
The transmission capacity (bandwidth) of an optical fiber depends on its length, its geometry and its refractive-index profile. There are two main classes of fiber: single-mode and multimode. Multimode fiber has several propagation modes and, according to the profile of the refractive-index variation between cladding and core, is classified as step-index or graded-index; the difference between them can be seen in the figure below. Its core diameter is quite large, between 50 and 80 microns, so the beam undergoes reflections that limit the signal range to about 2 km. Because of this, multimode optical fibers are used in local or campus networks.
Single-mode fiber, by contrast, has very small dimensions and a greater transmission capacity than multimode fiber; its 10-micron core diameter allows the wave to propagate without reflections. Its reach is significantly greater and the available bandwidth is almost unlimited. Single-mode fibers are used especially in long-distance networks, i.e. metropolitan Gigabit Ethernet networks or SDH and DWDM backbones.
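As an illustration of how attenuation limits link reach, a simple optical power-budget calculation; every figure here (transmit power, receiver sensitivity, per-km and per-connector losses) is an assumed typical value, not taken from the text:

```python
def optical_power_budget_db(tx_dbm, rx_sensitivity_dbm,
                            length_km, fiber_loss_db_per_km,
                            connectors=2, connector_loss_db=0.5,
                            splices=0, splice_loss_db=0.1):
    """Remaining margin (dB) of a fiber link: transmit power minus
    receiver sensitivity, less the fiber, connector and splice losses.
    A positive margin means the link should close."""
    budget = tx_dbm - rx_sensitivity_dbm
    losses = (length_km * fiber_loss_db_per_km
              + connectors * connector_loss_db
              + splices * splice_loss_db)
    return budget - losses

# Example: a 1310 nm single-mode link, 40 km at 0.35 dB/km,
# 0 dBm transmitter and a -25 dBm receiver.
print(round(optical_power_budget_db(0, -25, 40, 0.35), 2))
```

Designers also subtract a safety margin of a few dB for aging and repairs, which this sketch leaves to the caller.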
Some transmission characteristics of optical fibers strongly influence their performance as a transmission medium for systems such as DWDM. When choosing the type of optical fiber for WDM operation, factors such as attenuation, dispersion and nonlinear effects must be analyzed, as they are essential for good system performance.
Each type of fiber behaves differently under WDM operation, which results in restrictions for this type of operation. These restrictions have a direct impact on system performance, limiting the transmission capacity or shortening the reach of the links.
As the reader will note: setting up a Data Center is neither easy nor cheap!
If it is the dream of your life, Go Ahead!!!
ESDS offers complete Data Center Setup Consultancy and Management Services.
Colocation services are essential for any kind of online business, small or big. For the smooth running of your business web servers, without needing to create a dedicated environment for those servers and other hardware, or to hire expert technicians working round the clock, you may find colocation services extremely useful: they give you all the support you may need for installing, managing and maintaining the software and the hardware.
Colocation services are getting more popular these days. However, once you decide that you need a colocation server for your website, it becomes extremely important not to choose just any web hosting company you come across. You will need a proper colocation facility for your business-critical websites.
First, check what the requirements of your website are. If your website is going to have only a few web pages and few visitors, then even a basic web hosting plan will do. A colocation server is needed when you have a serious online business that needs a professional-quality hosting environment for your dedicated servers, with expert web technicians looking after it round the clock.
Colocation services are mainly used by top national and multinational companies, as they work with a lot of data that is important as well as confidential. Such organizations require complete web security and other professional data center facilities that keep their data and websites secure. Web hosting companies that own professional-quality data centers can help such organizations manage the complete web hosting server, along with its hardware and other security features.
There are many great features of colocation services. Managed colocation service providers such as ESDS offer a steady power facility with multiple power grids, top-class power generators and dedicated cooling systems. They also provide an uninterruptible power supply that acts as a back-up to your web servers in the event of power outages, preventing the loss or corruption of important data or connectivity. Adding to these is an equally great feature: a complete customer and technical support system that works round the clock, maintaining and performance-tuning the colocation services and keeping the systems in full operating condition.
Data centers that offer colocation services usually provide the server space, web space, power and cooling facilities, security, high-quality bandwidth, and other web tools and applications that make sure the data is extremely safe, stable and secure. A web host with such data centers provides its own infrastructure for the colocation of its clients' hosting requirements. Without such continually upgraded data center facilities, online businesses would indeed face lots of difficulties.
Colocation service providers are exclusively responsible for setting up the entire infrastructure, the software, the security features, and also the maintenance of the colocation hardware. Some data centers provide Remote Hands services on a fee basis: they will perform basic system support functions such as system reboots, tape swapping, etc.
There are many benefits of managed colocation services, including managed redundant cooling that maintains the optimal temperature for the hardware, uninterrupted power, fire protection using modern fire-detection systems, very high-speed networking, security, and other such facilities. Indeed, managed colocation service providers that offer these services run truly professional, state-of-the-art web systems that take online businesses to a new height of efficiency.