In times when even small companies produce large volumes of data, enterprise storage has come into new focus: where and how do you store data safely, without spending a fortune and without having to maintain a huge data center?
Anyone who relies on stored information to grow their business knows the importance of a high-performance database, prepared for disaster recovery, at the center of the company's critical operations. After all, intelligent management of ever-increasing amounts of information has proven more and more important for IT managers.
One of the most popular storage architectures is the Storage Area Network (SAN). In short, it makes shared storage available to servers over a dedicated network. But there is one major impediment: the cost.
Much has been said about storage virtualization; it has become fashionable and, for many, the great "savior." However, amid all the speculation on the subject, it is common to see business owners feeling lost among concepts and proposals that are often complicated, time-consuming, and sometimes unreliable.
Combining storage virtualization with reduced cost is something that has been highly sought after. In a virtualized environment you have full control and can add or remove resources according to demand. Several companies already provide such solutions, among them HP with the P4000.
The P4000 is a network storage solution based on a grid-storage architecture: the more equipment you add, the greater the performance and availability of the environment. It uses the client's existing network infrastructure, avoiding costly dedicated networking, and it is ideal for server-virtualization environments and for customers buying their first SAN.
In short: a great idea, because it is possible to start small and grow as needed, upgrading to increase capacity. You can start, for example, with 2.4 TB, and storage nodes (modules) can be added on demand, with the increase taking effect immediately.
In addition, it supports many operating systems:
Windows Server 2003 and 2008 (including Hyper-V), Red Hat, Fedora, HP-UX (naturally, as HP's own UNIX), VMware, XenServer, and Mac OS X Leopard.
So instead of worrying about your company's latent storage needs, how much they will cost, how the work is done, and how big a job you will have afterwards, you can focus on managing data and enjoy the results of a single change in strategy: reduced administrative costs, increased employee productivity, and access to accurate information in real time.
Windows Server 2003 R2 represents significant progress in identity and access management, branch-office server management, storage setup and management, and application development.
Microsoft Windows Server 2003 R2 makes it easier and more cost-effective to extend the reach of identities, locations, data, and applications both within the organization and beyond. It is built on the proven code base of Windows Server 2003 with Service Pack 1 (SP1), which improved stability and security, and extends it with new connectivity and management options. Windows Server 2003 R2 thus includes all the benefits of Windows Server 2003 SP1, plus significant improvements in branch-office operation, identity and access management, storage setup and management, and application deployment, both inside traditional organizational boundaries and beyond them.
Windows Server 2003 R2 can be deployed easily and predictably
Windows Server 2003 R2 is built on the reliable code base of Windows Server 2003 SP1. The code shared between the two releases simplifies testing for organizations and speeds up deployment without compromising safety, which increases the efficiency of IT departments and saves both time and money.
Easier branch-office administration
Windows Server 2003 R2 provides enabling technologies that make it easier to integrate branch-office servers into a larger, enterprise-wide IT ecosystem. Branch-office servers maintained with R2 keep their performance, availability, and productivity advantages while avoiding common problems such as limited connectivity options and the management overhead that comes with them.
Simplify Identity and Access Management
Active Directory® Federation Services (ADFS), a feature of Windows Server 2003 R2, is designed to help administrators with identity-management problems by making it safer for users to share resources across security boundaries. ADFS extends the benefits of Active Directory to collaboration with partners, which increases user productivity, improves IT efficiency, and strengthens protection. Internet-accessible, Web-based environments can extend Windows Server credentials, providing stronger protection through native authentication, delegated administration, and closer integration with Microsoft authentication technologies. ADFS is the first element of Microsoft's next-generation, Web services (WS-*) based information-security infrastructure.
Windows Server 2003 R2 also ships an enhanced version of Active Directory Application Mode (ADAM) and includes UNIX identity-management features. These include Server for NIS, which integrates Windows® domains with UNIX-based Network Information Service (NIS) domains, and Password Synchronization, which makes it easy to keep passwords securely in sync between Windows and UNIX servers.
Efficient storage management
Windows Server 2003 R2 includes new tools that provide a central view of storage, make storage solutions simpler to design, deploy, and maintain, and improve monitoring and reporting. They help administrators manage the various elements of IT storage better and optimize the use of available disk space. The two major storage-management features are:
File Server Resource Manager (FSRM): lets administrators oversee and monitor storage usage by generating storage reports, applying quotas to folders and volumes, and screening the types of files stored on the server. FSRM helps you plan and optimize storage through quotas and scheduled storage reports.
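The idea behind quotas and file screening can be sketched in a few lines of Python. This is only a toy model, not the FSRM API (FSRM itself is configured through its management console and command-line tools); the blocked-extension list and quota value are invented for illustration:

```python
import os

# Hypothetical screening policy: block media files (illustrative only).
BLOCKED_EXTENSIONS = {".mp3", ".avi"}

def audit_storage(root, quota_bytes):
    """Walk `root`, total the space used, collect files that violate
    the screening policy, and report whether the quota is exceeded."""
    used = 0
    violations = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            used += os.path.getsize(path)
            if os.path.splitext(name)[1].lower() in BLOCKED_EXTENSIONS:
                violations.append(path)
    return {"used": used,
            "over_quota": used > quota_bytes,
            "violations": violations}
```

A report like this, produced on a schedule, is essentially what the FSRM storage reports give the administrator: who is using the space, and which files should not be there.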
Storage Manager for SANs: lets customers provision storage in storage area network (SAN) subsystems. Built on Microsoft's Virtual Disk Service (VDS) technology, it can bring Fibre Channel (FC) and iSCSI (Internet SCSI) storage subsystems into service, and it includes switching and fault-tolerance functions.
Windows Server 2003 R2: Key Features and Benefits
Windows Server 2003 R2 enhances productivity, improves security, and makes operations more efficient through the following services.
INCREASING THE EFFICIENCY OF IT
Branch-office server management:
Central management tools let administrators easily manage the file and print infrastructure running at remote sites, and faster data-replication options help keep business flowing smoothly.
Identity and access management:
Building on the value offered by Active Directory, it provides ways to give users secure access across organizational boundaries:
Increased user efficiency: extranet users can take advantage of Web single sign-on, and by federating user identities there is less need for separate passwords on internal and partner-operated Web applications.
IT efficiency: application access is managed centrally, and delegating user-management rights to trusted partners reduces the need to issue new passwords.
Web access management on demand: extranet applications can rely on a central authority for access control.
Enhanced security: disabling ("locking") a user's Active Directory account automatically disables that user's extranet access as well.
Stricter regulatory compliance: even when users go outside the security perimeter to reach applications run by partners, access remains under policy control.
Better interoperability of heterogeneous systems: interoperability specifications based on Web services provide comprehensive multi-platform Web SSO, while identity integration between Windows and UNIX systems that use the Network Information Service (NIS) provides user-account management, dynamic updates, and automated password synchronization between Windows and UNIX operating systems.
Detailed reports based on information about how storage is actually used.
Monitoring and control of space usage:
The types of files stored on servers can be restricted through file screening.
Storage Area Networks (SANs) can be configured and deployed easily, without any additional third-party tools.
File sharing across heterogeneous operating systems.
Consolidated management and monitoring across multiple platforms.
Administrators and developers can continue to apply their existing UNIX knowledge.
UNIX/Linux utilities can be downloaded and ported.
A HIGHER-QUALITY WINDOWS SERVER
Greater security: Service Pack 1 dramatically reduces the attack surface of Windows Server 2003. Besides fixing known security holes, it puts preventive measures in place against future threats. SP1 introduces a role-based security paradigm, so the server no longer needs to run more services than its roles require, eliminating the footholds where hackers and malicious programs could get in. The role-based security features also streamline the installation of future updates, reducing the time IT must spend dealing with newly discovered vulnerabilities.
Greater reliability: an information system whose security is exposed and at risk of attack cannot be called reliable, because no one can count on it with confidence. The consequences of attacks, and the resources spent wrestling with hard-to-use security features, reduce a company's ability to perform its core business well. SP1 provides a solution at this intersection of security and productivity: it confronts security threats up front, so problems do not have to be "swept up" after an attack, and it simplifies and streamlines update management, reducing the effort the security front demands so that more resources can be focused on core business activities.
.NET Framework 2.0:
Windows Server extends its value to Internet-facing Web environments:
Active Directory Federation Services provides stronger protection for authentication, and single sign-on is available for extranet applications.
The system requirements for Windows Server 2003 R2 are essentially the same as for Windows Server 2003; the full requirements can be found here:
The requirements listed are based on pre-release software; final system requirements may vary.
As already mentioned in the previous post, the basic and mandatory requirement for using virtualization with Windows Server 2008 Hyper-V is the processor: it must have a 64-bit architecture and hardware support for DEP and for virtualization (Intel VT or AMD-V). The other requirements depend heavily on the tasks you plan to perform: processing power, memory, and disk space should be chosen based on what is needed to run the required virtual machines and all their applications. Sizing a virtual environment is itself a very interesting and large topic, and maybe I will write about it soon.
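As a rough illustration of what such a hardware check looks for, here is a small Python sketch that inspects a list of CPU feature flags. The flag names are the standard ones reported on Linux in /proc/cpuinfo ("vmx" for Intel VT, "svm" for AMD-V, "nx" for the no-execute/DEP capability); on Windows you would use Microsoft's own tools to see the same capabilities:

```python
def supports_hw_virtualization(cpu_flags):
    """Given a list of CPU feature flags (as in the 'flags' line of
    /proc/cpuinfo on Linux), check for hardware virtualization
    support: 'vmx' = Intel VT, 'svm' = AMD-V. Hyper-V also requires
    DEP, which shows up as the 'nx' (no-execute) flag."""
    flags = set(cpu_flags)
    return ("vmx" in flags or "svm" in flags) and "nx" in flags
```

Remember that these features often also have to be enabled in the BIOS, so a capable CPU is necessary but not always sufficient.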
There are two ways to increase reliability. The first is more reliable components: for example, hard drives used in servers have a much longer MTBF (mean time between failures) than those used in home computers. The second is redundancy: all components, or at least the most critical ones, are duplicated. For example, hard drives can work in "mirror mode" (RAID 1): if one drive fails, the server keeps working on the second, and only the system administrator, not the users, even notices. These two approaches are not mutually exclusive; on the contrary, they complement each other. Clearly, any increase in resiliency raises the price of the whole system, so it is important to find a middle ground. First and foremost, you should estimate, in monetary terms, the damage a system failure could cause, and invest in resiliency in proportion to that amount. The failure of a hard drive in a home computer that stores only some photos and assorted odds and ends is no great disaster: at worst you pay for a new drive, though I would still keep a copy of important data somewhere else, such as on another hard drive or a DVD-RW.
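The benefit of mirroring is easy to quantify under an idealized model. Assuming independent failures (real disks fail in correlated ways, and a mirror is vulnerable while it rebuilds, so treat this as an upper bound on the benefit), a sketch:

```python
def failure_probability(afr_single, copies):
    """Probability that ALL `copies` of a mirrored component fail
    within a year, assuming independent failures with a per-component
    annual failure rate of `afr_single` (an idealized model)."""
    return afr_single ** copies

# A single disk with a 5% annual failure rate:
single = failure_probability(0.05, 1)    # 0.05
# The same disk mirrored (RAID 1): both copies must fail.
mirrored = failure_probability(0.05, 2)  # roughly 0.0025
```

Even this crude model shows why duplicating only the most critical components already buys a large share of the available reliability.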
Beyond individual server components (hard drives, memory modules, and so on), the whole server can be made redundant. In that case, two or more servers operate as a group but appear to users as a single server that runs the applications and answers requests. General information about the cluster configuration is stored on a shared disk resource referred to as the quorum, and all cluster nodes require continuous access to it. A data storage system with an iSCSI, SAS, or Fibre Channel interface can serve as the quorum resource.
If one of the servers (called "cluster nodes") fails, the applications are automatically restarted on the nodes that are still functioning, and each application either does not stop at all or stops for a time short enough to avoid large losses. The process of moving applications from a failed node to a working one is called failover.
To identify failed nodes in time, all nodes in the cluster periodically exchange messages known as "heartbeats." If a node stops sending heartbeats, a failure is assumed and the failover process begins.
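The heartbeat-and-failover logic just described can be modeled in a few lines of Python. This is a toy model, not how Windows Server clustering is actually implemented; the node names, timeout, and VM lists are invented for the example:

```python
class HeartbeatMonitor:
    """Toy cluster-heartbeat model: remember when each node was last
    heard from; a node silent for longer than `timeout` is declared
    failed, and its services become candidates for failover."""

    def __init__(self, nodes, timeout):
        self.timeout = timeout
        self.last_seen = {node: 0.0 for node in nodes}

    def heartbeat(self, node, now):
        """Record that `node` sent a heartbeat at time `now`."""
        self.last_seen[node] = now

    def failed_nodes(self, now):
        """Nodes whose last heartbeat is older than the timeout."""
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]

def failover(vms_by_node, failed, healthy):
    """Move every virtual machine from the failed node to a healthy one."""
    vms_by_node[healthy].extend(vms_by_node.pop(failed, []))
    return vms_by_node
```

For example, with a 5-second timeout, a node last heard from 6 seconds ago is reported failed, and its workload is handed to a surviving node.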
Failover: moving the service off the failed node
In some cases, depending on the settings, an application can be moved back to the failed node once it has been repaired, a process called failback:
Failback: moving the service back after the failed node recovers
Requirements for creating a failover cluster in Windows Server 2008
So, what do we need to create a cluster in Windows Server 2008? First, a shared disk resource that will be used as the quorum as well as for data storage. It can be any storage system (SAN) that supports the iSCSI, SAS, or Fibre Channel protocols, and of course every cluster node must have the appropriate adapter to connect to it. Ideally, all servers acting as cluster nodes should have completely identical hardware; failing that, they should at least use processors from the same manufacturer. It is also highly desirable that the nodes communicate with each other over more than one network interface: the extra interface serves as an additional channel for heartbeat exchange and, in some cases, for other purposes as well (for example, Live Migration). If we are going to use virtualization, all nodes must also meet the system requirements for Hyper-V (in particular, the processor requirements).
Complex solution: virtualization + failover cluster
So, as already mentioned, a virtualization solution can be deployed on top of a failover cluster. What does this give us? It's simple: we keep all the advantages of virtualization while getting rid of its main drawback, the single point of failure that the physical server represents. If one of the servers fails, or during planned outages (hardware replacement, OS updates that require a reboot, and so on), the virtual machines can be moved to a working node quickly, or even invisibly to users. Recovery time after a failure is therefore measured in minutes, and users will not notice scheduled server shutdowns at all. There is one disadvantage: a more expensive system. First, you will probably need to buy a storage system, which costs real money, sometimes a lot of it. Second, you need at least one more server. Third, working in a cluster requires a more expensive edition of the OS, Enterprise or Datacenter. In principle, this is offset by the free right to run a certain number of guest operating systems (up to 4 per server with Enterprise, without restriction with Datacenter), or you can use a free product, Microsoft Hyper-V Server, which in its R2 release supports clusters.
Ways to move virtual machines between nodes in a cluster
So, suppose we have a cluster running virtual machines. Recently, users have started complaining that system performance is not good enough. Performance analysis showed that the applications lack RAM and, at times, CPU power. It was decided to add several RAM modules and an additional processor to the server. Once the processor and memory modules arrived from the vendor, the practical question arose of how to install them. As we know, this requires shutting the server down for a while; yet users need to work, and even 10 minutes of downtime means losses. Fortunately, we built our cluster earlier, so we do not have to stay after hours: we just have to move the running virtual machines to another server.
How can this be done? There are three ways:
1) Move: simply moving a virtual machine from one host to another. The virtual machine is first taken offline (by shutting it down or saving its state) and then started on another node. This is the simplest method, but the slowest and the most noticeable to users: before the move, all users must be notified so they can save their data and exit the application.
2) Quick Migration: the entire contents of the virtual machine's RAM are saved to disk, and the machine is then started on the target host with its memory contents restored from that disk.
3) Live Migration: one of the most exciting new technologies in Windows Server 2008 R2. Live Migration copies the memory contents of a running virtual machine over the network from one host to another, bypassing disk. The process is somewhat similar to creating shadow copies (VSS) of open files. The final switchover takes under a second, less than a TCP connection timeout, so users notice nothing at all. As a result, all scheduled maintenance that requires shutting down a host can be carried out during normal working hours without distracting users from their work.
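The essence of Quick Migration (pause the VM, write its memory state to shared disk, restore it on the target node) can be mimicked in a short Python sketch. This is a conceptual toy, not the Hyper-V mechanism; `start_on_target` is an invented stand-in for resuming the VM on the other node:

```python
import os
import pickle
import tempfile

def quick_migrate(vm_state, start_on_target):
    """Toy model of Quick Migration: serialize the VM's in-memory
    state to (shared) disk, then restore it and hand it to the
    target node via the `start_on_target` callback."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(vm_state, f)    # pause: memory contents go to disk
        with open(path, "rb") as f:
            restored = pickle.load(f)   # the target node reads them back
    finally:
        os.remove(path)
    return start_on_target(restored)
```

Notice that the downtime of this scheme grows with the size of the VM's RAM, since the whole image has to cross the disk; that is exactly the round trip Live Migration avoids by streaming memory directly over the network.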
It should also be noted that all of these methods, Live Migration included, are standard features of Windows Server 2008 R2 and do not require buying any additional software or licenses (although each operating system running inside a virtual machine still requires its own license).
Server virtualization lets a single server do the work of the ten you previously needed, which saves on virtually everything: hardware, software licenses, and overheads. At the same time, naive virtualization sharply reduces overall system reliability. In this article we saw how to restore that reliability by using failover clusters. The next article will be devoted entirely to one of the "highlights" of Windows Server 2008 R2: Live Migration.