In this brief article we will cover the installation and configuration of Apache on CentOS, along with some other functions required of today's web servers. Although there is already plenty of material on the subject (a simple Google search returns many texts), I decided to write about it because it is something simple that sometimes escapes memory; having the text published makes it easier to look up later, and I also did not find any similar text here.
To begin, let's put down some basic descriptions, because there's always a first-timer. CentOS is an Enterprise-class Linux distribution derived from source code freely distributed by Red Hat and maintained by the CentOS Project. Its version numbering follows that of Red Hat Enterprise Linux: each CentOS release is based on the corresponding Red Hat Enterprise Linux release. The basic difference between them is the paid support provided with the purchase of Red Hat Enterprise Linux.
Functionally, the two can be considered clones. CentOS provides broad access to industry-standard software, including full compatibility with packages prepared specifically for Red Hat Enterprise Linux systems. This gives you the same level of security and support via updates as other Enterprise Linux solutions, but at no cost. It supports both mission-critical server environments and workstations, and also has a Live CD version.
CentOS has numerous advantages, including: an active and growing community, rapid development and testing of packages, an extensive download network, accessible developers, multiple support channels (including support in India), and commercial support through partners. Apache (the Apache HTTP Server, or simply Apache) is the most successful free web server.
It was created in 1995 by Rob McCool, then an employee of the NCSA (National Center for Supercomputing Applications). A survey conducted in December 2007 found that Apache served 47.20% of the active servers in the world. It is the core technology of the Apache Software Foundation, which is responsible for more than a dozen projects involving web technology, data processing and the execution of distributed applications.
The server implements the HTTP protocol. Its features are maintained through a modular structure, which among other things allows users to write their own modules using the Apache API. It is available in versions for Windows and for the various POSIX systems (Unix, Linux, FreeBSD, etc.).
PHP (a recursive acronym for "PHP: Hypertext Preprocessor") is a free, interpreted programming language widely used for generating dynamic content on the World Wide Web, as on Wikipedia.
MySQL is a database management system (DBMS) that uses SQL (Structured Query Language) as its interface. It is currently one of the most popular databases, with more than 10 million installations worldwide.
LVS (Linux Virtual Server) is a set of utilities and patches for the Linux kernel that allows the creation of a single virtual server from multiple nodes, providing load balancing and high availability by eliminating single points of failure (SPOF): if a node goes down, the service is not interrupted.
With this system the end user connects to a service (HTTP, FTP, DNS, VoIP, etc.) as if it were hosted on a single server, when in fact there is a whole infrastructure behind the operation of that service. Its management is entrusted to LVS, which routes each incoming request to the node that will actually process it and return the result, following one of the many scheduling algorithms implemented in LVS.
The scalability of this system comes from the ability to add or remove nodes as needed, with no need to interrupt services. The same goes for high availability: when the system detects a malfunction of a daemon or of a whole node, that node is temporarily removed from the set of nodes available to process requests.
Operation And Forwarding Methods
Let's see how LVS works in detail. LVS operates through a three-tier architecture:
- The load balancer (obviously redundant), which receives requests from users, redistributes them to the various nodes that make up the cluster, and also monitors their proper functioning.
- The servers that make up the cluster (nodes), which process the actual requests forwarded by the load balancer (for example, returning a web page).
- A shared, centralized storage, so that data is uniformly available to all nodes at the same time, without discrepancies.
The Load Balancer Can Route Requests In Three Different Ways:
Virtual Server via NAT (VS/NAT): With this method all requests, both incoming and outgoing, pass through the load balancer. The steps below describe the process in detail.
When a user accesses a web service managed by the cluster, the load balancer receives a packet addressed to the cluster's virtual IP.
At this point the load balancer examines the destination address and port of the packet. If they correspond to a virtual service in the virtual server rule table, a physical server is selected to process the request according to a certain scheduling algorithm, and the connection is added to the connection hash table. The destination address and port are then rewritten by the load balancer so the packet can be sent to the selected physical server.
The request is finally processed by the physical server.
When the response packet is sent back, the load balancer rewrites the source IP, changing it from the physical server's address to the cluster's virtual IP.
The end user receives the result, and once the connection is terminated (or has timed out), it is deleted from the connection hash table.
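The VS/NAT flow above can be sketched in a few lines of Python. This is only an illustration with hypothetical names (real LVS does all of this inside the kernel): match the packet against a virtual server rule table, pick a real server with a scheduler, record the connection in a hash table, and rewrite the destination.

```python
# Illustrative sketch of the VS/NAT forwarding decision; not real LVS code.
from itertools import cycle

class VirtualServer:
    def __init__(self, vip, port, real_servers):
        self.vip, self.port = vip, port
        self._rr = cycle(real_servers)   # simple round-robin scheduler

    def schedule(self):
        return next(self._rr)

class NatBalancer:
    def __init__(self, rules):
        # virtual server rule table, keyed by (virtual IP, port)
        self.rules = {(vs.vip, vs.port): vs for vs in rules}
        self.connections = {}            # connection hash table

    def handle(self, src, dst, dport):
        vs = self.rules.get((dst, dport))
        if vs is None:
            return None                  # not a virtual service: ignore
        key = (src, dst, dport)
        if key not in self.connections:  # new connection: pick a real server
            self.connections[key] = vs.schedule()
        real = self.connections[key]
        # rewrite the destination toward the chosen physical server
        return {"src": src, "dst": real, "dport": dport}

lb = NatBalancer([VirtualServer("10.0.0.1", 80, ["192.168.0.11", "192.168.0.12"])])
first = lb.handle("1.2.3.4", "10.0.0.1", 80)
again = lb.handle("1.2.3.4", "10.0.0.1", 80)   # same connection, same server
```

Note how the connection table ensures that packets belonging to an established connection keep going to the same physical server; in real LVS the entry is removed when the connection closes or times out.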
Virtual Server via IP Tunneling (VS/TUN): Unlike VS/NAT, in this case the load balancer sends the packet containing the client request to the server via IP tunneling, and the physical server, once it has processed the request, sends the response directly to the client without passing through the load balancer again. This technique is used primarily for geographically distributed clusters.
Virtual Server via Direct Routing (VS/DR): This method assumes that the load balancer is physically connected to all the physical servers (for example, through a network switch). The cluster's virtual IP address is shared by the load balancer and all the physical servers through a non-ARPing network interface (e.g., an alias on the loopback device) configured with the cluster's virtual IP, while the load balancer has a standard interface to receive all requests from outside. Upon receiving a packet, the load balancer changes its destination MAC address to that of the physical server chosen to process it, and once the request is processed, the physical server sends the response directly to the client.
Now that we have analyzed the different methods of forwarding requests, let's see which balancing algorithms we can use with LVS.
- Round Robin
- Weighted Round Robin
- Destination Hashing
- Source Hashing
- Weighted least-connection
- Never queue
- Locality-based least-connection
- Locality-based least-connection with replication scheduling
- Shortest expected delay
For brevity I will describe only some of the algorithms, beginning with the classic Round Robin and its variation (Weighted Round Robin).
This algorithm works by simply rotating through the physical servers, sending each new request to the next one in turn. So if four requests arrived at the load balancer and there were three physical servers currently running in the cluster, the first request would be forwarded to the first physical server, the second to the second, the third to the third, and the fourth again to the first.
The Weighted Round Robin variant can manage the priority of physical servers by giving each a different weight. So if we bought a new server that is more powerful than the old ones and thus able to process more requests, we can give it a higher share of the requests sent from the load balancer. For example, with three physical servers in the cluster, two less powerful and one more powerful, we might assign the two less powerful servers a weight of 30% each and the more powerful server a weight of 40%.
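The two rotations just described can be sketched as follows. This is a toy Python illustration (server names and the grouped expansion of weights are mine; real LVS interleaves weighted picks inside the kernel rather than expanding a list):

```python
# Round robin: hand each new request to the next server in turn.
def round_robin(servers):
    i = 0
    while True:
        yield servers[i % len(servers)]
        i += 1

# Weighted round robin, naive version: a server with weight 4 appears
# four times per cycle, so it receives proportionally more requests.
def weighted_round_robin(weighted_servers):
    expanded = [s for s, w in weighted_servers for _ in range(w)]
    return round_robin(expanded)

rr = round_robin(["A", "B", "C"])
four = [next(rr) for _ in range(4)]          # the 4th request wraps back to "A"

# Two weaker servers (weight 3) and one new, stronger server (weight 4),
# mirroring the 30%/30%/40% split from the example above.
wrr = weighted_round_robin([("small1", 3), ("small2", 3), ("big", 4)])
ten = [next(wrr) for _ in range(10)]         # "big" gets 4 of every 10 requests
```

Out of every ten requests, the stronger server receives four and each weaker server three, exactly the 40/30/30 proportion of the weights.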
The least-connection scheduling algorithm checks which server has the fewest active network connections and forwards the new request to it; unfortunately, it is efficient only if all the servers have the same capacity.
The shortest expected delay scheduling algorithm is based on estimating which server could process the request most quickly; the estimate is given by (active connections + 1) / (server weight). Again, the server's capacity relative to the others is expressed by its weight.
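The shortest-expected-delay formula from the text, (active connections + 1) / weight, is easy to sketch. The server names and numbers below are illustrative only:

```python
# Pick the server with the shortest expected delay:
# estimate = (active connections + 1) / weight,
# where the weight expresses the server's relative capacity.
def shortest_expected_delay(servers):
    # servers: {name: (active_connections, weight)}
    return min(servers, key=lambda s: (servers[s][0] + 1) / servers[s][1])

servers = {
    "old1": (3, 1),   # (3+1)/1 = 4.0
    "old2": (1, 1),   # (1+1)/1 = 2.0
    "new":  (5, 4),   # (5+1)/4 = 1.5  <- chosen despite having more connections
}
choice = shortest_expected_delay(servers)
```

Unlike plain least-connection, the weight lets a powerful server win even when it already holds more connections than its weaker peers.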
That's all for now; don't miss the second part of this article, which will focus on high availability.
I guess not many people ask this question, because by default it is assumed that SaaS has to be accessed through the browser. And the truth is that the vast majority of SaaS applications are accessed via the web, which is the logical way to think about it.
But not everything on the web is SaaS, and although some SaaS products are web applications, not all of them are; there is therefore no requirement to access them from a browser. The SaaS concepts that relate to this issue are:
1. Access to the software without installation or up-front investment
2. Maintenance and upgrades of the application by the provider
3. Access via the internet, i.e., from anywhere in the world
None of these means that the application must be accessed through the browser, although that is often the most natural way.
And then, if not accessed through a browser, what other forms of access are there? Through certain runtimes that allow code to run or applications located on servers to execute, i.e., outside your PC. The most popular are: Java Web Start (with the Java JRE), Adobe Flash Player, Adobe AIR and Microsoft Silverlight.
Java Web Start is installed with the Java JRE on the client machine and ensures that you're running the latest version of the application hosted on the server. You can launch it from the browser or from the client's desktop.
Adobe AIR is installed on the client as a runtime for running applications located on the server. With Adobe AIR you can schedule when you want the application to be updated, and it is a very easy task.
The most notable disadvantages are:
1. You need to install the runtime on your PC. Adobe Flash Player has it easier, because roughly 95% of browsers already have it installed, but the runtime does not update itself.
2. The access time to the application is longer the first time.
3. Increased consumption of resources on the client PC.
The advantages would be:
All are cross-platform (including Silverlight), i.e., the runtime can be installed on different operating systems and the applications work the same way.
Development time is usually much shorter.
In my opinion, when a SaaS application is aimed at commercial users rather than at the broad mass of web users, I prefer this runtime-based solution because of all these advantages. Among the runtimes, the Adobe Flash Player option (because of its ubiquity) may be the safest bet; if you want to reach the largest possible number of users, stick to standard browser technologies.
And you, what do you prefer?
Cloud services typically implement common features and require an agreement about the quality of service (an SLA). In such cases, the organization's business model faces difficulties related to the management of cloud services. Cloud services management, as a system development process, is aimed at using the power of cloud computing to solve business problems. It is possible to rely on the organization's own specialists, but there are other ways to implement this approach effectively.
With the spread of cloud services in enterprises, a difficult problem arises: controlling their use and performance, as well as integrating them with internal resources. The result is a new area of activity: the cloud services broker, defined as an intermediary that manages and integrates cloud services. These brokers may be today's value-added resellers (VARs) and system integrators. They will be responsible for compliance with SLAs and are most likely to specialize in certain industries. They are especially useful for small and medium-sized businesses that are poorly versed in the intricacies of the cloud services market and have difficulties using it. Such brokers can take advantage of the fast-growing cloud market and earn good money on the services they resell or integrate.
Managing clouds will be the main activity of cloud brokers. Indeed, cloud service providers offer ready-to-use services, but generally do not consider the specific needs of your business, or the fact that these services need to be integrated into a single solution with other services you use. For enterprises to leverage cloud services, someone has to extend, integrate and customize them. In most cases this does not require changes to a service's functionality, but simply supplementing or aggregating services while ensuring quality of service.
The basic idea of cloud computing is to allow companies to receive services on an outsourcing basis. Building a completely new technology base and training employees to integrate, extend or manage these services would require large expenditures. Nevertheless, some companies intend to manage cloud services themselves. Such an approach will be more the exception than the rule, but if your company has chosen this path, you should make sure it will solve business problems rather than complicate them. In addition, you should verify that there is a good reason to deploy cloud infrastructure on premises: the potential benefits must outweigh the costs and risks.
But most companies do not implement their own cloud and instead turn to a cloud broker. Since there is strong demand for such services, trying to handle everything in-house would only increase companies' costs and the amount of related work.
Eventually this could lead to the company being tied to a particular provider and set of services, because having invested in extending or integrating those services, you will not want to lose that investment by changing services or providers.
The solution is to turn to cloud brokers, who provide mediation or aggregation of cloud services, enabling your company to get what it wants. With the relevant qualifications, they play a role similar to system integrators and provide cloud service management to many customers. Over time, these system integrators will become VAR-resellers of the services they extend. In this case your company will not be tied to a single cloud provider, because you pay the broker, not the provider. If you want a change, you can change the broker, go directly to the provider, or ask your broker to provide other enhancements.
In the future we should hardly expect a clear division between public and private clouds. Companies are likely to use different options, including hybrid models, virtual private clouds and all possible combinations. As cloud service providers spread, we should also expect the ability to customize the most popular services.
Typically, new servers have a lot of interesting features and benefits associated with data processing, memory, networking, etc. But due to budgetary constraints during the initial procurement of servers, one often has to be content with only a minimal part of these capabilities. As your company's business grows, it may become possible to increase technical capacity and upgrade the servers.
If you decide to upgrade your dedicated server, how do you get the most from your investment?
When planning a server upgrade, it is important to understand that we are entering a time when processor performance is sufficient for virtually all complex computing tasks. Typically, today, performance problems do not arise because the computational load has become so intense that it requires a faster machine. In other words, the problem is not CPU performance. This is frequently the case both in enterprise systems and in "cloud services" platforms. Therefore, upgrading servers, rather than replacing them, can be financially justified and bring real benefits and cost savings.
Here are four areas in which money spent on upgrading existing servers can have a good return on investment.

Expand the memory in servers

For even more performance per dollar spent, expand memory when upgrading servers. With the growing popularity of virtualization, memory requirements increase: a hypervisor can consume an amount of system memory that previously occurred only during peak loads. In fact, expanding memory to the system's limit can save money by enabling virtualization technologies to consolidate workloads and use computing resources efficiently. Increasing the amount of memory raises system performance, allows the use of equipment the company has already acquired, can reduce the time needed to handle multiple applications running on one server, and can extend the life of the server by two or three years.
Use a hard drive
If the budget for modernizing IT systems allows, you can spend some money on more effective local storage. Hard drives that spin at high speed (for example, SAS drives at 15,000 revolutions per minute), drives that use a faster protocol, or high-speed controllers all enhance overall system performance. In addition to improving performance, upgrading disk storage can also reduce power consumption, because modern hard disks are designed with energy efficiency in mind. Drives that consume little power, coupled with a controller and an operating system that support power-saving plans, can significantly reduce power consumption. This provides a fast return on the new hard drives.
Install converged network adapters

With the growing popularity of virtualization in data centers and the construction of a single high-speed network combining Ethernet and Fibre Channel, converged network adapters have become popular. Installing new adapters can be difficult if you're working with rackmount servers, as the motherboard or chassis may not have room for additional PCI and PCI-Express cards. However, medium-sized servers or even mini-towers, which are laid out differently, may have room to insert a modern converged network card that can carry both Ethernet and Fibre Channel traffic. Naturally, converging the networks also requires advanced network switches or a unified computing platform.
Do not forget the backup infrastructure
Pay attention to infrastructure redundancy and duplicate elements. Consider acquiring RAID systems for file and mail servers, or perhaps more expensive solutions that allow disk arrays to be expanded in the future and are prepared for emergencies. Also, do not forget about uninterruptible power. A good investment will include a UPS, redundant internal power supplies, and the use of "hot" standby technology with subsequent replacement of failed system hardware.
Solutions to restore the work after emergency situations
Solutions to restore operation after emergencies are costly and sometimes not included in the initial purchase, but they are very important functions of the IT infrastructure. Specialized remote-access cards for servers, which let you remotely start and shut down the machine, access the server console, and perform tasks that normally require a physical presence, are expensive, but they pay off in the event of a problem in remote offices or remote data centers.
Development of the virtualization segment as a whole fully follows the well-known curve describing how public interest in new technologies varies over time relative to their actual use. However, in contrast to SOA, whose use still lags well behind the projections of three or four years ago and whose prospects remain unclear, with respect to virtualization it can be said unequivocally: the public frenzy around the topic is decreasing significantly, but the scope of use of these tools is growing rapidly, which can be summed up by the adage "less noise, more business."
In recent years, virtualization discussions focused almost exclusively on the problems of server consolidation. This trend will certainly continue to evolve, both in the number of customers and in the depth of virtualization's penetration into IT infrastructure. The trend is quite clear, and so we can fully agree with IDC's analysis, which in the autumn of 2009 noted the Indian virtualization market's passage from the "learning opportunities" stage to the stage of extended use: from virtualizing individual servers to creating virtual environments across a data center.
Customers' interest has quickly shifted to building an integrated virtual infrastructure and managing it. Superficially this is reflected in the fact that the term "hypervisor" has become a staple of articles on virtualization. At the same time, an increasingly significant role in this market is being played by companies that do not deal with hypervisors but have a very respectable position in IT infrastructure management. As customers accumulate experience with virtualization, their confidence in the technology grows, markedly accelerating the transfer of business-critical applications and services and the adoption of a dynamic, rather than static, model of virtual environments.
However, despite virtualization long being recognized as a leading trend of the IT platform, it should be understood that the formation of this segment — in terms of penetration into customers' IT systems on the one hand, and the balance of power in the market on the other — is still far from complete. Although many companies (perhaps even the majority) use server virtualization, its level (the percentage of virtual servers in the server infrastructure) is variously estimated at less than 15-20%. It should also be borne in mind that the least business-critical tasks are virtualized first.
Here we must note that while virtualization has been in the field of attention of the world's leading analysts for many years, until recently experts explicitly avoided the traditional quantitative assessments of vendors' positions, at best estimating the size of the market as a whole. The situation changed only with studies in the middle of this year, when Gartner first unveiled its "Magic Quadrant" for the server virtualization market. Commenting on that report, we noted that even the overall assessment of the situation clearly shows the incompleteness of market formation and, moreover, casts doubt on whether this IT segment even exists as a separate part of the platform software market.
The quadrant seems to show that VMware has created a gap with such a reserve of strength that there is no need to talk about serious competition. Yet the fight for the championship is still to come, and VMware's main competitors will be Microsoft and Oracle — and not just because they are major platform vendors. The fact is that they offer virtualization strategies different from VMware's and try to play on this field not by others' rules, but by their own.
Recall that VMware considers virtualization an independent segment of infrastructure software that should push the traditional OS into the background, or even remove it entirely as unnecessary. Microsoft holds the opposite opinion, considering virtualization only as part of the OS. Oracle, meanwhile, bets not on virtualization software at all, but on its own software first — above all its applications — with virtualization regarded as a means to support them.
Here it is useful to recall how virtualization applies to IT in the first place. After all, IT was originally built on the principle of virtualizing computing processes, so "virtualization technology" sounds like buttering butter. This paradox can be resolved with a formulation found a few years ago in one encyclopedia: virtualization in IT means tools that extend the capabilities of the traditional IT architecture. This implies an unexpected conclusion: as soon as these tools become part of the traditional IT architecture, they no longer belong to the category of virtualization.
As is known, the use of virtualization on x86-architecture computers began in the late 1990s with the PC. Server technologies appeared two or three years later and quickly took the lead in market demand, relegating PC issues to the background. However, in the second half of 2009, a growing interest in virtualizing personal computers became noticeable. There are several reasons for this trend. The first are, of course, objective factors, such as the rising overall level of confidence in virtualization, coupled with customers' desire to reduce operating costs and support an expanding range of mobile users. In addition, note the beginning of the mass migration of enterprises to Windows 7, which requires solving the problem of supporting legacy applications on this OS.
It should be noted that PC virtualization is a very difficult technical area, and in no way a "simplified" version of the server technologies. Moreover, in many respects the virtualization of personal systems is much more complex than that of servers. This can be seen simply from the fact that desktop virtualization involves several different (often overlapping) organizational and technical areas.
But to make sense of all this variety, there are two main types: client-side and server-side.
In the first, in general terms, all computation is performed on the PC (possibly fully autonomously); in the second, it is performed on the server, and the workstation (more precisely, the user's terminal device) performs only user-interface functions. The classic way to implement the first option is the client virtual machine, which began with x86 virtualization (the pioneer in this direction, VMware Workstation, is now at version 7.1.4). The second option is the Virtual Desktop Infrastructure (VDI) architecture, which has long been in the IT community's focus.
We will not delve into other virtualization solutions for the PC; just note that among them there are many different approaches and combinations (e.g., application virtualization, management of client virtual machines from a server).
The fundamental difference here is that client virtualization is aimed at "correcting" the deficiencies of the desktop OS — primarily supporting legacy applications and ensuring reliable operation of applications — while the goal of VDI is reducing IT costs and supporting mobile corporate employees. From this it is clear that client virtualization mostly addresses tactical problems, while VDI can be classified as strategic.
The term VDI appeared in the IT market a few years ago, and many vendors have already stated that they have VDI offerings. However, until recently the question of this area's prospects remained open: leading analysts spoke of a practical absence of customer demand. In 2009, it seems, a decisive change occurred in the VDI market: implementation of this architecture took off, both worldwide and in India. Among the reasons are improvements in VDI products themselves, the increased capacity and reliability of the Internet, and companies' need to reduce operating expenses (it is generally considered almost proven that VDI does not save capital costs).
A clear reflection of VDI's prospects is, in particular, the significant correction of Microsoft's position on the matter. Where the corporation had always emphasized a skeptical attitude toward the technology (although it had such tools in its arsenal), in March 2010 it declared its intention to significantly increase its VDI activity. It is noteworthy that for a successful fight against VMware in this area, Microsoft partnered with its longtime strategic partner, Citrix.
It is significant that VDI was the focus of the recent conferences of all three major players — VMware (which held a special event just for this category, presenting a number of projects implemented in India), Microsoft and Citrix. We also note the increased number of publications in the media about VDI experience.
From virtualization to the clouds
One of the most notable differences from the previous year in India is the beginning of an increasingly marked move from a purely virtualization agenda to cloud affairs (cloud services). Earlier, vendor representatives avoided the word "clouds" in their presentations and local news, explaining it very simply: "No need to scare and confuse the customer; let them first get accustomed to virtualization." All of this was evident in professional media publications: clouds were continuously present in articles and news, but mostly in stories about foreign affairs rather than local ones. Outwardly it even looked quite funny: one got the impression that the same vendor (e.g., VMware or Microsoft) abroad and here were two different companies…
This year, all the other virtualization providers also began to talk about the clouds (abroad they have talked about this for a long time). As usual, the most audible voice was Microsoft's, which presented its vision for the transition to the cloud. Such a migration should be carried out in major steps, from the traditional data center to a public cloud, and in this scheme one must pay attention to a very important stage: the transition from a virtualized data center to a private cloud (some experts equate these concepts), whose fundamentally important point is the use of the service model in the relationship between IT and business.
Along the way, a number of tips were offered on preparing virtualized systems for the introduction of the cloud:
Just three years ago, one of my friends arrived in an incredibly irritated mood as he went through a new phase of technology development. "But what, really, is this cloud computing?" he said at the time. "I have no idea what everyone is talking about. This is nonsense!"
Sorry, friend. Today even you must admit that cloud computing not only exists but has become almost the main topic in information technology.
Ultimately, it is about making the work of organizations more efficient and profitable.
Below are four recommendations for IT managers.
1. Decisions regarding cloud services should be guided by the needs of the business, not by technical reasons.
Business benefits should be the key factor in deciding where to apply the cloud. The cloud increases the pace of an organization's development, helping people work smarter and collaborate more effectively. But IT budgets are usually planned at least a year ahead, which does not allow issues to be resolved as quickly as the cloud model requires.
How did I handle this? I included the cost of cloud computing in the monthly operating expenses. This provides the flexibility to use whatever cloud tools a new business initiative requires.
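To make that budgeting point concrete, here is a minimal sketch contrasting a fixed yearly capital plan with cloud spend booked as a monthly operating expense. All figures and function names are invented for illustration.

```python
# Illustrative sketch: cloud spend as a monthly operating expense versus a
# yearly capital purchase. All numbers are made-up assumptions.

def yearly_capex_plan(upfront_hardware: float, yearly_maintenance: float) -> float:
    """Traditional model: pay up front for peak capacity, used or not."""
    return upfront_hardware + yearly_maintenance

def monthly_opex_plan(monthly_usage_costs: list) -> float:
    """Cloud model: pay each month only for what was actually consumed."""
    return sum(monthly_usage_costs)

# A hypothetical year where demand spikes only in Q4:
usage = [200.0] * 9 + [800.0, 900.0, 850.0]

capex_total = yearly_capex_plan(upfront_hardware=10_000.0, yearly_maintenance=1_200.0)
opex_total = monthly_opex_plan(usage)

print(f"capex plan: {capex_total:.2f}")  # capacity sized for the Q4 peak all year
print(f"opex plan:  {opex_total:.2f}")   # cost follows actual demand
```

With these invented numbers, the pay-as-you-go total (4350.00) comes in well under a capital plan sized year-round for the peak (11200.00); the point is not the figures but that the monthly model lets spending track initiatives as they appear.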
2. Use the best cloud success stories to win over opponents of the additional expense.
The U.S. Department of Defense demonstrated the flexibility of this technology when it applied the cloud in the aftermath of the 2010 earthquake in Haiti, where basic communications networks had been destroyed. The military used what it called the Rapid Access Computing Environment (RACE) as a platform for exchanging information among rescuers in the impoverished country. Rescuers turned to this tool for collaboration: they went onto social networks and found, worldwide, local translators with knowledge of Creole and consultants who helped them solve problems. In a traditional computing environment, this would have taken far longer. Of course, not every company deals with matters of life and death, as in Haiti. But all of them need fast, qualified answers to their questions, on which the success or failure of an undertaking often depends.
3. Good governance of services provides necessary foresight, but it should not stand in the way of the cloud.
A mobile employee who sets up a customer with a product, gives presentations, and performs other tasks on site cannot wait until the IT department finds and acquires the necessary tools. In the Web 2.0 era such an employee can find everything he needs and buy a cloud version himself. Of course, this raises management problems and some tough questions. For example, how can you provision a service if you have no idea how many people will use it or how they will access it? Or: who can guarantee the security and reliability of these tools? But none of this prevents contracts from being concluded. IT managers should work with vendors so that cloud resources are deployed effectively in the enterprise. Through management policies, users can be trained, say, to look after their own security. After all, an IT architecture in which all functions are carried out centrally from within the company is rooted in the past.
4. Be prepared to adjust your interpretation of “measurable success”.
Ultimately, since a cloud service is primarily a business tool and only secondarily a technical resource, you cannot fully express its value quantitatively, as we did in traditional IT environments. Instead of analyzing, say, usage statistics, you will need to develop new indicators: something like “new business opportunities created”.
It seems more or less accepted that SaaS is part of the cloud. Numerous references across the web defend this position, and many posts about the cloud treat software as a service as part of cloud services.
But is SaaS really cloud?
Recall that the differentiating factor of the cloud is that it gives access to hardware and software resources almost immediately, and lets you release them just as quickly. It is precisely this characteristic that distinguishes it from conventional hosting and ASP.
Technically, to achieve this differentiating factor, the cloud relies on multitenancy as the hardware and software architecture for allocating resources, and on scalability to follow a customer's increasing or decreasing demand. And all of this quickly.
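The scalability half of that claim can be sketched in a few lines: capacity is recomputed from current demand and released just as quickly when demand falls. The thresholds and function names below are invented for illustration.

```python
# Toy elasticity sketch: size an instance pool to current demand, both up
# and down. users_per_instance and min_instances are assumed values.

def instances_needed(active_users: int, users_per_instance: int = 100,
                     min_instances: int = 1) -> int:
    """Scale the pool so every user is served, releasing idle capacity."""
    needed = -(-active_users // users_per_instance)  # ceiling division
    return max(needed, min_instances)

# Demand rises and falls; capacity tracks it almost immediately.
for users in (40, 250, 1000, 120):
    print(users, "users ->", instances_needed(users), "instances")
```

Real autoscalers add hysteresis and warm-up delays so instances are not thrashed, but the core decision is this simple demand-to-capacity mapping.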
From the point of view of a customer using SaaS, is the resource immediately accessible? In other words, is SaaS cloud? Yes: most SaaS offerings let you create an account and use the online application right away, whether on a freemium model with 30 days of free access or on a paid one. They therefore meet the cloud definition: fast access to software and the ability to add or remove users immediately (scalability).
Now, from the provider's standpoint, how many SaaS vendors have mechanisms to “ensure” immediate access to the software? Do they have the technology to offer it? I am afraid most do not. They have multitenancy, but not technology that ensures continuity of service (fast scaling, keeping current users running, and so on) in the face of a flood of new users wanting to use the application.
So, is SaaS cloud computing or not? To be purists, a SaaS vendor should either have technology in its own systems to meet surges in demand, or at least rely on a PaaS or IaaS service that covers the problem. But the SaaS vendors that actually face this problem can be counted on one hand. For the vast majority, user growth (or decline) is more or less predictable, which gives them time to size their systems accordingly. In short: if you need quick access to a resource, cloud computing is what you need, and this also applies to SaaS.
Now go back to the beginning of this text, replace the word SaaS with PaaS, and the reasoning remains equally valid, although a PaaS without rapid scaling mechanisms is much harder to find.
For several days I had been thinking about writing a post on the differences between ASP and SaaS, and yesterday I decided to do it because of an email from a reader of this humble blog. The mail contained a series of questions, some using the acronym ASP and others the acronym SaaS, and I could not tell whether the acronyms were being used correctly or were really being confused.
If you search for the terms ASP and SaaS, or even for “differences between ASP and SaaS”, many entries appear that attempt to explain them, but most of the comparisons confuse ASP with hosting, and from there the comparison with SaaS does not hold. So let me first clarify what ASP is and what hosting is, based on these definitions:
ASP is a pay-per-use platform. Its single fee includes licensing, dedicated hosting, maintenance, and so on.
Under a hosting regime, you pay for the licenses and/or the project, and the servers that host it may be your own property or the provider's.
I think it is now clear: with ASP you pay per use, while with hosting you pay for licenses to use the products, and the machines may be yours or rented but sit on the provider's premises. Having clarified these concepts, I will try to clarify the differences between ASP and SaaS.
ASP stands for Application Service Provider; Wikipedia explains in its first paragraph that an ASP provides software services.
Among the factors that characterize an ASP are the widespread use of the Internet, the ability to accelerate the deployment and implementation of applications, and the portability of services and operations to third parties. The main barrier for an ASP lies in convincing customers that their information remains secure with a third party. The ASP, for its part, owns and operates the software and hardware environment and rents it to customers so they can use the applications.
Let us now turn to the wiki definition of SaaS:
“Software as a Service (SaaS) is a software distribution model in which the provider handles the maintenance, daily operation, and support of the software used by the client. In other words, the provider holds the information, the processing, and the inputs and outputs of the software's business logic. In simple words: the customer's system is hosted by the provider and is accessed via the Internet. It does not necessarily operate through web browsers; the business logic resides in the provider's data center.”
The truth is that, written in different words, there are very few differences:
Applications will not necessarily be delivered through web browsers, so in some cases it will be necessary to install software on the client and in others not.
So, “what are the differences between ASP and SaaS?” At first thought it does not seem there are any, but there are:
An ASP offers proprietary software from other ISVs. In the SaaS model, it is the ISVs themselves (the software developers) that offer hosting and software in a single package.
Many of the applications running on an ASP were not designed to provide access via the Internet. I have seen agreements between HP, SAP, and others with ASPs to offer over the Internet the very applications that were designed to run in-house.
Likewise, these applications were not designed to serve multiple clients from different companies; instead, the ASP runs a separate instance for each client. Most software-as-a-service (SaaS) applications, by contrast, are designed to deliver the application to multiple clients through a single instance (multitenancy).
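The contrast can be sketched in a few lines. The store and tenant names below are hypothetical, not any particular vendor's design: in the ASP model each client would get its own copy of this store, while in a multitenant SaaS a single shared instance partitions data by tenant.

```python
# Minimal multitenancy sketch: one shared instance serves many tenants,
# and isolation comes from scoping every record to a tenant id.

class MultiTenantStore:
    """A single shared store partitioned per tenant (illustrative)."""

    def __init__(self):
        self._data = {}  # tenant id -> that tenant's key/value records

    def put(self, tenant: str, key: str, value: str) -> None:
        self._data.setdefault(tenant, {})[key] = value

    def get(self, tenant: str, key: str):
        # A tenant can only ever see its own partition.
        return self._data.get(tenant, {}).get(key)

store = MultiTenantStore()           # one instance for everyone
store.put("acme", "plan", "pro")
store.put("globex", "plan", "free")
print(store.get("acme", "plan"))     # pro
print(store.get("globex", "plan"))   # free
print(store.get("acme", "secret"))   # None: tenants are isolated
```

In a real multitenant application the same idea is enforced at the database layer (a tenant column or schema per tenant) plus per-tenant configuration, which is where the customization requirement discussed next comes in.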
Related to the above, for a single instance to serve several clients at once, the application must support a high level of customization for each client.
Although, as we have seen, applications offered as a service (SaaS) are not necessarily consumed through a browser and therefore do not always require installation on the client, in practice most of them are consumed through the browser; I know of few SaaS offerings where that is not the case. Applications running on an ASP may or may not run through the browser and may therefore require additional installation on the client (a Windows or Unix emulator, remote desktop, terminal server, Citrix).
Also related to the above, an ASP can offer different applications of different types depending on the agreements it reaches with the companies that own the software. This is more complicated to achieve in the SaaS model, where an ISV normally offers a single piece of software, although there are exceptions such as Google Apps.
Finally, and perhaps most evident: with SaaS you enjoy direct, more personalized support, without intermediaries who can pass the buck on a software problem.
I hope this post has cleared things up rather than raised more questions, and that, should any controversy arise, it is at least enough to help us clarify the terms.
There are many definitions of public clouds, but in essence we are talking about an approach in which IT is not located on your own site, and users connect over a network to infrastructure that is not yours. We can distinguish three models of cloud services.
“Software as a service” (SaaS): This model is as old as networking technology. The application is hosted in a third-party data center; users connect to it and pay for its use like a public utility, in proportion to consumption. Customers own neither licenses nor equipment; they connect to the application through a public or private network. SaaS makes sense for applications such as payroll, email, or sales force automation. But that is far from the complete list of applications needed to run a business. If you like public clouds and can find the right solution, use SaaS. Just do not assume it is a complete strategy that satisfies all your IT needs; very few companies will find in SaaS a complete strategy for using IT.
“Platform as a Service” (PaaS): In the PaaS model, the servers, storage, and development environment for a customer's applications reside with a particular vendor; it may, for example, be a vendor-owned e-commerce platform. Customers write their applications and deploy them on the provider's network, paying for megabytes and CPU. The various PaaS models are best suited to specific niche applications, and they can also be used to develop and test new software. However, this model does not provide sustainable, scalable solutions for strategic and other critical corporate applications, and the long-term cost reductions promised by its supporters are very doubtful.
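“Paying for megabytes and CPU” reduces to simple metered billing. A toy calculation, with rates invented purely for illustration:

```python
# Toy PaaS metered-billing calculation. Both rates are made-up assumptions;
# real providers publish their own price lists.

RATE_PER_MB = 0.002        # currency units per megabyte stored (assumed)
RATE_PER_CPU_HOUR = 0.05   # currency units per CPU-hour consumed (assumed)

def monthly_bill(megabytes: float, cpu_hours: float) -> float:
    """Charge is proportional to measured consumption, nothing up front."""
    return round(megabytes * RATE_PER_MB + cpu_hours * RATE_PER_CPU_HOUR, 2)

print(monthly_bill(megabytes=5_000, cpu_hours=300))  # 10.0 + 15.0 = 25.0
```

The open question the text raises is exactly this meter: whether per-unit charges stay below the cost of running equivalent capacity in-house over the long term.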
“Infrastructure as a service” (IaaS): This is virtualized infrastructure without the physical hardware, sometimes even without an operating system. You connect to it and do everything you need yourself. In essence, IaaS is a strategy of hosting on someone else's site, where IT assets are virtualized and made available according to your needs. The model comes close to the practice of using other companies' facilities in the 1990s and early 2000s; what is new is that service providers now extract the economic advantages of virtualization technology. As with SaaS and PaaS, there are several very effective ways to use this model. It is, however, in no case a panacea: it neither guarantees cost savings nor simplifies the management of corporate IT.
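As a minimal illustration of the IaaS idea (request virtual machines, use them, give them back), here is a toy in-memory provider. The class and its methods are hypothetical, not any real provider's API.

```python
# Toy IaaS model: the customer provisions and releases virtual machines on
# demand and never owns physical hardware. Everything here is illustrative.

import itertools

class ToyIaaSProvider:
    """Hands out virtual machines on request and reclaims them on release."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.allocated = {}            # vm id -> requested size
        self._ids = itertools.count(1)

    def provision(self, size: str) -> int:
        if len(self.allocated) >= self.capacity:
            raise RuntimeError("provider out of capacity")
        vm_id = next(self._ids)
        self.allocated[vm_id] = size
        return vm_id

    def release(self, vm_id: int) -> None:
        self.allocated.pop(vm_id, None)

cloud = ToyIaaSProvider(capacity=100)
web = cloud.provision("small")   # bring capacity up when needed...
db = cloud.provision("large")
cloud.release(web)               # ...and hand it back just as quickly
print(len(cloud.allocated))      # 1
```

What the sketch leaves out is exactly what the text warns about: someone still has to patch, size, and secure those machines, so IaaS moves the work rather than eliminating it.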
Security: Sharing information in the public cloud is like sending your children to a kindergarten that has no teachers and whose location you do not even know. No parent would do this. Likewise, enterprises should not place their environments in the public cloud for a 10 percent savings in operating costs. The risks are too great and the economic effect too small. This does not mean that cloud providers take no measures to protect your data. But as a customer you must realize that whenever you no longer control the data, security is at serious risk.
Economic aspect: Using public clouds, a company simply shifts money from one pocket to another. Whatever is promised, service providers face the same problems of scalability, service quality, and efficiency that arise within the enterprise. In addition, service providers are known for inaccurate billing, so much so that there are companies whose sole business is finding customers whom service providers have overcharged.
My advice: enjoy the benefits of public clouds, but build private clouds on the basis of your own infrastructure, in an environment over which you have full control.