When we think about hosting, especially for heavy-duty systems at demanding scale, there are only a handful of options. One alternative to conventional clusters or plain multi-server setups is the Elastic Cloud: in essence, virtual cloud computing running on a distributed cluster, which provides processor, storage and I/O resources while letting your system scale almost without limit. That is what "Elastic" means in the name – we can flexibly scale the system to our needs at any time, without being tied to specific hardware or other resources.
Cloud hosting is now a major player on the hosting market. There are already several reviews and introductions to the topic in India, so I will just link to the Advantages Of Cloud Computing. It provides full virtualization for running custom tasks once you deploy into the cloud. This approach is very flexible and does not restrict users to a fixed set of applications and libraries, letting them focus on the work instead of fighting restrictions.
But is Amazon the only player here? No – it turns out a team has decided to go beyond mere virtualization and build its own fully open Elastic Cloud platform.
That project is EUCALYPTUS – Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems – an open implementation of a cloud platform which, incidentally, is compatible with the Amazon EC2 image format.
Technologically it runs on top of a cluster built with Rocks Cluster (which is required for installation, but needs no additional tools) and requires the Xen 3.0.2 virtualization platform for deployment. The system is currently distributed as two images deployed via Rocks onto all cluster nodes (I wonder whether it is possible to deploy the system on a single physical server, for example on two VirtualBox virtual machines, for testing).
The cloud is managed through a web interface where we can grant permissions to other users, who can then submit their virtual machines for execution in the cloud. As mentioned, the images are compatible with Amazon's and are created with the same tools – which seems the most sensible choice.
By reusing Amazon's image format, EUCALYPTUS achieves two things: it saves its developers the money and time of creating and supporting their own format, and it saves you from building different images of the same system, since all the image-creation infrastructure Amazon built for itself is immediately available. Testing such a solution is also now much easier, although, of course, not all functionality is supported at this stage.
For now the project's source code is unavailable, although the project is declared open source under a BSD license. In particular, the documentation says that deploying the system from source is currently a very complex and "wrong" thing to do.
Despite the project's youth, the system is already functional enough to be of research and even commercial interest: it is realistic to deploy it on a small cluster and offer cloud hosting for test projects – for example, staging systems that will later be deployed to production services.
Cloud computing is the buzzword of the moment, but what does it mean? Is it really ready for prime time, or should you stay with a VPS?
First, some definitions:
Head in the Clouds?
In a sense, cloud computing is grid computing: applications borrow pooled computing resources for processing instead of relying on local servers or user devices. It is an updated supercomputer model for everyday IT, promising trillions of calculations per second for financial services, personal data and huge, immersive gaming networks.
Cloud computing brings together a group of servers built (mostly) from low-cost, consumer-level PC components, distributing the data crunching across the network. Virtualization techniques maximize the available computing power, while PC, networking and software standards do the rest of the work. Cloud computing has become the hot thing, and its potential for accessing and sharing computing power as a virtual resource, in a secure and scalable way, makes it attractive for large corporate data centers. It is all still pretty vague and uncertain, though, and early adopters need courage as well as expanding budgets.
VPS Hosting, Tried and Trusted
VPS means virtual private server: a server created on a dedicated physical machine in a data center. Each VPS is essentially a stand-alone server sharing a single physical server with others, something like a dedicated server with its own memory, disk space and IP address. You can reboot a VPS manually and operate it as a separate server running your own applications. It is similar to a dedicated server, but closer in cost to a shared hosting plan, and a VPS can handle the requests of a secondary business site easily.
Each VPS account on the server has its own partition, with its own root access and bandwidth. This improves performance and lets users run their own applications. A VPS works like a dedicated server in many respects, except when it comes to paying: it is much cheaper by comparison.
Cloud computing is definitely the hot thing, and there are a couple of cloud hosting providers in India. Nevertheless, many experts believe the term is a catch-all buzzword used to describe a confusing array of different technologies: utility computing, grid computing, the "software as a service" model, Internet-based applications, remote processing, autonomic computing and peer-to-peer computing. When most people use the term they have one of these ideas in mind, but the listener may be thinking of something else entirely.
User forums are full of people running high-traffic web sites in the 10,000-hits-per-day range, but even without high traffic there can be enough demand to justify a solid, capable VPS or dedicated server. Some claims involve huge cost savings from the cloud, but they show that there are usually standard monthly payments on top of the utility billing. At present the regular forms of hosting are clearly cheaper; as scale enters the formula, the lines will intersect and cloud computing will become the less expensive alternative.
The problem, of course, is a lack of transparency. Some cloud providers have built online price-estimate tools – cost calculators that take your estimated bandwidth and storage needs and crank out a monthly bill. That is about all you can do to get a cost comparison at the moment.
Both Cloud Hosting and VPS Hosting are best…
The only factor that matters is the Requirement….
Many would agree that the era of Web 2.0 is coming to an end. Today I will tell you about some next-generation start-ups and predict the death of Web 1.0. The last few years have passed under the slogan "User generated content"; the next few will pass under two new ones:
Let’s start with User generated applications.
One of the main reasons for PHP's popularity is the number of free and easy-to-install products like WordPress, MediaWiki, phpBB, Drupal and Joomla – they can be found on most sites on the Internet. 90% of such deployments differ only in configuration (design / plug-ins), so launching a project essentially consists of nothing but deployment.
The industry has responded with SaaS (Software as a Service). Sites like WordPress.com and Blogger.com let you create your own blog and configure it like a standalone installation. An example is Graceless Failures, the blog Twitter runs about the Scala language: its own domain, and no visible trace of a third-party platform in the design. If users need a familiar forum interface – no problem: a phpBB-style service will give your project a simple forum in no time.
The time when you needed a programmer to create an average site is passing. The most you might need is a designer. Intelligent services will take care of the rest.
Let's start with Iceberg. This application launched in early summer and has already made a splash on the western Internet. Project management and customer relationship management are the bedrock of any company, so Web 2.0 products in this space were met with a bang. Their two major problems are limited customization and having to place your data on the Internet; the Iceberg project was created to solve both.
Iceberg follows the principles of Model Driven Architecture: an ordinary user can create his own application through the web interface by creating components and establishing relationships between them (much like the has_many and belongs_to associations in Ruby on Rails). The excellent Learning Iceberg site shows a manager how to stop kicking himself for not being a programmer and automate his business processes on his own. Unlike the 37signals products, Iceberg can also work inside a local network.
The next MDA-powered application comes from a start-up backed by the famous Y Combinator. It lets you easily create any kind of form – simple bug trackers, order forms, requests for work. Work that used to require a programmer for a week now takes minutes: assemble your own form from a pile of components and get a nice report with charts and export to Excel.
Speaking of Excel: it is one of the products that let Microsoft capture the business sector. The demo video on the start-up's site shows how you can move your spreadsheets into a web interface – and do it faster than building an admin panel in Django or Symfony.
So there they are, the three flagships of the user-generated-apps movement. The MDA model has long been known in the Enterprise world and is now experiencing a rebirth with the popularization of DSLs (Domain Specific Languages). I think everyone will agree that most of the work in small and medium companies (corporate web sites, automating business processes) consists mainly of assembling and customizing an existing product, or routine coding of standard functionality. Over time it becomes easier to configure ready-made solutions, which are cheaper than keeping a programmer.
Few people write their own ERP or CMS, but the number of resellers of everything (note: "resellers", not "programmers") keeps growing. In the USA nobody assembles a computer himself, and with the development of virtualization we have returned to the "mainframe": nobody runs from computer to computer installing office suites and configuring networks. You can take a VPS with 256 MB of RAM and set up the desired version of PHP, but if all you need is to launch a blog or website, there are plenty of services at your disposal that let you concentrate on your problem instead of on programming and hosting.
The computer industry is developing in huge strides, and its most important requirement is professionalism. Everyone likes to point out that any school kid or student interested in computers calls himself a freelancer; so the near future will bring an even greater impact from services that act as "freelancer substitutes".
Now a Bit of Fun.
The wave of ordinary blogs, forums and CMS-based sites has already subsided, and if we talk about the latest news in the worlds of Python, Ruby, PHP and others, the main topic is frameworks. A successful business on the Internet now requires only a unique idea and a quick way to implement it.
Web services, Facebook, OpenSocial, iPhone, Android, Google Maps, YouTube, messaging, open standards, cloud computing – this is where the "war" on the Internet will unfold in the near future. Designers have the least to worry about: their services will be needed for a long time, since design is not automated. But web masters, administrators and coders can start thinking. Professionals are always needed, but if you are a coder whose work can be automated, and you do not know what algorithmic complexity is, retraining will be difficult.
There is no point in describing all the buzzwords above; they develop quickly, and they change our lives just as quickly. But if you have not heard of them, pay attention – one of them may suddenly become the "killer" of your craft.
In general, my advice is to watch the industry, and not to fixate on particular things but to broaden your horizons. It is not clear what the web, or a developer, will look like in 5-10 years. Many blame our education system for not teaching HTML / PHP / …, but education teaches you to think and to adapt to changes in life. PHP may not be around in 10 years, but the regular web site certainly will.
HTML5 is the upcoming update of the hypertext markup language, the main method of creating content for publication on the Web. Development of HTML stopped in 1999 with version HTML 4.01 and has not changed since. HTML5 is being developed to meet today's requirements.
HTML5 aims to increase interoperability and to let HTML meet the growing demands of diverse and mixed web content, while also addressing the deficiencies of the fourth version. In this article we will look at five new and interesting things in HTML5.
A Little History of HTML5
Abstract reflections on HTML5 began in late 2003. The World Wide Web Consortium (W3C), the organization that oversees standard protocols and recommendations on the web, expressed interest in the HTML5 draft developed by the Web Hypertext Application Technology Working Group (WHATWG), a group formed in 2004 by representatives of Apple, the Mozilla Foundation and Opera Software. As a result, in 2007 the W3C HTML Working Group was formed to develop the HTML5 specification.
HTML5 is expected to reach W3C Candidate Recommendation status in 2012, although most browsers already have partial support for the HTML5 spec.
New exciting features of interest
1. New HTML elements that let us better describe content
The primary task of HTML is to describe the structure of a web page. For example, wrapping text in <p> </p> tells the browser that the text between these tags is a paragraph.
By adding a set of new HTML elements, HTML5 aims to give developers a better and more accurate way to describe their data.
For example, consider how we would describe the structure of a typical web page under the current HTML specification.
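The original markup sample did not survive in this copy of the article, so the sketch below is a hypothetical reconstruction of the kind of div-based layout being described (the ids and content are invented):

```html
<!-- Typical HTML4-era layout: every block is just a <div> -->
<div id="header">My Site</div>
<div id="nav">
  <a href="/">Home</a> <a href="/about">About</a>
</div>
<div id="content">
  <div class="post">The main article text goes here...</div>
</div>
<div id="sidebar">Links, ads, archives...</div>
<div id="footer">Copyright 2010</div>
```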
The problem with this kind of layout is that, to the browser, everything is just a <div> element. The browser handles all the <div>s the same way, because it sees no difference between the blocks: the ids naming them content, sidebar and footer differ from site to site.
In HTML5 you can describe the layout of the same page in another way:
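The HTML5 version of the sample was also lost; a hypothetical equivalent using the new semantic elements might look like this:

```html
<!-- The same layout with HTML5 semantic elements -->
<header>My Site</header>
<nav>
  <a href="/">Home</a> <a href="/about">About</a>
</nav>
<article>The main article text goes here...</article>
<aside>Links, ads, archives...</aside>
<footer>Copyright 2010</footer>
```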
In this layout the browser knows the purpose of each part: it knows that the basic content of the page is inside <article>, that the site navigation is inside <nav>, and so on.
The practical benefit does not end with more attractive, more semantic markup. These innovations enhance the interoperability of our markup. For example, an external system such as a search-engine bot can more accurately determine which content on the page is important. Such systems can skip the <nav> and <footer> elements, since those are unlikely to contain the important content of the page. Consequently, well-formed HTML5 lets search engines understand the content better.
A crafty developer could write an application that collects just the <article> sections from a group of web sites for storage in a database, or one that generates a list of all videos on a page by finding every <video> element.
Screen-reading software for the visually impaired can let users jump straight to the main content section: go directly to the <article> element to read the main content of the page, or directly to <nav> to move to another page.
2. Improved web forms
It is difficult to do without forms on today's web sites: you meet them when submitting a comment on a blog, registering a user account, or sending mail in Gmail. HTML5 includes a specification called Web Forms 2.0 that rethinks how web forms are used, giving web developers many new options for efficient and easy control of input fields and form submission.
In HTML4, the markup for such a form might look as follows:
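The sample markup is missing from this copy of the text, so here is a hypothetical HTML4-style registration form of the kind being discussed (the field names and handler are assumptions):

```html
<!-- HTML4: plain text inputs; all validation must be scripted -->
<form action="/register" method="post" onsubmit="return validate(this);">
  Name: <input type="text" name="name">
  Email: <input type="text" name="email">
  <input type="submit" value="Register">
</form>
```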
Currently you have to use scripts to validate user input: the developer must write his own validation code (or reuse an existing one) to make sure required fields were not left empty by accident, or that the email address is correct (this is usually checked with regular expressions).
To let the author handle this without validation scripts, HTML5 (with the current Web Forms 2.0 specification) provides additional markup such as the required attribute and the email input type, which automatically check that fields are filled in and that the email address is well-formed.
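A hypothetical HTML5 version of the same form, assuming the required attribute and the email input type described above:

```html
<!-- HTML5: the browser itself refuses to submit an empty name
     or a malformed email address; no script needed -->
<form action="/register" method="post">
  Name: <input type="text" name="name" required>
  Email: <input type="email" name="email" required>
  <input type="submit" value="Register">
</form>
```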
3. APIs for easier development of web applications
HTML5 will provide APIs for new and existing elements aimed at improving the development of web applications, addressing the shortcomings of HTML4 in terms of what developers can build with markup alone.
One such API is specifically designed for working with audio and video through the <audio> and <video> elements. It provides the ability to play audio and video natively, eliminating the need for third-party plugins such as Flash to display media (at least for supported media formats).
4. The <canvas> element lets you change images on the fly
Most people absorb information more quickly and effectively through a visual display. Given the choice between a table of numbers and a pie chart, the pie chart gives the user a much better feel for the weights and ratios in the data.
The drawback of images is that they are static. If you create a pie chart the usual way (in an image editor like Photoshop, or an application like Excel), you cannot change the data without manually re-editing the chart.
With the <canvas> element you can take variable data (from a database, say) and render it as a pie chart, or any other 2D graphic, via scripts.
The canvas API also lets users interact with <canvas> elements: for example, you can write a script that responds when a user clicks one of the slices of the pie chart.
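As a rough illustration of the idea (not from the original article), a data-driven pie chart could be drawn like this; the ids, colors and values are invented for the sketch:

```html
<!-- A minimal sketch: draw a two-slice pie chart from variable data -->
<canvas id="pie" width="200" height="200"></canvas>
<script>
  var canvas = document.getElementById('pie');
  var ctx = canvas.getContext('2d');
  var data = [30, 70];                  // e.g. values fetched from a database
  var colors = ['#c33', '#36c'];
  var total = data[0] + data[1];
  var start = 0;
  for (var i = 0; i < data.length; i++) {
    var angle = (data[i] / total) * 2 * Math.PI;
    ctx.beginPath();
    ctx.moveTo(100, 100);               // center of the chart
    ctx.arc(100, 100, 90, start, start + angle);
    ctx.closePath();
    ctx.fillStyle = colors[i];
    ctx.fill();
    start += angle;
  }
</script>
```

Change the numbers in `data` and the chart redraws differently on the next page load – no image editor involved.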
5. Users can edit and interact with sections of web-pages
The User Interaction section of HTML5 describes new ways to create interactive web pages. The contenteditable attribute (a boolean that can be true or false) marks which parts of a web page users may change.
This mechanism can be useful for wiki-style sites where content is generated by users. Another use of contenteditable is creating web page templates.
This gives editors a safe way to enter content without touching the critical areas of the page, which can be left to more experienced users.
At the document level, you can mark the whole page as editable through the designMode property, which takes two values: "on" or "off".
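A small sketch (invented for illustration) showing both mechanisms side by side:

```html
<!-- Only the marked region can be edited by the user -->
<h1>Company News</h1>                      <!-- fixed, not editable -->
<div contenteditable="true">
  Click here and start typing your story...
</div>

<script>
  // Or make the entire document editable at once:
  document.designMode = 'on';   // 'off' restores normal behaviour
</script>
```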
In conclusion
HTML5 redefines how developers mark up web content. The new version offers better ways to describe the content displayed on a page, enables more complex content types, improves support for media and web applications, and increases the interoperability of HTML documents.
Installing the Google Wave Federation Prototype Server from source. The Wave Federation Prototype Server comes as a Java application that implements XEP-0114, the Jabber Component Protocol. The example below explains how to install it as a plugin for the Openfire XMPP server, but it should also work with any XEP-0114-compliant server.
To run the prototype server you must first install Openfire. This guide describes the Openfire setup steps for Debian (Ubuntu) systems; if you have problems or questions about the installation, refer to the Openfire community on their website.
Make sure you have Java, Openfire and the Wave Federation Prototype Server on your machine. Although the server should run on any system with Java 6, this manual describes the steps for Debian (Ubuntu) systems.
Mac OS X
On Mac OS X, install Java 6 from http://developer.apple.com/java/download/.
After installing Java, you need to create environment variables:
$ export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
$ export PATH=$JAVA_HOME/bin:$PATH
Now go to the Openfire site and download the Mac OS X version of Openfire.
Debian / Ubuntu
Installing Java 6:
$ sudo apt-get install sun-java6-jre sun-java6-fonts
Now download and install the Openfire server:
$ wget 'http://www.igniterealtime.org/downloadServlet?filename=openfire/openfire_3.6.4_all.deb'
$ sudo dpkg -i openfire_3.6.4_all.deb
$ sudo /etc/init.d/openfire restart
Configuring Openfire (all platforms)
After installing the Openfire server, open http://localhost:9090 in your browser (replace the host name with your own if you did not install on your local machine). The whole setup happens through a wizard; we will keep the default values for simplicity.
Configuring the Openfire Wave plugin
Restart the server when finished. On Debian / Ubuntu that is:
$ sudo /etc/init.d/openfire restart
After the restart, log in to Openfire as 'admin' with the password you specified.
Then go to Server -> Server Settings -> External Components.
Enable the external component port 5275 and choose a shared secret for this component, then click Save. Now add 'wave' as a trusted component: enter the subdomain 'wave' and type the secret word again. The port number and secret are used by the Wave add-on to connect.
Now go to Server -> Server Settings -> Security Settings. Under "Server Connection Security" choose "Custom" and enable "Server Dialback". Also check the box next to "Accept self-signed certificates".
The following changes are not mandatory, but if you care about security they are good practice.
Go to Server -> Server Settings -> Registration and Login. Turn off "Inband Account Registration", "Change Password" and "Anonymous Login".
Enable compression in "Compression Settings".
Turn off the proxy in "File Transfer Settings".
Install the Wave
Now download the Federation Prototype Server and extract the contents.
To start the component you will need some of the parameters you used to configure the Openfire server: the port number, the secret word, the server name and, finally, the component name, which in our case is 'wave'.
The Wave server also requires a number of certificates used for signing. For more information, see the project wiki.
Edit the run-server.sh script with the correct settings. Explanation of the arguments:
client_frontend_hostname – the IP the client will connect to
client_frontend_port – the port the client will connect to
xmpp_server_hostname – the XMPP server host
xmpp_component_name – the Wave component name on the XMPP server; in our case, 'wave'
xmpp_server_ip – the address of the XMPP server hosting our Wave component
xmpp_server_port – the port of that same XMPP server
xmpp_server_secret – the component's secret word
xmpp_server_ping – whether to ping the server after connecting; if empty, no ping is sent
certificate_domain – the domain we used when creating the certificate
certificate_files – the certificate file (e.g. username.cert)
certificate_private_key – the certificate's private key in PKCS#8 PEM format (e.g. username.key)
waveserver_disable_verification – set to true to skip certificate verification, false to verify
Once you have edited the server script, compile and run it:
$ ant dist
Edit the run-client.sh script (see options in the script) and then run it:
$ ./run-client.sh username
P.S. I am still learning, so please help me improve my knowledge if anything has gone wrong somewhere in this post.
Ordinary clouds hang in the atmosphere: condensed water vapor, visible from the ground. So what is cloud computing, then? A cluster in the heavens?
In fact, cloud computing is a data-processing technology in which software is made available to the user as an Internet service. The user has access to his own data but does not control the operating system or the underlying software he is working with (and he need not take care of the infrastructure either). The "cloud" itself is the Internet, which simply hides the many technical details. That is the short version; now let's look at everything a little deeper and explain Cloud Computing in layman's terms.
What Is Cloud Computing Built On?
At the heart of Cloud Computing there are a few key ideas.
The first is access via the Internet. There are closed systems, of course, but as a rule everything is reachable over the network (from the "outside", the cloud presents itself as a normal server). The second important point is virtualization: with virtualization, users get exactly as many resources as they need (and, of course, as many as they can afford).
What is required on the server side, and how those resources are allocated, is hidden behind the walls of virtual machines, which may run on hundreds or even thousands of servers, often in different data centers.
The third point: Cloud Computing is a service. In the 1960s, computer time had to be paid for, and you had to wait in a queue for free time. Cloud Computing uses a similar approach, except there is no queue to stand in: all services are billed separately. To the user, a cloud is a set of services that are consumed and paid for, sometimes without the slightest idea of what runs inside.
Take a simple example: hosting for 5 GB of data with access via an HTTP REST API. Nobody buying such a service thinks about where the data is physically stored or whether RAID drives are used. The main thing is that the desired amount of data is always accessible through a convenient interface.
The fourth point, oddly enough, is simplicity and standardization. Even though the cloud stands at the forefront of computer technology, this is one of its most important properties. There are no new languages, no complex configuration files, no hours-long terminal sessions to set up all the daemons. Everything offered inside the cloud is accessible through the simplest API calls and protocols. The REST approach, in which all operations on data are performed through HTTP requests, has won immense popularity, though many other solutions can be used, and ready-made libraries are available for various programming languages.
Now the simple question: why is all this necessary? The answer: for relatively little money you get access to a reliable infrastructure with the necessary capacity. Uptime of commercial systems tends to be guaranteed at the level of three to five nines (99.9% and above), which means no more than a few minutes to an hour of downtime per year. You do not need to be a genius to use such a system: it speaks simple, well-defined protocols and APIs. And, most importantly, there is virtually unlimited scalability. With regular hosting you cannot jump over your head: a spike in load risks crashing your service for a few hours. In the cloud, additional resources are available on first request. If you suddenly need a couple of extra processors and a few gigabytes of memory to crack a password (your girlfriend reads the magazine too, and has locked your photos behind one), that will not be a problem either. Best of all, you do not have to buy those resources up front and pay wild bills: capacity can grow at any time. That is why Cloud Computing is a real godsend for start-ups, whose owners cannot predict in advance whether their project will take off or not.
SaaS, PaaS… Don't Swear!
Options for providing computing power vary widely. Everything related to Cloud Computing is commonly referred to by the suffix aaS (two a's, not what you thought!), which stands simply for "as a Service".
SaaS (Software-as-a-Service), or applications as services, is the option in which some specific software, such as an enterprise system, is offered to subscribers as a service. Say a company has neither the ability nor the desire to host its own internal Exchange server for email, calendars and so on; it can buy the same thing remotely, with all the relevant concerns taken care of. Do you use Google Docs? That is SaaS, and it comes for free.
PaaS (Platform-as-a-Service), unlike SaaS, is aimed not at the end user but at developers. The cloud runs a set of programs, basic services and libraries on top of which you are encouraged to develop your own applications. The most striking example is Google App Engine, a platform for creating applications. PaaS also covers individual parts of complex systems, such as database or communication services.
HaaS (Hardware-as-a-Service) is one of the earliest terms and means providing basic "iron" functions and resources as services. But instead of directly leasing hardware, it uses virtualization: when specific hardware is mentioned, it refers to an abstract entity similar to real iron (storage space, processing time equivalent to a real CPU, bandwidth).
IaaS (Infrastructure-as-a-Service) is generally considered to have replaced HaaS, lifting it to a new level: virtualization, load balancers and similar systems that underlie the construction of other systems.
CaaS (Communication-as-a-Service) means communication services provided as a service, typically IP telephony, email and instant communication (chats, IM).
What color is Cloud Computing?
The cloud kingdom is not so simple, and the market now offers many solutions that call themselves "Cloud". Let's look at the types of architecture; it makes it easier to understand what is what.
Approaches to cloud systems vary in the degree of low-level control available to the client. If we take as the lowest (zero) level a personal server colocated at an ISP, on whose front panel you can even glue rhinestones, then everything above it gets progressively more abstract.
VDS / VPS – this is not just hosting, but also Cloud. Of course, the typical VDS (Virtual Dedicated server) has the most attributes of the cloud: they give you a virtualized environment, where you can deploy your applications or even operating system, resources are also limited to your wallet. But the similarities end and – the server’s resources are limited on which everything revolves. You also pay a monthly basis, and if I had every Friday night you need to quickly expand the server to receive crowds of visitors, no one cares.
The first level of real Cloud provides a virtualized environment based on standard "units" of resources, which may correspond to some real server (purely for ease of comparison and accounting). In effect you are given a virtual machine running at the provider, but inside it you have full freedom: first install any OS (a supported one, of course), then configure the required software. The limitations of such a machine, as already stated, approximate real hardware, but, in contrast to a VDS, they can be changed flexibly and almost instantly, up or down. One account is usually allowed to raise several such virtual servers and build networks between them. You simply do not see that there is a lower level beneath the virtualization layer (most often Xen or VMware), and above it you can do whatever you want. The way resources expand also varies – in the simplest option the number of virtual servers is unrestricted, but their parameters are selected from several fixed plans. An example is Amazon EC2, where you choose from five instance types. This makes things easier for the provider, but not for you – if your application cannot scale by adding new servers on the fly, fixed plans become a ceiling.
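The fixed-plan model described above can be sketched in a few lines: you pick the smallest plan that covers your needs, and if none fits, you must scale out with more instances. The plan names and figures below are purely illustrative, not real EC2 specifications.

```python
# Illustrative plan table: names and numbers are made up, not real EC2 types.
PLANS = {
    "small":  {"cpu": 1, "ram_gb": 2},
    "medium": {"cpu": 2, "ram_gb": 4},
    "large":  {"cpu": 4, "ram_gb": 8},
}

def pick_plan(cpu_needed, ram_needed_gb):
    """Return the smallest plan covering the requested resources,
    or None if even the largest is too small (then add more servers)."""
    for name, spec in sorted(PLANS.items(), key=lambda kv: kv[1]["cpu"]):
        if spec["cpu"] >= cpu_needed and spec["ram_gb"] >= ram_needed_gb:
            return name
    return None

print(pick_plan(2, 3))    # medium
print(pick_plan(16, 64))  # None -> scale out with more instances instead
```

The point of the sketch is the ceiling: once your needs exceed the largest plan, vertical scaling stops and the application itself must know how to spread across multiple servers.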
A more "cloudy" option implies a kind of slider (like a volume control) with which you can change the amount of resources allocated to your server. You need another gigabyte or two of RAM – move the slider, and a few seconds later the server has more resources.
There are companies on the market (for example, esds.co.in) providing cloud servers along with other services, such as file storage or conventional, but highly reliable, data-center services.
Developers of so-called web-OS also often position themselves as clouds, although they only provide applications (the SaaS model) – not a single text editor, say, but a whole family of applications united behind a common interface resembling a desktop operating system. This virtual desktop is available anytime, anywhere, from a web browser! Usually a web-OS is built on AJAX technologies or Flash.
Programs riding on a cloud
Clouds of the third type have the maximum flexibility and extensibility, but here you are given not just a virtual machine or some resources, but an entire library and API. You get the opportunity to run your own applications, though often with severe limits on the choice of language and additional libraries. In return, such an application can realize the cherished dream of all clouds: getting resources on demand, flexibly. You do not see the virtual machine at all – in fact, you are unaware of its existence; everything the program works with comes through calls to the API and libraries the service provides. It would seem that little can be done under such conditions? Quite the opposite! This degree of abstraction is now a fashionable trend in IT.
There is a trade-off here: the simpler the API the program works against, the easier and more flexible the scaling. That is why, in cloud systems, it is extremely hard to find the resources familiar to web developers, at least in their standard form. Take databases. A traditional relational SQL database is extremely ill-suited to scalable systems (with rare exceptions, like Oracle or DB2). Instead, providers use designs of their own, each usually interesting in its technical details, as well as third-party open solutions. Among the most popular are key-value data stores and systems based on Google BigTable, along with its public counterparts. This is very similar to a regular cache: your application writes data to the store, associating it with some key (a number or a simple string), then extracts or removes it by that key. More advanced systems implement whole data structures – lists, queues – and even allow queries close to SQL, with sorting and filtering. The file system often gets the same treatment, replaced by a storage lookalike supplemented by a map/reduce system for processing large volumes of data.
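To make the key-value model concrete, here is a minimal in-memory sketch. It only imitates the put/get/delete shape of stores like memcached or the BigTable clones mentioned above; real systems add persistence, replication and distribution, but the programming model looks much like this.

```python
# Minimal sketch of the key-value storage model: everything is accessed
# by key, with no SQL tables or joins. In-memory only, for illustration.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
# Instead of a relational row, the application serializes a whole record
# under a single string key:
store.put("user:42", {"name": "Alice", "visits": 7})
profile = store.get("user:42")
store.delete("user:42")
```

Note the convention of composite string keys like `"user:42"` – with no joins available, the key itself carries the structure that a relational schema would normally express.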
These features require a rethink of the architecture of existing applications when you want to deploy them in the clouds. Moving away from conventional databases is not easy, particularly if you previously wrote in PHP against MySQL.
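The map/reduce model mentioned above is worth a small illustration, since it is the piece that replaces familiar SQL aggregation. A word count is the classic example: the map step emits (key, value) pairs, the framework groups them by key, and the reduce step folds each group into one result. This is a toy single-machine sketch; real frameworks run the same two functions across many servers.

```python
from collections import defaultdict

# Toy word-count in the map/reduce style: map emits (key, value) pairs,
# the framework groups them by key, reduce folds each group into a result.
def map_phase(line):
    return [(word, 1) for word in line.split()]

def reduce_phase(word, counts):
    return word, sum(counts)

def mapreduce(lines):
    groups = defaultdict(list)       # the "shuffle": group values by key
    for line in lines:
        for key, value in map_phase(line):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

print(mapreduce(["the cloud", "the grid"]))
# {'the': 2, 'cloud': 1, 'grid': 1}
```

Because map and reduce are independent per line and per key, the framework can run them on thousands of machines without the programmer changing a line – which is exactly why cloud platforms favor this model over SQL.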
Amazon and Google
Advances in technology have made it possible to hide everything behind a layer of virtualization and intermediate libraries, so that the programmer of real-world applications does not have to think about it. After all, every major modern language has long had its own virtual machine. If the cloud infrastructure is well designed and the language chosen correctly, it is enough to ensure that the majority of programs (note: not all) can work and scale almost linearly. The developer and the user will know nothing about the ten virtual servers running underneath, each of which may sit on a pair of real ones. The appearance of the first serious, accessible cloud hosting from Amazon gave rise, in fact, to an entire industry, opening the most advanced technology to mere mortals.
The best-known system of this kind is Google AppEngine, which provides a kind of "sandbox" limited to a specific API and system services. The sandbox supports only a few languages – currently Python and Java – but the resource limits are liberal enough that you will not think about them for a long time (it is claimed that the service is free for sites with up to 5 million hits a month). The service was long available only as a beta, and registration has only recently been opened to everyone. Prices for commercial use, or for those whom the limits pinch, are reasonable and comparable with competitors (as usual, payment per hour or per some abstract unit of resources).
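What does "your script runs in a sandbox and talks to the world through an API" look like in practice? AppEngine's Python applications are ultimately web handlers in the WSGI style, so here is a plain-Python WSGI sketch of the same shape – no App Engine SDK required, and the handler can be exercised locally by calling it exactly the way a WSGI container would.

```python
# A minimal WSGI application: the shape of a sandboxed cloud web handler.
# Plain Python, no App Engine SDK needed.
def application(environ, start_response):
    body = b"Hello from the cloud sandbox!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise it locally, the way a WSGI container (or the cloud runtime) would:
captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

result = b"".join(application({"REQUEST_METHOD": "GET"}, start_response))
print(captured["status"], result)
```

The sandbox idea is visible even in this toy: the handler never opens sockets or files itself – it only receives a request environment and returns a response, and everything else belongs to the platform.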
Ironically, a similar service has been released by the other "evil empire" – Microsoft Azure. It is based on a special version of Windows Server 2008, and the other services available to the developer build on already proven technologies: the .NET runtime, SQL Services, Live, SharePoint, Dynamics CRM. Applications access all services through an API abstracted from the details, via HTTP, REST and SOAP. Judging by the inclusion of typical business platforms in the cloud, the system will mainly target enterprise applications and services. While it is in testing, you can get completely free access to all the materials.
How should I build a cloud?
Do not think that cloud toys are available only to those with a lot of money. That is not far from the truth – almost no company offers anything for free, and you have to pay. But if you cannot wait to try something hands-on, let me tell you about a gift: home-grown analogues of Google AppEngine and Amazon EC2.
As you remember, AppEngine is an environment for executing programs (in Python), where your script runs inside the cloud in a special sandbox and interacts with the world through an API. Resources are allocated dynamically and very flexibly. It is ideal for various research projects and for quickly building web applications that need not fear overload or the digg-effect. An open-source implementation called AppScale can run the same programs as the original from Google.
If you have a powerful computer, you can deploy such a system across multiple virtual machines, simulating a cluster, or just borrow a few machines from friends and build a cluster in a single room. AppScale is distributed as an image of an already configured Linux system, which runs in a virtual machine under Xen.
Keep in mind that you need at least four servers, which means the computer must be powerful, and preferably 64-bit. And add more memory, because four Xen instances will eat resources with an excessive appetite. Good luck building your cloud!