CDN is the abbreviation of Content Delivery Network or Content Distribution Network. CDNs have long been a central topic for researchers working on content delivery and distribution. A CDN is a system of computers interconnected over the Internet that holds copies of data at various points in the network, so as to maximize the bandwidth available to clients accessing that data from anywhere in the network.
A CDN can also be described as the process of delivering content to multiple servers by duplication: the computers connected over the Internet deliver Web content to different users or machines by replicating the content on multiple servers and directing each user to a copy based on proximity.
CDNs, or Content Distribution Networks, are typically used by Internet Service Providers (ISPs). The basic purpose of a CDN is to deliver static or dynamic content and web pages. The client fetches the data from the nearest server rather than from a single central server, which avoids bottlenecks around that central server.
In a CDN the content is replicated across servers, meaning it exists in multiple copies placed strategically throughout the network. When the same content is requested by many different users, a CDN can have thousands of servers to serve it. This makes it possible to deliver identical content to many users efficiently and reliably, even at times of peak Internet traffic.
When a user requests a specific page, file or program, the web hosting server that is closest in network terms (the fewest hops or the lowest latency) is determined dynamically. This optimizes the speed with which the content is delivered to that user. It brings clear economic advantages to enterprises that expect, or experience, large numbers of hits on their Web sites from locations all over the world. Problems with excessive latency, as well as large moment-to-moment variations in latency (which cause annoying "jitter" in streaming audio and video), are minimized.
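To make the proximity idea concrete, here is a minimal sketch in Java, assuming a hypothetical table of measured latencies to a few invented edge servers; real CDNs make this decision through DNS resolution, anycast routing and live network measurements rather than a simple lookup.

import java.util.Map;

public class NearestEdgePicker {
    // Given measured latencies (in ms) from a client to candidate edge servers, pick the closest one.
    public static String pickNearest(Map<String, Integer> latencyByServer) {
        String best = null;
        int bestLatency = Integer.MAX_VALUE;
        for (Map.Entry<String, Integer> entry : latencyByServer.entrySet()) {
            if (entry.getValue() < bestLatency) {
                bestLatency = entry.getValue();
                best = entry.getKey();
            }
        }
        return best; // the edge server this request would be directed to
    }

    public static void main(String[] args) {
        // Hypothetical server names and latencies, for illustration only.
        Map<String, Integer> latencies = Map.of("edge-eu", 40, "edge-us", 120, "edge-asia", 210);
        System.out.println("Serve from: " + pickNearest(latencies));
    }
}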
The baseline bandwidth each user experiences is also maximized. The difference is most noticeable to users with high-speed Internet connections who demand streaming content or large files. Among the many advantages of a CDN seen so far, another major one is content redundancy: it provides a fail-safe feature and allows graceful degradation in the event of damage to, or malfunction of, part of the Internet.
ASP.NET is a technology from Microsoft for creating Web applications and Web services. It is part of the Microsoft .NET platform and was developed as the successor to the older technology, Microsoft ASP. At the moment, the latest version of this technology is ASP.NET 4.0.
ASP.NET outwardly retains much resemblance to the older ASP, which allows developers to switch to ASP.NET easily. At the same time, the internal structure of ASP.NET is very different from ASP, as it is based on the .NET platform and therefore uses all the new capabilities that platform provides.
History of ASP.NET
After the release of Internet Information Services 4.0 in 1997, Microsoft began investigating a new model for Web applications that would address the complaints about ASP, particularly those concerning the separation of design from content, and that would encourage writing "clean" code. Work on developing such a model was given to Mark Anders, a manager on the IIS team, and Scott Guthrie, who had joined Microsoft in 1997. Anders and Guthrie developed the initial design within two months, and Guthrie wrote the original prototype code during the Christmas holidays of 1997.
Principles of ASP.NET
Though ASP.NET takes its name from the old technology, Microsoft ASP, it differs significantly from it. Microsoft completely rebuilt ASP.NET on top of the Common Language Runtime (CLR), which is the basis of all Microsoft .NET applications. Developers can write ASP.NET code in virtually any programming language included in the .NET Framework (C#, Visual Basic .NET, and JScript .NET). ASP.NET also has a speed advantage over scripting technologies: on first access the code is compiled and placed in a special cache, and subsequent requests simply execute it without time-consuming parsing, optimization, and so on.
Advantages of ASP.NET over ASP
ASP.NET has a speed advantage compared to other technologies based on scripts.
Here a comparison can be made. ASP is a derivative of Win32, XML and HTML; PHP derives from XML, HTML, Java and CGI; and ASP.NET derives from HTML and .NET (XML and XAML, respectively). Where you previously created a Rich Media Application with Flash, it is now done using the Silverlight module, since ASP.NET is a rich environment for developing and deploying Web resources. In ASP.NET you can work with any .NET language, even Managed C++ and Visual Basic, which means you do not have to think about switching to C#.
In addition to the portal functionality, installing WSS on a server provides access to the complete object model and API set that underlie Microsoft SharePoint. It includes a collection of Web Parts that can be embedded in Web pages to provide SharePoint functionality such as preference panels, document processing, lists, notices, calendars, contact lists, discussion forums and pages, and wiki pages.
WSS is available as a free download from Microsoft for Windows Server 2003 Service Pack 1 (or later) and installs on top of the Microsoft .NET Framework. WSS exists only in versions for Microsoft server platforms and cannot be used on any other OS. The WSS 3.0 download packages include the fundamentals of the product plus a set of "Application Templates" that add functionality to the base installation.
WSS technology is the core of several commercial portal technologies. In particular, WSS 3.0 lies at the heart of Microsoft Office SharePoint Server 2007 (MOSS), and WSS 2 was the framework for SharePoint Portal Server 2003.
Overview on WSS 3.0
Windows SharePoint Services (WSS) includes several main components:
An underlying framework, which includes an object model, a system of persistent storage and backup for content and configuration databases in SQL Server, and ASP.NET controls for presenting content.
Management of a "web space" (web farm), consisting of one or more servers, which hosts one or more websites.
Built-in site and page templates that can be used to quickly create your own sites and to add pages and libraries to existing websites.
Website management through web controls: editing the structure of a website, modifying the schema and contents of individual pages, creating new workspaces and lists, and editing the schema of existing objects.
A content database indexer that collects information about the content for faster searching, plus a collection of additional site and page templates, including additional logic in code, known as the "Application Templates".
A configuration wizard that helps you complete the initial setup of a site within minutes. SharePoint serves content through IIS Web sites and can use either Microsoft SQL Server or the Windows Internal Database to store its data.
Websites can be configured to return specific content for intranet, extranet and Internet networks. Such a deployment has no licensing restrictions in WSS, since the WSS software is free, but a license is required to run Microsoft's commercial portal products.
Multiple servers running WSS can be configured as part of a "server farm", which allows them to share configuration and content databases. A farm can consist of a single server or combine hundreds or thousands of servers. Each server can be dedicated to load-balancing scenarios or to storing particular pieces of content. The data in a farm can be divided across as many as 9,900 content databases. Data replication across the farm relies on SQL Server's replication and clustering capabilities.
SharePoint uses a permissions model similar to user groups in Microsoft Windows, implemented through Active Directory. Other authentication mechanisms can be added through HTML forms authentication.
Downloading and Installation of WSS 3.0
WSS 3.0 can be freely downloaded from the Microsoft website and installed on Windows Server 2003 with Service Pack 1 or later. The Windows SharePoint Services 3.0 Application Templates, including additional templates, are available separately on the Microsoft website.
WSS 2 is still available for free download from the Microsoft website and can be installed on Windows Server 2003 or later editions.
History of WSS 3.0
The first version, called SharePoint Team Services (usually shortened to the acronym STS), was released with Office XP and was available as part of Microsoft FrontPage. STS could run under Windows 2000 Server or Windows XP.
Windows SharePoint Services 2.0 was released as an update to SharePoint Team Services, but completely overhauled the application. SharePoint Team Services stored documents in regular file storage while keeping the document metadata in a database, whereas Windows SharePoint Services 2.0 stores both documents and metadata in a database and supports basic version control for items in a document library. Service Pack 2 for WSS added support for SQL Server 2005.
Windows SharePoint Services 3.0 became available to the public on November 16, 2006 as part of Microsoft Office 2007 and Windows Server 2008. WSS 3.0 was developed on the .NET Framework 2.0 and 3.0, using Windows Workflow Foundation to add the ability to track sequences of actions to the base package. In early 2007 WSS 3.0 was released to the general public. WSS 3.0 is not supported on Windows 2000 Server.
WSS 3.0 is a mature, "grown-up" product. Version 3.0 supports more features, including those most commonly used in Web 2.0 solutions, such as blogs, wikis and RSS feeds.
With version 4.0, Microsoft changed the product's name to SharePoint Foundation 2010.
Features of WSS 3.0
By default, the WSS 3.0 package includes the following features:
After separately installing the downloadable Windows SharePoint Services 3.0 Application Templates package, the following additional features become available:
Technical details of WSS 3.0
SharePoint pages are built by combining Web Parts, using ASP.NET, into a single page that can be accessed through a browser. Any web editor with ASP.NET support can be used for this purpose, although Microsoft Office SharePoint Designer is preferable. The degree to which a page can be customized depends on its design.
WSS pages are ASP.NET applications, and SharePoint Web Parts use the ASP.NET Web Part infrastructure; using the ASP.NET Web Parts APIs, they can be rewritten to extend the functionality of WSS. In programming terms, WSS provides an API and object model to programmatically create and manage portals, workspaces and users. The MOSS API, by contrast, is more oriented toward automating tasks and integrating with other applications. Both WSS and MOSS can use the Web Parts API to improve end-user functionality. In addition, WSS document libraries can be opened through an ADO.NET connection for programmatic access to files and their versions.
How Web requests are handled in WSS 3.0
The WSS web server configures IIS to forward all requests, regardless of file type and content, to the ASP.NET pipeline of the WSS web application, which either serves the requested file from the database or performs other actions. Unlike conventional ASP.NET, the .aspx files containing WSS (and MOSS) application code are stored in a SQL Server database instead of the file system, so the normal ASP.NET pipeline cannot process them directly. Instead, WSS plugs a special component (a Virtual Path Provider) into ASP.NET processing, which retrieves the .aspx files from the database for execution. With this capability, introduced in WSS 3.0, the application as well as the data it generates and manages can be stored in a database.
As already mentioned in the previous post, the basic and mandatory requirement for using virtualization with Windows Server 2008 Hyper-V is a 64-bit processor with hardware support for DEP and for virtualization (Intel VT or AMD-V). The other requirements depend heavily on the tasks you plan to perform: processing power, memory and disk space should be sized according to what is needed to run the required virtual machines for all applications. Sizing the virtual environment is itself a very interesting and large topic, and maybe I will write about it soon.
There are two ways to increase reliability. The first is more reliable components: for example, hard drives used in servers have a much longer MTBF than those used in home computers. The other way is redundancy: all components, or at least the most critical ones, are duplicated. For example, hard drives work in "mirror mode" (RAID 1), and if one drive fails the server continues to work on the second one; only the system administrator notices, not the users of the system. These two paths are not mutually exclusive; on the contrary, they complement each other. Clearly, any increase in resiliency raises the price of the whole system, so it is important to find a middle ground. First and foremost, you must estimate, in monetary terms, the damage a system failure could cause, and invest in resiliency in proportion to that amount. For example, the failure of the hard drive in a home computer that stores only some photos and assorted "information" is not a great disaster; at most you pay for a new hard drive. It is wiser to keep important information backed up, for example on another hard drive or a DVD-RW.
Beyond individual server components such as hard drives and memory modules, the whole server can be made redundant. In that case two or more servers operate as a group and are presented to the user as a single server that runs certain user applications and answers requests. General information about the cluster configuration is stored on a shared disk resource referred to as the quorum. The cluster requires continuous access to this resource from all cluster nodes; a storage system with an iSCSI, SAS or Fibre Channel interface can be used as the quorum resource.
If one of the servers (called "cluster nodes") fails, the user applications are automatically restarted on the functioning nodes, and each application either does not stop at all or stops for a period short enough not to cause large losses. The process of moving applications from a failed node to a working one is called failover.
To identify failed nodes in a timely manner, all nodes in the cluster periodically exchange messages known as the "heartbeat". If a node stops sending its heartbeat, this means a failure has occurred, and the failover process starts.
Failover process: moving a service off the failed node
In some cases, depending on the settings, after the failed node is restored the application can be moved back to it, a process called failback:
Failback process: transferring the service back after the failed node recovers
Requirements for creating a failover cluster in Windows Server 2008
So, what do we need to create a cluster in Windows Server 2008? First, we need a shared disk resource to be used as the quorum and for data storage. It can be any storage system (SAN) that supports the iSCSI, SAS or Fibre Channel protocols, and of course all the cluster nodes must have the appropriate adapters to connect to it. All servers operating as cluster nodes should ideally have completely identical hardware; if not, then at least processors from the same manufacturer. It is also highly desirable that all nodes in the cluster communicate with each other over more than one network interface: the extra interface is used as an additional channel for heartbeat exchange, and in some cases for more than that (for example, when using Live Migration). If we are going to use virtualization, all nodes must also meet the system requirements for Hyper-V (in particular, the processor requirements).
Complex solution: virtualization + failover cluster
Thus, as already mentioned, a solution based on virtualization can be deployed on top of a failover cluster. What does that give us? It is simple: we can use all the advantages of virtualization while getting rid of its main drawback, the single point of failure represented by the hardware server. If one of the servers fails, or during planned outages (hardware replacement, installing OS updates that require a reboot, and so on), the virtual machines can be moved to a working node quickly, or even invisibly to users. Recovery time after a system failure is measured in minutes, and users will not notice scheduled server shutdowns at all. There is one disadvantage: the system is more expensive. First, you will probably need to buy a storage system, which can cost serious money. Second, you need at least one more server. Third, working in a cluster requires a more expensive edition of the OS, Enterprise or Datacenter. In principle this is compensated by the free right to run a certain number of guest operating systems (up to four per server in Enterprise, and without restrictions in Datacenter), or you can use the free product Microsoft Hyper-V Server 2008 R2, which in the R2 release supports clusters.
Ways to move virtual machines between nodes in a cluster
So, let's say we have a cluster running virtual machines. Recently users started complaining that they did not have enough system performance. Performance analysis showed that the applications do not have enough RAM and, at times, CPU power. It was decided to add several RAM modules and an additional processor to the server. Once the processor and memory modules arrive from the vendor, the problem is actually performing the replacement. As we know, this requires switching the server off for a while. Nevertheless, users need to keep working, and even a 10-minute outage means losses. Fortunately, we built the cluster earlier, so all we have to do is move the running virtual machines to another server.
How can this be done? There are three ways:
1) Move: simply moving virtual machines from one host to another. The virtual machine is first put into an Offline state (by shutting it down or saving its state) and then started on another node. This is the simplest way, but also the slowest and the most "noticeable" to users. Before moving, you have to notify all users so they can save their data and exit the application.
2) Quick Migration: the contents of RAM are saved entirely to disk, and the virtual machine is then started on the target host with its memory contents restored from disk.
3) Live Migration: one of the most exciting new technologies of Windows Server 2008 R2. Live Migration copies the contents of a virtual machine's memory directly over the network from one host to another, bypassing the disks. The process is somewhat similar to creating shadow copies of open files (VSS). The final switchover takes less than a second, shorter than a TCP connection timeout, so users do not notice anything at all. As a result, all scheduled maintenance that requires shutting down a host can be carried out during normal working hours without distracting users from their work.
It should also be noted that all of these methods, including Live Migration, are standard features of Windows Server 2008 R2 and do not require buying any additional software or licenses; however, each operating system running in a virtual machine still requires its own license.
Server virtualization lets a single server do the work that previously required ten. Of course, this saves on practically everything: hardware, software and licenses, and overheads. Nevertheless, with virtualization the overall reliability of the system drops sharply. In this article we learned how to improve reliability through the use of failover clusters. The next article is devoted entirely to one of the "highlights" of Windows Server 2008 R2: Live Migration.
Virtualization, in computing, is the presentation of a set of computing resources, or a logical combination of them, in a way that gives some advantage over the original configuration. This new, virtual view of the resources is not limited by the implementation, geographic location or physical configuration of the component parts. Perhaps that sounds too complicated for an untrained person, so let me translate it into "human language".
Many vendors develop solutions based on virtualization; I would say pretty much all of them do. After all, logical drives, which are just partitions on a single physical hard drive, are also virtualization. SMP technology, which presents two or more physical processors to programs as a single virtual one, is also virtualization.
Three aspects of virtualization are usually distinguished: server virtualization, presentation virtualization and application virtualization. Presentation virtualization is more or less familiar to almost all system administrators; the most striking example is Microsoft Windows Terminal Services. Application virtualization, the creation of a special isolated environment inside the operating system to run individual applications, is a topic big and interesting enough for a separate long article. Here and below, the word "virtualization" will mean server virtualization. Windows Server 2008 added built-in support for virtualization (a hypervisor) called Hyper-V. In Windows Server 2008 R2 the hypervisor was substantially reworked and became known as Hyper-V 2.0.
Let's take a closer look at server virtualization. What is it? In plain language, it is the creation of a software-emulated environment that imitates the hardware of a physical computer: CPU, RAM, hard disk, I/O devices. On this virtual hardware an operating system (called the "guest OS") can run, along with applications. All of this works like a full server, except that it does not physically exist: it runs virtually, inside the OS on a physical server (for that OS the term "host OS" is used). Moreover, two or more, and sometimes even dozens of, virtual servers can run simultaneously on a single physical server.
What might this be useful for? Initially, virtual machines were used only for testing: conducting experiments with them is much easier, quicker and, most importantly, cheaper than with a real server. I am sure many sysadmins have at some point tried something out in a virtual machine. But now virtualization is increasingly used in production. There are substantial reasons for this, although, as with any solution, there are also disadvantages. More on that below.
Advantages and disadvantages of virtualization
The most important thing: virtualization allows a more rational allocation of server hardware resources. Indeed, most servers use at best 10% of their resources: processing power, memory and so on. Virtualization lets you consolidate several lightly loaded servers onto one server that is loaded somewhat harder. Clearly, a single, even somewhat more powerful, server will be cheaper than several separate ones.
Likewise, it is logical to assume that one server will consume much less power and occupy less space in the rack (or on the desk, or under the desk, depending on your setup). Another very important advantage is ease of administration. Every administrator is regularly faced with the need to go to a server and perform some manipulation directly at its console when it has "gone down". Virtualization lets you access the consoles of virtual servers directly from the administrator's workstation and virtually eliminates trips to the server room. In addition, backup and disaster recovery of servers are greatly simplified. All administrators know how hard it is to make a working backup of a server's system partition: it often requires buying additional software and, in some cases, restarting the server. Virtualization lets you back up a server's disks on the fly, transparently to users, and recovery is reduced to simply copying a few files.
But, unfortunately, every stick has two ends, and besides all the advantages, solutions based on virtualization have a significant drawback: lower overall system reliability. Since several virtual machines run simultaneously on the same physical server, a failure of the server itself (for example, a "burned out" processor or RAID controller) causes the simultaneous failure of all the virtual machines running on it and, accordingly, of all the services they provide. So, together with virtualization it is advisable to use fault-tolerant solutions, in particular those based on failover clusters. More on this question below.
There is one more drawback, specific to virtualization on Windows Server 2008: the hardware requirements include a 64-bit processor with hardware support for virtualization and DEP. So a lot of older servers with 32-bit processors simply will not do. Nevertheless, it is currently difficult to buy a server that does not meet the technical requirements of Hyper-V, because servers with the older processor models were recently taken out of production by all major vendors.
Clouds consist of a huge number of particles of water vapor, reaching hundreds of millions. Clouds have no central control and basically go where the wind blows. From this perspective, the large number of client computers and servers on the Internet, along with the many different purposes and entities that direct them, are much like clouds. Add to this the wireless data revolution that the cell phone companies have brought us, and it really feels as if we are all covered by a cloud of invisible computing power.
Since the early electronic computers, there has been a clear division of labor among four main functional parts of a computer: the processor, volatile working memory, input/output devices, and nonvolatile storage.
The first three parts put the "computer" in computers. It is the fourth part, where the important data assets are usually stored, that has changed most radically with the advent of cloud computing. Important data assets reside in nonvolatile memory so that they are protected against power outages, whether or not the outage was deliberate. Generally, nonvolatile memory devices are hard disks, but they can also be solid-state devices, such as Secure Digital (SD) cards, and even tape devices (almost obsolete now). But these storage devices have their limitations.
Time passed, technology progressed, and computer networks arrived, in which an organization's important data assets could be centralized on a computer shared by several terminals and backed up regularly, a basic IT function. This model (known as the mainframe model) offered many advantages, one being to lighten the load at each terminal site. Small offices with little more than a small terminal (keyboard, mouse, monitor and PC) could access gigabytes of corporate data and the processing power of large mainframes without cluttering up the place they were wired in from.
The next major paradigm shift came with the network of networks known affectionately as the Internet, where absolutely gigantic computer systems (and their LANs) can serve large populations of small terminals anywhere in the world a dish can be placed. The wireless, remote nature of this configuration is what we know as the cloud.
Then came personal digital assistants (PDAs), mobile phones and smartphones, in which the miniaturization of computers progressed to the portable form factor we know and love so much. Suddenly, intelligent terminals are in the hands of countless millions of productive people, producing and consuming information at prodigious rates.
From the mid to late 1990s, e-mail and the World Wide Web were the most popular applications dominating the cloud. Most people interacted with the cloud using a Web browser and found the Internet a relatively simple application. With the commercial success of companies such as Yahoo and Google, servers, network connections and terabyte hard drives in the cloud began to replace local nonvolatile storage as the preferred place to keep data. As many forward-thinking visionaries predicted, the cloud has become a modern utility, like water, telephones and electricity. With the digital mobile phone network acting as an Internet Service Provider (ISP), the cloud grew, and millions of portable devices became the main tools for displaying the data residing on those servers.
As clouds move and are changed by the winds, so did the paradigms under which these devices operate: the terminals became smaller, more powerful and more portable, while the servers, likewise, became more powerful and better able to meet the needs of data users through virtualization software and the metering of usage.
Companies no longer need to maintain large and expensive "parks" of servers 24 hours a day when there is a less expensive alternative: hiring such services through fully managed cloud hosting providers. Through virtualization, applications that previously ran on custom environments can be duplicated, or "imaged", to run on the provider's cloud servers. And with proper metering of these services, the company will not need to pay high prices for the times when its services are being used only minimally.
As hardware technology moved on, so did software, and new applications were created. For example, location-based services map businesses near the place where the cell tower or the Global Positioning System (GPS) has determined that you are. New marketplaces tailored to downloading and trying out useful programs and data files have appeared, such as the Android Market. No doubt we will see further advances in these new, unique applications of cloud computing: for example, companies could sort and select regional contact information and then automatically download the day's call list to the Android phone of a regional sales representative.
Cloud computing, in which mobile devices are balanced against powerful servers, needs an operating system that makes the most of what system architects and developers can do on a small client computer. Android is that operating system.
First, Android is a software stack for mobile devices. This means that at the top of the priority list are preserving battery power and managing limited memory resources efficiently. There are five distinct layers to the Android stack:
The Acorn RISC Machine (ARM) Linux kernel forms the foundation upon which all the other layers rest. Linux is a proven, highly reliable technology, and the ARM processor family is known for high performance at very low power requirements.
The libraries provide low-level, reusable, shareable code for basic functions such as codecs (software for encoding and decoding digital video and sound), rich graphics for presentation on a small screen, Secure Shell support for encrypted TCP/IP traffic to the cloud, the Web browsing component (WebKit), SQL database functionality (SQLite) and the standard C library functionality expected in a Linux system.
The Dalvik runtime bytecode interpreter, very similar to the Java™ bytecode interpreter, includes some distinct features that uniquely define Android's security and power-conservation model. Each running application, for example, has its own user ID and its own copy of the interpreter, so processes are rigidly separated for safety and reliability.
The Android application framework lets you use and replace components as required. These high-level Java classes are tightly integrated components that define the Android API.
The core applications include the Android WebKit browser, Google Calendar, Gmail, the Maps application, an SMS messenger client and a standard e-mail client, among others. Android applications are written in the Java programming language, and many more can be downloaded quickly from the Android Market.
Each Android application can be further divided into distinct functional units:
Activities are the modules of an Android application that extend the base class Activity and define a view-based interface that responds to events. If an application consists of three windows (for example, a login window, a text-viewing window and a file-viewing window), each one is usually represented by a different Activity class.
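As a rough illustration, the following sketch shows what one such window could look like in code; the class name LoginActivity and the layout resource R.layout.login are hypothetical, invented only to show the pattern.

import android.app.Activity;
import android.os.Bundle;

public class LoginActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState); // always call through to the base Activity class
        setContentView(R.layout.login);     // bind this Activity to its view, a layout defined in XML
    }
}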
Android keeps a history stack of activities for each running application, starting from the home page, and the user can press the Back key to step back through this history.
Intents, like activities, are a special class of application code that describe and define what an application wants to do. Intents include an untargeted, late-binding layer that allows sophisticated reuse and replacement of components. For example, an application can display a button labeled Customers that, when clicked, displays a list of contacts who are customers. And here the lack of a fixed target comes in: there is no need to use the default viewer for those contacts; a different viewer can be substituted instead.
For some applications this can be a very powerful form of application integration. Possibly a topographic map is better for a specific display than the default map view.
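A small sketch of that substitution idea, assuming a hypothetical ShowMapActivity and an illustrative location query: instead of hard-coding a particular viewer, the code fires an implicit Intent and lets Android resolve whichever map viewer is registered for it.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;

public class ShowMapActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Uri location = Uri.parse("geo:0,0?q=London");             // illustrative query only
        startActivity(new Intent(Intent.ACTION_VIEW, location));  // any registered map viewer may handle it
    }
}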
Classes that extend BroadcastReceiver define code that runs when external events trigger it. Events such as the firing of a timer or a touch on the phone can be tracked this way. Typically, such code does not display a window, but it can use the NotificationManager class to alert the user to something that needs attention.
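A minimal sketch of such a receiver, assuming it is registered in the application manifest for a system broadcast such as the battery-low event; the class name is hypothetical, and the body only logs the event where a NotificationManager alert could be raised.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

public class BatteryLowReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // No window is shown; a real application could raise a NotificationManager alert here.
        Log.i("BatteryLowReceiver", "Received broadcast: " + intent.getAction());
    }
}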
A service is application code that runs at a low level, without a monitor or UI. It is usually something that should run for a long time in the background. A perfect example is a media player playing a song list: although the media player application presents a UI that lets users adjust their playlists, the program hands control to a service to actually play the songs from the playlist provided.
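Here is a minimal sketch of that pattern, assuming a hypothetical MusicService; the actual playback logic is omitted and only the service life-cycle hooks are shown.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class MusicService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Start or resume playback here; the UI Activity merely hands control to this service.
        return START_STICKY; // ask Android to keep the service running in the background
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // this sketch does not expose a binding interface
    }
}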
The Android security model lets programs own only their own data. If programmers wish to share data between quite a few different programs, they can set up content providers for this purpose.
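As a sketch of consuming shared data through a content provider, the following hypothetical Activity queries the stock Contacts provider via a ContentResolver; in a real application the READ_CONTACTS permission must be declared, and the returned rows would be displayed rather than simply closed.

import android.app.Activity;
import android.database.Cursor;
import android.os.Bundle;
import android.provider.ContactsContract;

public class ContactListActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask the Contacts content provider for all rows it is willing to share with this app.
        Cursor cursor = getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI, null, null, null, null);
        if (cursor != null) {
            // Each row is one contact; a real app would iterate and render them here.
            cursor.close();
        }
    }
}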
The power of free software, like the power of free and creative people, should never be underestimated. Free of the proprietary APIs and business interests that generally obstruct progress in software engineering, the Android platform has a large and very active developer community whose combined talents really make the sum greater than the parts.
Want to empower your programming career?
Learn how to program on the Android platform and, someday, you will have millions of mobile phone users in your potential market, some of whom may need your program.
The heart of Android is ARM Linux. That in itself should inspire confidence in this platform's ability to grow rapidly in importance. Linux is a fast and secure operating system familiar to many thousands of programmers. Many Linux-based systems are known to run for consecutive years of uptime serving applications in the cloud, and that really defines reliability.
Development Environment: Eclipse, Windows, Linux
Developers have several options for their Android development environment. They can use Microsoft Windows XP or Windows Vista, Macintosh OS X (v10.4.8 or later, x86 only) or desktop Linux (preferably Ubuntu). A software development kit (SDK) can be downloaded for all of these commonly used operating systems.
You can use a GUI IDE such as Eclipse or Sun Microsystems' NetBeans, or the famous "real programmer" method of a command line and your editor of choice. The choice is yours.
Organizations can also ship proprietary programs and data on Android, thanks to the types of software licenses used. This keeps them invested in the platform but does not limit consumer choice. It is a winning combination for all parties.
There are several resources in the cloud for Android development, including a number of Android community forums and wikis, as well as several blogs related to Android programming. The main company driving the Android phenomenon, Google, is fundamentally a communications company and therefore hosts many useful forums (known as Google Groups) for communication between developers of different skill sets and talents.
The links in the Android Resources section cannot even begin to show the vastness of the Android universe. Cloud computing, and Android specifically, is something very, very interesting.
Time will tell what new, innovative applications of cloud computing will be created in the next few years, as developers try to outdo all the creations of the great developers who preceded them.
XML (Extensible Markup Language) is the standard used to exchange data across applications. It provides a universal way to exchange information between companies. Its structure makes it perfect for online applications, and it works whether the data sits at the data source or at a remote location.
XML is used as the standard language of communication in the context of Web services. It works regardless of the programming language or operating system used to implement the applications. Without it, it would be virtually impossible to let disparate systems communicate with each other in a flexible and dynamic manner. XML's extensibility allows you to create new tags as needed. The standard identifies the types of data and organizes them in a way that makes it possible for a computer to analyze them or produce a report. The XML standard is used more and more because of its versatility, independent of operating system and programming language.
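To illustrate the point about computers analyzing XML, here is a minimal Java sketch that parses a small, purely illustrative <order> document with the standard DOM API; the element names and values are invented for the example.

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlExchangeDemo {
    public static void main(String[] args) throws Exception {
        // A self-describing document that another application, on any platform, could read the same way.
        String xml = "<order id=\"42\"><customer>ESDS</customer><amount currency=\"USD\">100.00</amount></order>";
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // The receiving application navigates the parsed tree by tag name.
        String customer = doc.getElementsByTagName("customer").item(0).getTextContent();
        System.out.println("Order placed by: " + customer);
    }
}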
The concept behind XML is obviously not something new. XML grew out of a larger set of specifications called SGML, standardized in 1986. The W3C began developing the XML standard in 1996 with the idea that it would be simpler to use than SGML but would have a more rigid structure than HTML. Since then, many software vendors have implemented XML features in their technologies.
The popularization of the Internet facilitated the emergence of numerous isolated applications that communicate mostly over digital media. This context created the need for a consistent and interoperable standard for data transfer, and it has been the basis for several hundred language definitions (approximately 500). In terms of consistency, the W3C has made it clear that the future of markup languages will be based on XML.
XML was created not as a special-purpose language but as a "mother language", a generic meta-language, and extensibility was the goal of its definition. XML data is self-describing plain text, so it is platform independent. With it, it is easy to transmit a document from one site to another via HTTP. More precisely, XML is used for communication and information exchange between disparate applications. One of the main purposes of XML is the storage and exchange of transaction data between companies, i.e., business-to-business (B2B).
XML is easy to process and easy to develop with, and it is considered of great importance on the Internet and in large intranets because it provides interoperability between computers through a flexible, open, device-independent standard. With it, applications can be built and updated more quickly, and structured data can be viewed across platforms.
XML has become the universal protocol for transferring data between sites via HTTP. The trend is for HTML to remain the language used to display documents on the Internet, while XML is increasingly used to transmit, exchange and manipulate data.
Why are companies like Amazon.com, Sun, IBM, ESDS and many others developing Cloud Computing infrastructure?
To try to understand it, let us go back 100 years and see how electricity evolved.
Soon after electric power arrived in the late 19th century, people (and companies) built their own sources of energy. They then realized that it was easier and cheaper to buy energy than to take responsibility for maintaining their own generators and all the infrastructure required to generate electricity themselves.
Perhaps the time is coming when people will think twice about keeping their own data centers (unless that is your company's business).
Does each company need its own data center?
Why should every company or business (regardless of size) have its own server farm to host its applications, its ERP, its CRM, its databases?
In many small and medium enterprises, servers are only partly used: at about 20% of capacity, while consuming 100% of their energy. This is a problem, and the bad news is that it is going to get worse. We will not have an inexhaustible source of energy to sustain the rapid growth in the number of servers and PCs in the world.
Green IT initiatives and others help, but the concept of Cloud Computing, in my opinion, is what can actually optimize the utilization of software, hardware and infrastructure most effectively. Each company or individual will use only the resources it needs, at the time they are needed.
I have a friend whose household (he, his wife and a 10-year-old daughter) contains three laptops, two PCs working as fairly robust home servers, several monitors, an external hard drive and two broadband connections to meet the demands of their jobs... and he is thinking, rightly I believe, of consolidating some of the machines onto one server. If this is the reality in many middle- and upper-class homes, imagine how much the hardware base is growing in business.
Is Cloud Computing ready to be used in companies?
Let’s see what this study says.
Cloud computing is a new IT outsourcing model that does not yet meet the criteria of enterprise IT and is not supported by most of the key corporate vendors. It is wildly popular with startups because it exactly fits the way small businesses like to buy things.
Applications that can benefit from Cloud Computing:
The infrastructure needs to improve (a lot!), especially here in India.
How companies can ensure availability of services?
Look what happened to one of the most popular services and one of the best examples of Cloud Computing:
And what if your business depended solely on cloud hosting services? Even so, the future of enterprise processing seems to point toward Cloud Computing.
Cloud Computing is related to:
1. First and foremost, the idea is to combine the metaphor used to represent the Internet (a large cloud, blah blah blah...) with the term "computing".
2. Bingo! There is light! Cloud Computing = virtual servers available on the Internet! Some analysts define the term as a simplified version of Utility Computing, with the servers living in the "big network".
What is the purpose of cloud computing?
It answers (one of) the dreams of every CIO: increasing processing capacity on the fly, without the need for new investments in infrastructure, staff training or additional software licenses (the best part). It is about services you pay for as you use them, in real time, extending the capacity of your (always limited) data center. But what does Cloud Computing have to do with SaaS?
You can relate Cloud Computing with:
SaaS: a type of cloud in which a system or solution is made available via a Web browser to thousands of customers through a multi-tenant architecture (a single instance of the software "serving" multiple client organizations).
Utility Computing: this is what organizations like Amazon.com, Sun, IBM and ESDS are providing: virtual servers and storage that companies can use on demand.
Web services in the cloud: a concept very close to SaaS; providers offer web service APIs that let developers exploit the functionality of information systems and databases.
Platform as a Service: another variation on SaaS. This type of Cloud Computing offers an entire development environment from which you build and customize your applications (e.g. ERP). A good example is Salesforce.com's Force.com platform.
With all this supply of "Cloud Computing and Cloud Hosting Services", I would even suggest a new term (which makes sense), "Sky Computing": several "clouds of services" that users can plug into and invoke services from individually.
The SaaS model has gradually begun to gain traction, and every day we see more examples of its adoption worldwide. In some countries the model is more widespread than in others, but overall it is already on the agenda of business and IT executives.
A recent survey of CIOs worldwide, conducted by an IT industry analyst firm, showed that by 2014, 53% will opt for the SaaS model when acquiring new software. The current on-premise model accounts for 47% of preferences.
Another sign of the attention the topic is receiving is the number of questions and doubts I hear when debating Cloud Computing. I have collected some of these questions and will share them with you.
First of all, it is worth remembering that the SaaS model is at least ten years old and is a natural evolution of the old ASP (Application Service Provider) model. What is new is the technology behind SaaS, based on cloud computing, which promotes the multi-tenant model: a single copy of the code shared by many customers, contracted and accessed remotely via self-service portals, under a structured commercial model of pay-for-use or subscription, like a cell phone line.
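To make the multi-tenant idea concrete, here is a minimal Java sketch: one running copy of the code keeps every record keyed by a tenant (customer) identifier, so tenants never see each other's data. The class, the invoice strings and the tenant IDs are hypothetical and stand in for what would normally be a tenant-scoped database.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MultiTenantStore {
    // One shared instance of the code; data is always partitioned by tenant ID.
    private final Map<String, List<String>> invoicesByTenant = new ConcurrentHashMap<>();

    public void addInvoice(String tenantId, String invoice) {
        invoicesByTenant.computeIfAbsent(tenantId, id -> new ArrayList<>()).add(invoice);
    }

    public List<String> invoicesFor(String tenantId) {
        // Every read is scoped to the calling tenant; there is no cross-tenant view for customers.
        return invoicesByTenant.getOrDefault(tenantId, List.of());
    }

    public static void main(String[] args) {
        MultiTenantStore store = new MultiTenantStore();
        store.addInvoice("acme", "INV-001");
        store.addInvoice("globex", "INV-002");
        System.out.println("acme sees: " + store.invoicesFor("acme")); // only acme's own invoices
    }
}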
Generally, a SaaS product has the following characteristics:
Based on services:
This means the client only wants to enjoy the software's features and does not care about the technical characteristics behind it, such as the hardware platform, the database and so on. The client wants to consume software as a service.
An analogy: the client wants the laundry done, not the washing machine.
Scalability:
A SaaS provider has to have the computing capacity to allow growth in the number of users without affecting the performance of its existing base. In practice, this is a significant issue to consider when evaluating SaaS providers.
Economy of scale:
The SaaS model means sharing par excellence, and service providers have to implement a multi-tenant architecture to achieve adequate economies of scale. The more users share resources, the greater the economies of scale obtained and the faster the provider's investments are amortized. That allows the provider to offer a favorable cost of ownership compared to the traditional on-premise model.
Commercial model pay-for-use or by subscription:
The customer of a SaaS product is not acquiring a software asset to be installed on its own servers, but contracting a subscription to use it for a specified period of time, paying for what it consumes, much like an electricity bill. And, as with a cell phone line, if the service does not suit you, you can, at least in theory, replace it with another.
One question I always hear is whether this pay-for-use model is real or a sales pitch... There are business models where you pay monthly for what you consume, and there are also models where you pay for the first year of the subscription in advance. In one example, in a three-year contract, you pay 1/3 in the first year upon signing and the remaining 2/3 over the following two years. That resembles the traditional on-premise model, where you pay in advance for the license to use the software and for subsequent maintenance. One difference is that there is no software maintenance by the user; all that work sits behind the scenes with the provider.
This model has been encouraged by the software industry because it eases the transition to SaaS for software companies that make money with the current licensing model.
A question I hear that caught my attention is whether SaaS caters to core corporate systems or works only at the periphery, i.e. in small departmental systems. I have no accurate data, but one study shows that, on average, customers have somewhere around 50 users.
And this issue is linked to another interesting question. If business executives contract SaaS while bypassing the IT department, would this create a time bomb? Yes, because sooner or later these applications have to integrate with those running on the company's servers, or even with other clouds, and integration is still a challenge for the cloud host. At first everything is wonderful, but as more and more SaaS applications spread uncontrollably, bypassing IT, the potential problem becomes serious. I would say that IT departments should take control of the process and define the rules for selecting and contracting SaaS, ensuring the integration of applications.
Another question was: "Will the SaaS model kill off the on-premise model?"
In my personal opinion, at least for the next ten years we will see the two models coexisting. There is a lot of legacy software that will take years to migrate to the cloud. But the SaaS model will keep spreading, and at some point in the future we might not even call it SaaS any more; it will simply be the dominant model. Even then, there will still be room for on-premise.