The term cloud computing reflects a shift in the focus of computing. Today, for example, people are far less concerned with owning a powerful machine of their own (maximum speed, memory, and processing power), which was once the common dream. Needs have changed and people have adapted to them: practicality comes first, which explains the huge rise of devices built for mobility and portability.
This is not the end of the traditional computer, of course; many people still need more power for specific purposes. But since almost everything is now based on the Internet, ordinary users can get by with machines that offer a better cost-benefit ratio.
Cloud computing is therefore based on sharing memory, storage, and processing power among computers and servers connected over the Internet. Data can be accessed from anywhere in the world, you can keep personal files (photos, documents, videos, and music) online, and there is often no need to install software locally, since the software itself is available online.
There are risks for the companies that develop operating systems (Microsoft, for example): these manufacturers need to adapt and contribute to "cloud technology" – migrating to the web and creating online machines – so as not to lose their audience and run into problems in the future.
Computer prices are falling, and so is the cost of Internet access.
A company that is already well ahead in this area is Google, which integrates a great deal of functionality into its ecosystem.
Transitions are, by definition, complicated. In nature, a transition can take thousands of years and wipe out an ecosystem, an organism, or even a species. Today the Information Technology market is undergoing one of its biggest transitions since the move from mainframes to lower-cost platforms. For the first time, a revolution has started outside the company, with technologies such as broadband and smartphones democratizing access to information and bringing new paradigms.
One of the strongest arms of this "user revolution" is the advent of cloud computing. Services that were previously within reach only of large corporations are now available to small and medium-sized businesses, professionals, and consumers at competitive prices.
A small business can now have a server running the latest version of powerful email software, allowing a group of employees to easily access documents, appointments, and files from any Internet-connected device through a small-business cloud hosting plan.
To adapt to this new reality, IT professionals need to change the way they see these demands. It no longer matters which software is used or where it runs – in the cloud or on a local system – what really matters is the data. Working this way, the service becomes better and more agile, and IT can control the whole information-security process regardless of how users access it: smartphones, tablets, desktop applications, or web services. The IT department thus helps the business face the new challenges of the market.
In the short term, some markets may not adopt cloud computing. Sectors where information is highly confidential, such as the financial market and security agencies, tend to avoid it; for them the cloud raises more questions and doubts than it offers conveniences. These organizations can, however, use a "private cloud": a secure environment managed by their own IT department and not exposed to the public Internet. Initiatives in this direction can already be found in the market.
In the age of the user as king, IT needs to show that it is ready for the battles to come.
According to Gartner analysts, by 2020 the network will connect 26 billion devices, and this rapid growth of data will require a serious rethinking of how data center facilities are managed.
Recent studies estimate that by 2020 about 26 billion sensors, devices, and other pieces of equipment will be connected to the Internet. This rise of the Internet of Things will produce a huge amount of data that must be processed and analyzed in real time, Gartner researchers say. Real-time processing will in turn increase the load on data center providers, confronting them with new challenges in security, resource requirements, and information analysis.
The Internet provides remote access to resources and carries the flow of data between those resources and a centralized control system. Resources can be integrated into new or existing enterprise systems, and by analyzing the data an organization can obtain real-time information about their location, current status, supported functionality, and so on.
Gartner believes that the growth in network connections and in volumes of corporate data will increase the demand for management of distributed data centers. The trend of recent years, in which many large enterprises sought to centralize their data center operations, will reverse.
The development of the Internet of Things will substantially increase the input streams of data arriving from globally distributed sources. Transferring all of this varied data to a single place for further processing is virtually unrealizable for technical and economic reasons, so the recent trend of centralizing applications to reduce costs and improve security is incompatible with it.
Presenting these incoming data streams as a uniform allocation of resources under the new architectures is no easy task for data center staff and raises many questions about data management. Businesses need to be selective about which information they store and protect, because keeping all of the collected data is too expensive.
Although we spend a great deal of time discussing the variety of cloud services, in most cases we interact directly with only a small part of them. Ordinary people, ordinary computers, ordinary tasks. The core purpose of such a service, however much additional functionality it may offer, is synchronization and file storage. These may seem like elementary, basic tasks, but they are what the service does 90% of the time, and if they are not implemented properly, nothing else matters.
It may seem that all modern services now handle these tasks satisfactorily, but that is not always the case. Files can still be lost during synchronization; the algorithms behind these basic operations have been refined for years, yet it still happens. Take iCloud, which combines a closed approach with a poor API: its users regularly lose files, especially when synchronizing large numbers of small files, a scenario that is difficult for other services as well. Even the most popular services have failures. It is also interesting to compare how quickly different services synchronize data between devices.
In the end, synchronization is the main task of these services, and if you work on multiple computers with different operating systems, synchronization speed and reliability become the most important factors when choosing one (apart from the available free space, added functionality, and so on).
For the comparison I chose the most popular services – Dropbox, Google Drive, Microsoft SkyDrive, and Box.net – all of which I have used at various times. Each has its own unique advantages: online documents, streaming of music and video files, and so on. But I was interested only in basic synchronization: how quickly files are synchronized, whether anything is lost, and how much the results vary.
I ran tests with various types of files – large files, arrays of small files (think of photo collections), and mixed sets. Unpredictable factors play a large role in this kind of testing, so each test was repeated 10 times. The difference between measurements sometimes fell within the margin of error and sometimes was very significant. During testing I realized that what matters is not only the peak speed a particular service can show, but also the stability of the results. Put simply, the winner is not the one that is sometimes the fastest, but the one that is never the slowest: synchronizing the same set of files sometimes took 10 seconds and sometimes a minute.
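The original test scripts are not included here, but a timing harness along these lines can reproduce the methodology. This is only a minimal sketch: the service clients are simulated, and a real harness would copy the test set into each service's locally mounted folder and poll a second machine until the files appear.

```python
import random
import statistics
import time

SERVICES = ["Dropbox", "Google Drive", "SkyDrive", "Box.net"]
RUNS = 10  # each test was repeated 10 times, as described above

def run_one_sync(service: str, dataset: str) -> float:
    """Time a single synchronization run.

    Placeholder body: a real harness would copy `dataset` into the
    service's sync folder and wait until a second machine sees the files.
    Here we just simulate a variable sync time.
    """
    start = time.monotonic()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for the real sync wait
    return time.monotonic() - start

def benchmark(dataset: str) -> None:
    for service in SERVICES:
        times = [run_one_sync(service, dataset) for _ in range(RUNS)]
        # Stability matters as much as peak speed: report spread and worst case.
        print(f"{service:13s} mean={statistics.mean(times):.2f}s "
              f"stdev={statistics.stdev(times):.2f}s worst={max(times):.2f}s")

if __name__ == "__main__":
    benchmark("mixed-file-set")
```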
Cloud Data Storage Services
Dropbox – The old-timer of the cloud movement has no intention of giving up. Despite the amount of free space it hands out being miserly by today's standards, its freemium model is sustained by a large user base and a well-made referral program, and by performing a few simple actions and exploring some of Dropbox's advanced features a user can easily bring the standard quota up to 5 GB. There are also paid plans, though they are a little expensive by current standards. The cost is offset by Dropbox's efficiency: in most cases it synchronizes faster than its competitors and, more importantly, it has almost no episodes where performance drops to nearly zero (and the LAN Sync feature speeds up local synchronization even further). I have never seen it lose a file; everything simply worked without glitches on every computer, whatever operating system was installed.
Dropbox also has clients for all the popular operating systems – OS X, Windows, and various versions of Linux – and part of its popularity comes from being the only one of these services to offer a Linux client across different distributions.
Google Drive – the service Google actively promotes along with the rest of its package (let us not even mention G+). Drive is popular nonetheless, not least because of the Docs office suite, which has not gone away and continues to complement it. Although it cannot edit Word files directly, having it at all is a big plus. Another plus is the combined storage pool across Drive, Gmail, and Picasa, giving 15 GB of free space in total – more than any of the other popular services (special promotions aside) – although how much is actually available depends on how heavily you use those other services. Drive is still used for file sharing less often than Dropbox. In testing, Drive worked more slowly than Dropbox but held second place consistently, and no files were lost; the synchronization errors that bothered early users have apparently been removed. There are Drive clients for OS X and Windows; the Linux client has been "in limbo" for several years, and a Windows Phone client probably should not be expected given the escalating conflict between Google and Microsoft.
Microsoft SkyDrive – promoted as an integrated solution that should be the link between all Windows-based computers, this service is deeply integrated into Windows 8 and Windows Phone 7/8. On those systems SkyDrive automatically saves not only user data but also, for example, system recovery data in case of failure – essentially iCloud, but more open and friendlier to other platforms, with apps for all the mobile platforms and OS X. 7 GB is available for free, and long-time SkyDrive users could claim a bonus raising the quota to 25 GB. Prices for extra space are very reasonable. In testing, SkyDrive came third: despite periodic "sprints", its average results were lower than the competitors'. In addition, the client would periodically hang and stop synchronizing; the only way to restart it was to kill the process and launch it again manually – an ordinary user would most likely have ended up rebooting the computer.
Box.net – perhaps the only competitor to Dropbox among the services that do not belong to IT-industry giants yet still hold a significant share of the market. The two had divided the market between them – Dropbox took the consumer niche while Box.net won the corporate one – but both have outgrown the old boundaries and are now encroaching on each other's territory. Until recently the desktop synchronization application was offered only to paying Box.net subscribers, but the service is now open to everyone. The standard free allowance is 5 GB, yet many non-business users sign up because Box.net generously gives away extra space in promotions tied to mobile devices, so you can end up with as much as 50 GB for free – with restrictions: for example, the maximum upload size on free accounts is limited to 250 MB. Clients are available for Windows and OS X. Unfortunately, the results were not very encouraging: consistently low speed, uploads that were periodically interrupted, after which the application rescanned the entire synchronized folder, which can take considerable time if the folder is large.
All of the services gave strange results when saving Word (or other) files that were still being edited: sometimes the file was duplicated, sometimes extra files with the same name appeared, prefixed with "~$", and these files later disappeared. Overall, the test results show why many users, myself included, still use Dropbox even though its competitors offer more space. In practice, storing your most important files requires not so much space as reliability and speed, and there Dropbox still has the advantage.
Finally, a few general tips apply no matter which service you choose.
Using a dedicated server or VPS with cPanel?
Below we will see how to use a dedicated server or VPS with cPanel and, in just 5 steps, make sure it can effectively host all of your web sites. Even after the initial installation of the operating system and the configuration of the network interfaces, there are still steps to take before your cPanel dedicated server is ready.
Steps to configure Quad Core / Dual Hexa Core Dedicated Servers with cPanel
1. Complete the initial setup of the cPanel server.
This should happen when you log in for the first time. Most people only browse the options presented by default, and in many cases that is fine; however, digging a little deeper will reveal useful options that will make your work much easier over time.
Check that the hostname and the primary and secondary nameservers are correct; cPanel will try to configure most of these elements based on the host name found in your initial configuration. Then make sure you have set up a contact email address for your cPanel dedicated server, which will help you avoid problems that may arise later.
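As a quick sanity check outside of WHM, you can confirm from any machine that the hostname and nameservers actually resolve. A minimal sketch using only the Python standard library; the host names are placeholders for your own:

```python
import socket

# Placeholders: replace with your server's hostname and nameservers.
HOSTS = ["server1.example.com", "ns1.example.com", "ns2.example.com"]

for host in HOSTS:
    try:
        # gethostbyname performs a simple A-record lookup via the system resolver.
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror as exc:
        print(f"{host} does not resolve: {exc}")
```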
2. Run the Apache update.
This step is essential when setting up a cPanel dedicated server, since it determines which types of PHP scripts and commands you can use on your web pages. The process is quite easy: check or uncheck the boxes for the PHP version and modules you need, and do not close the browser window while the build is running.
3. Prepare packages for your server accounts.
Doing this is a quick and easy way to ensure that no single site consumes all the resources of the cPanel dedicated server, leaving resources available for the other sites. You can do it from the "Package Manager" in WHM.
4. Get to know the WHM Security Center.
This is done infrequently, but it will let you maintain a higher level of security on your cPanel dedicated server. Log in to WHM and go to "Security – Security Center". Dedicated servers usually ship with all of these options disabled, so knowing and reviewing them will help you decide which functions to enable or disable as you need.
5. Set up backups of your dedicated server.
Having good backups is vital for everyone, so be sure to create them on your own dedicated server. Fortunately this is very easy: log in to WHM, go to "Backup – Configure Backup", and check the Enable checkbox.
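WHM's built-in backups are usually enough, but if you also want an independent copy of selected directories, a small script can create dated archives. This is a minimal sketch using only the Python standard library; the paths are placeholders, it assumes sufficient privileges to read them, and you would normally run it from cron:

```python
import datetime
import pathlib
import tarfile

# Placeholders: adjust to the directories you actually want to protect.
SOURCES = [pathlib.Path("/home"), pathlib.Path("/etc")]
DEST = pathlib.Path("/backup")

def make_backup() -> pathlib.Path:
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = DEST / f"server-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for src in SOURCES:
            if src.exists():
                tar.add(src, arcname=src.name)  # store /home as "home", etc.
    return archive

if __name__ == "__main__":
    print(f"Wrote {make_backup()}")
```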
In the consultancy's view, the promotional hype around cloud computing has created a great deal of confusion about the concept. IT departments need to be careful to avoid this hype and should instead focus on deploying the private cloud computing platform that makes the most sense for the company.
The firm identifies five misconceptions in the IT industry:
Virtualization alone is enough
Equipping a server with a hypervisor is not private cloud computing. Virtualization is a key component, because it creates a pool of resources that can be accessed easily, but a private cloud must also provide self-service and elasticity features.
It always saves money
One of the biggest misconceptions is the belief that cloud computing will automatically save money. That may be true in many cases, but it is not a universal truth and there is no guarantee of cost savings. Deploying the automation technology needed for a private cloud can be a significant investment for many IT budgets. Organizations may gain the ability to redistribute resources more efficiently and to reduce capital spending on new hardware, but Gartner says the main benefit is increased agility and the ability to scale dynamically.
It always lives in-house
Many people picture a private cloud as something housed in the organization's own data center, behind a firewall. That is only half true, and many vendors try to sell as "private cloud" offerings without multi-tenant features, with resources dedicated to a single client and nothing shared. A private cloud is not defined by location, ownership, management, or responsibility: a vendor may, for example, outsource a client's data center operations to a secondary facility, or pool the resources of several clients and separate them using VPNs.
It is always infrastructure
Private cloud is often equated with virtual infrastructure delivered as a service, but infrastructure is only one possible platform. There are other private cloud deployments, particularly in software and in many other forms. The IaaS segment may be the fastest-growing part of the cloud computing market, but it is not necessarily the most important.
It will always stay private
Adopting a private cloud platform is the natural first step for many organizations, but as the cloud market evolves, IT departments will become more willing to use resources from public cloud providers. Service levels and safety precautions will mature, and the impact of outages and downtime will be minimized.
Gartner predicts that most private cloud deployments will end up as hybrid clouds.
This article aims to give the reader a more integrated view of what cluster computing is and how it is growing every day in the world market; hopefully it will help in understanding the importance of this technology.
What is a Cluster?
In its most basic form, a cluster is a system comprising two or more computers or systems (called nodes) that work together to execute applications or perform other tasks, so that the users who rely on them have the impression that a single system is responding – the illusion of a single resource (a virtual machine). This property is called transparency. The key goals in building these platforms are reliability, load balancing, and performance.
Types of Clusters
High Availability (HA) or failover clusters: these models are built to keep services and resources available in an uninterrupted manner through redundancy that is implicit in the system. The general idea is that if a cluster node fails, applications or services remain available on another node (failover). Clusters of this type are used for mission-critical databases, mail, file, and application servers.
Load-balancing clusters: this model distributes incoming traffic or resource requests among the machines that make up the cluster, all of which run the same programs. Every node can handle requests, and if a node fails, the requests are redistributed among the nodes that remain available. This type of solution is usually used on web server farms.
Combined HA and load balancing: as the name says, this combines the features of both types of cluster, increasing both the availability and the scalability of services and resources. This configuration is widely used for web, email, news, and FTP servers.
Distributed and parallel processing: this cluster model improves availability and performance for applications with large computational tasks. A large task is divided into smaller tasks that are distributed across the stations (nodes), much like a massively parallel supercomputer. This type is commonly associated with NASA's Beowulf project. These clusters are used for scientific computing and financial analysis – tasks that require high processing power.
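The divide-and-distribute idea behind the last model can be sketched on a single machine, using Python's multiprocessing pool as a stand-in for the cluster's nodes. A real cluster would use something like MPI across machines; the workload here is just an illustrative sum:

```python
from multiprocessing import Pool

def partial_sum(chunk: range) -> int:
    """The 'small task' each node (here, each worker process) executes."""
    return sum(chunk)

def parallel_sum(n: int, nodes: int = 4) -> int:
    """Split one large task into `nodes` smaller tasks and combine the results."""
    step = n // nodes
    chunks = [range(i * step, (i + 1) * step if i < nodes - 1 else n)
              for i in range(nodes)]
    with Pool(processes=nodes) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(10_000_000))  # same result as sum(range(10_000_000))
```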
Reasons to Use a Cluster
Clusters, or combinations of clusters, are used when content is critical or when services must be available and/or processed as quickly as possible. Internet Service Providers (ISPs) and e-commerce sites often require high availability and load balancing in a scalable way. Parallel clusters are heavily used in the film industry to render high-quality graphics and animations – Titanic, for instance, was rendered on such a platform at the Digital Domain studios. Beowulf clusters are used in science, engineering, and finance for projects in protein folding, fluid dynamics, neural networks, genetic analysis, statistics, economics, and astrophysics, among others. Researchers, organizations, and companies use clusters because they need scalability, resource management, availability, or supercomputer-class processing at an affordable price.
High-Availability (HA) or Failover Clusters
Computers have a strong tendency to stop when you least expect it, and especially when you need them most. It is rare to find an administrator who has never received an early-morning phone call with the sad news that the system is down and has to be fixed.
High availability is directly linked to our growing dependence on computers, which now play a critical role, above all in companies whose core business is precisely offering some computing service, such as e-business, news, web sites, and databases.
A high-availability cluster aims to keep the services provided by a computer system available by replicating servers and services through redundant hardware and software reconfiguration. Several computers act as one, each monitoring the others and taking over their services if any of them fails. The complexity lies in the software, which must monitor the other machines on the network, know which services are running and where, and know what to do in case of a failure. Some loss in performance or processing power is usually acceptable; the main goal is not to stop. There are exceptions, such as real-time and mission-critical systems.
Fault tolerance is achieved through hardware such as RAID arrays, redundant power supplies and boards, and fully connected network topologies that provide alternative paths when a link breaks.
Cluster Load Balancing
Load balancing among servers is part of any comprehensive solution for today's explosive growth in network and Internet use, providing increased network capacity and better performance. Consistent load balancing is now a standard part of any web hosting or e-commerce project, but we should not get stuck on the idea that it is only for providers: enterprises can adopt the same features to serve their internal business customers.
Cluster systems based on load balancing integrate their nodes so that all client requests are distributed evenly across them. The nodes do not cooperate on a single process; instead, requests are redirected independently as they arrive, according to a scheduler and an algorithm.
This type of cluster is used especially by e-commerce sites and Internet service providers that need to absorb differences in load from multiple incoming requests in real time.
Additionally, for a cluster to be scalable, it must ensure that each server is fully utilized.
When we balance load between servers that are equally able to answer a client, a problem arises: how do we decide which one responds without impairing communication? We place a balancing element between the servers and the users and configure it to do this job, so that several servers sitting behind it appear to the clients as a single address. A classic example is the Linux Virtual Server, or simply a DNS-based load balancer. The balancing element exposes one address, called the Virtual Server (VS), which clients contact and which redirects traffic to a server in the server pool. This element can be software dedicated to this management, or a network device combining hardware performance and software to forward packets and balance load in a single box.
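As an illustration of the idea (not of LVS itself), here is a minimal sketch of a "virtual server": a TCP forwarder that listens on one address and hands each incoming connection to the next backend in a pool, round-robin. The addresses are placeholders.

```python
import itertools
import socket
import threading

# Placeholders: the address clients see, and the real servers in the pool.
VIRTUAL_ADDR = ("0.0.0.0", 8080)
BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]
next_backend = itertools.cycle(BACKENDS)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    backend = next(next_backend)          # round-robin choice of real server
    upstream = socket.create_connection(backend)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def serve() -> None:
    with socket.socket() as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(VIRTUAL_ADDR)
        listener.listen()
        while True:
            client, _ = listener.accept()
            handle(client)

if __name__ == "__main__":
    serve()
```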
Some key points for a successful load-balancing implementation on powerful dedicated servers:
The balancing algorithm: when a client makes a request to the virtual address (VS), the whole process of choosing a server and returning its response must be transparent and imperceptible to the user, as if there were no balancing at all.
A method of checking whether the servers are alive and working (keepalive), which is vital so that traffic is not redirected to a server that has just failed.
A method of making sure that a given client keeps reaching the same server when that is required (session persistence).
Load balancing is more than simply redirecting client traffic to other servers. To implement it properly, the equipment must provide features such as permanent communication checks, server health verification, and redundancy. All of these are needed so the network's traffic volume can scale without the balancer becoming a bottleneck or a single point of failure.
The balancing algorithm is one of the most important factors in this context, so below we explain three basic methods (with a small code sketch of all three after the list):
Least connections: this technique redirects requests to the server with the lowest number of active requests/connections. For example, if server 1 is currently handling 50 connections and server 2 is handling 25, the next request is automatically directed to server 2, since it has fewer active connections.
Round robin: this method always directs requests to the next available server in a circular fashion. For example, incoming connections go to server 1, then server 2, then server 3, and then back to server 1.
Weighted (performance-based): this technique distributes requests according to the responsiveness of each server. For example, if server 1 is four times faster at servicing requests than server 2, the administrator assigns server 1 a proportionally larger share of the work.
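A minimal sketch of the three schedulers side by side; the server names, connection counts, and weights are illustrative only:

```python
import itertools
import random

SERVERS = ["server1", "server2", "server3"]

# Round robin: each call returns the next server in a circular sequence.
_rr = itertools.cycle(SERVERS)
def round_robin() -> str:
    return next(_rr)

# Least connections: pick the server with the fewest active connections.
active_connections = {"server1": 50, "server2": 25, "server3": 40}
def least_connections() -> str:
    return min(active_connections, key=active_connections.get)

# Weighted: a faster server receives a proportionally larger share of requests.
weights = {"server1": 4, "server2": 1, "server3": 1}
def weighted() -> str:
    return random.choices(list(weights), weights=list(weights.values()))[0]

if __name__ == "__main__":
    print(round_robin(), least_connections(), weighted())
```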
Combined High Availability and Load Balancing Cluster
This combined solution aims to deliver high performance together with the near-elimination of critical stops. It is a perfect fit for ISPs and network applications where continuity of operation is critical.
Some features of this platform:
What is a Beowulf Cluster?
One of the most remarkable technological advances of our time has been the growth in the computational performance of PCs (personal computers). The PC market is much larger than the workstation market, which drives PC prices down while performance increases, in many cases overtaking the performance of dedicated workstations.
The Beowulf cluster was conceived by its developers to meet the growing demand for processing power in various scientific areas by building powerful yet affordable computing systems. The constant evolution of processor performance has narrowed the gap between PCs and workstations, the costs of networking and processors have fallen, and open, free operating systems such as GNU/Linux have all contributed to the improvement of this philosophy of high-performance processing on clusters.
A key characteristic of a Beowulf cluster is the software it uses, which is high-performance and free for most of its tools. Examples include the GNU/Linux and FreeBSD operating systems, on which the various tools that enable parallel processing are installed, such as the PVM and MPI APIs. This makes it possible to modify the Linux operating system to give it new features that facilitate the implementation of parallel applications.
How Does Beowulf Work?
The system is divided into a controller node called the front end (I usually call it the master node), whose function is to control the cluster, monitoring it and distributing tasks; it acts as a file server and is the link between the users and the cluster. Large cluster systems can have several file servers so that no single node overwhelms the system. The other nodes are referred to as clients or back ends (I call them slave nodes) and are dedicated solely to processing the tasks sent by the controller node. They need no keyboards or monitors, possibly not even local hard drives (booting remotely), and can be accessed via remote login (telnet or ssh).
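A toy sketch of that master/slave arrangement: the front end pushes a small task to each back-end node over ssh and collects the results. The node names and the command are placeholders, and this assumes passwordless ssh keys are already set up between the nodes:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholders: the back-end (slave) nodes reachable over ssh.
NODES = ["node01", "node02", "node03"]

def run_on_node(node: str, command: str) -> str:
    """Execute one task on a back-end node and return its output."""
    result = subprocess.run(["ssh", node, command],
                            capture_output=True, text=True, check=True)
    return f"{node}: {result.stdout.strip()}"

if __name__ == "__main__":
    # The master node distributes the same toy task to every slave node.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for line in pool.map(lambda n: run_on_node(n, "hostname && uptime"), NODES):
            print(line)
```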
If you have specific requirements for hosting your website, a Linux dedicated server might be perfect for you. You can ask your web hosting provider for a customized hosting plan. This open-source operating system can be turned into a solid working server, and distributions such as Red Hat Enterprise Linux and FreeBSD are used in the same role.
Linux is one of the most powerful open-source operating systems in the world and a highly reliable option in the web hosting industry. Many hosting providers offer Linux dedicated server hosting with excellent service.
Users of unlimited dedicated servers enjoy many advantages; some of them are discussed below:
A. Low-cost hosting: Linux is an open-source operating system, so it is a very good option for people looking for a low-cost choice for their dedicated servers. There is no license cost for the Linux OS, so the hosting cost is automatically reduced.
B. Support for scripting languages: know your requirements. If your website is built on a stack such as PHP, MySQL, or Perl, Linux is the natural choice for your hosting; if your website was developed in a Windows environment (ASP.NET, for example), then you should choose a Windows OS instead, since those technologies are not native to Linux.
C. Better scalability: many users find that they get better scalability with Linux hosting than with Windows hosting.
D. Better security: users generally get better security with Linux hosting than with web hosting based on other operating systems, so Linux dedicated hosting is a good choice where security matters.
E. More reliability: Linux hosting is also regarded as more reliable than hosting based on other operating systems, which makes Linux dedicated hosting the better option for many.
With the advent of dedicated streaming servers, listening to music and watching videos online has become much more enjoyable: we no longer need to spend so much of our time waiting for buffering.
A dedicated streaming server is much more than just a hard drive: it integrates the software required to deliver media over an Internet connection, and several protocols cooperate to move the audio or video from one device to another.
How does a dedicated streaming server work?
The user visits a web page hosted on a web server and finds the files they would like to hear or watch; the web server passes the request to a streaming server, which then sends the file to the user with a little help from the web server.
Dedicated streaming servers can deliver audio and video in real time thanks to the protocols they use: RTP (Real-time Transport Protocol) and RTSP (Real-time Streaming Protocol). These protocols act as layers that look after the media traffic. While the real-time protocols direct the media streams to where they need to go, other web protocols work simultaneously in the background; together they balance the bandwidth load on the server.
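Real streaming servers speak RTP/RTSP, but the basic idea of handing the client a continuous stream instead of one big download can be sketched with nothing but the Python standard library. Here a tiny HTTP handler serves a media file in small chunks; the file path is a placeholder, and this is an illustration of streamed delivery, not an RTSP implementation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MEDIA_FILE = "sample.mp3"   # placeholder: any audio/video file on disk
CHUNK_SIZE = 64 * 1024      # send the file piece by piece, not all at once

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "audio/mpeg")
        self.end_headers()
        with open(MEDIA_FILE, "rb") as media:
            while chunk := media.read(CHUNK_SIZE):
                self.wfile.write(chunk)  # the player can start as data arrives

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), StreamHandler).serve_forever()
```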
Dedicated streaming servers are designed to handle a large volume of traffic to a website without downtime. They provide an enormous amount of network storage to avoid streaming latency, and they answer the needs of streaming video by providing massive bandwidth, close to 100% uptime, and the maximum amount of storage.
Because dedicated servers provide generous bandwidth, they are ready to serve streaming projects of any size. Dedicated streaming servers are fast, they improve load times for the end user, and they are well able to handle higher volumes of traffic and site usage.
Top dedicated servers offer high-bandwidth connections to the Internet so they can serve millions of users; dedicated streaming servers provide huge bandwidth and dedicated traffic so that streams do not stall. Without dedicated hardware that routes traffic to maximize throughput, the streams would fail or buffer. Dedicated streaming servers are supervised by a dedicated staff providing 24x7x365 monitoring, and all systems are optimized to absorb irregular loads and high bandwidth demands, so uptime is close to 100%.
SAP ERP (System Application Product – Enterprise Resource Planning) uses a three-tier client-server architecture consisting of a presentation layer (the user interface), an application layer (the application servers), and a database layer.
With this structure, it is possible to distribute tasks to additional machines as demand grows, thereby increasing the efficiency of the overall system.
The various SAP ERP hosting components assume a relational SQL database underneath, which SAP does not necessarily supply itself. SAP ERP and the SAP Web Application Server support, in addition to the in-house SAP MaxDB, common products such as DB2, Informix, Microsoft SQL Server, and Oracle.
All business processing is done in the application server by programs written in the proprietary ABAP/4 programming language (Advanced Business Application Programming), supported by tools such as the Data Dictionary, the screen generator, and query management.
The programs are executed within a specific runtime environment, known as the SAP "kernel".
The kernel is written in C and – in contrast to most ABAP programs – can be viewed but not modified by the customer. It abstracts away both the particulars of the underlying operating systems and the specific SQL syntax of the DBMS in use, so that ABAP programs can run on every platform for which the kernel is available.
The kernel contains several essential components, chief among them the different types of work processes described below.
These processes can be distributed across different machines as needed. The simplest case, with all processes running on one application server, is referred to as the "central instance". For smaller scenarios this arrangement is sufficient, and the database can often be kept on the same machine as well. Some components (especially the locking and update processes) may exist only once per system; the "workhorses" (the dialog and background processes), which perform the actual program execution, can be distributed across multiple machines. The combination of the database and the application server processes is called the ERP system.
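To make that layout concrete, here is a toy model (not SAP software): the locking and update roles exist once per system, while dialog work processes can be spread over several application servers and a dispatcher hands requests to them in turn. All names and counts are illustrative only:

```python
import itertools
from dataclasses import dataclass, field

@dataclass
class AppServer:
    name: str
    dialog_processes: int        # the "workhorses", may exist on every server
    handled: int = field(default=0)

# Singleton roles: only one locking and one update process per system.
ENQUEUE_SERVER = "appsrv1"
UPDATE_SERVER = "appsrv1"

servers = [AppServer("appsrv1", dialog_processes=4),
           AppServer("appsrv2", dialog_processes=8)]
_dispatch = itertools.cycle(servers)

def dispatch(request: str) -> str:
    """Assign a dialog request to the next application server (round robin)."""
    srv = next(_dispatch)
    srv.handled += 1
    return f"{request} -> {srv.name}"

if __name__ == "__main__":
    for r in ("create order", "display invoice", "post payment"):
        print(dispatch(r))
    print(f"locks handled by {ENQUEUE_SERVER}, updates by {UPDATE_SERVER}")
```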
Unlike most smaller ERP systems, SAP ERP allows numerous variations of its functionality to be configured purely through parameters. Adjusting these settings is known as customizing, and it must be carried out for every implementation of the system or of a module.
This multitude of parameters is held in several thousand database tables that are evaluated at run time. They are maintained through a parameter tree, which is structured similarly to the application's module structure and offers maintenance screens and functional information about the permissible entries.
Alternatively, the control tables can also be maintained directly.
If the settings available through the existing customizing functions are no longer sufficient, the standard programs provide extension points at a number of places where customer-specific program parts can be embedded into the standard processing via a defined interface (User Exits, Customer Exits, Business Add-Ins (BAdIs), enhancements).
If even these options are not enough, (almost) all standard programs can be modified according to customer specifications. Such "modifications" are recorded automatically so that responsibility can be assigned in case of errors. Because of the extra follow-up effort they create (reconciliation whenever the standard programs are updated), modifications are avoided whenever possible.
The interplay of the many parameters is only partially documented, so adapting the system to a company requires a certain amount of experience on the part of the responsible consultant.