A server, in its simplest form, is just a PC running software that coordinates some form of communication between nodes on a network. There are four requirements for a server:
Computer Hardware
Operating System (OS)
Server Software
Connections between the devices on the network
The hardware can be as simple as a standard desktop PC or as complex as a rack-mounted blade server in a large server farm. The minimum requirement for the OS is that it must support networking. This may be accomplished with Windows XP, or with a more complex OS designed specifically for networking, such as Windows Server 2008 or one of many Linux distributions. The machine must also run a software program that "serves" something. The final requirement is a connection to the devices that will use the services the server provides; this may be wired or wireless.
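To make the "server software" requirement concrete, here is a minimal sketch in Python of a one-shot TCP server that accepts a single connection and echoes one message back. The function name and the one-connection behavior are illustrative choices of mine, not something prescribed by the text; a real server would loop and handle many clients.

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one connection, echo one message back, then exit.

    Passing port=0 lets the OS pick a free port; the chosen port
    is returned so a client knows where to connect.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    chosen_port = srv.getsockname()[1]

    def _run():
        conn, _addr = srv.accept()   # wait for one client
        data = conn.recv(1024)       # read the client's message
        conn.sendall(data)           # "serve" it back (echo)
        conn.close()
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return chosen_port
```

A client on the same network (here, the same machine) can then connect to that port, send bytes, and receive the echoed reply, which is exactly the "coordinating communication between nodes" role described above.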
Many people mistakenly believe that a server is no different from a typical desktop computer. This couldn't be further from the truth. While almost any computer that meets the minimum hardware requirements can run a server operating system, that alone does not make a desktop computer a true server. Even if the desktop computer has processor speed, memory, and storage capacity comparable to a server's, it still isn't a replacement for a real server: the technologies behind them are engineered for different purposes.
A desktop computer typically runs a user-friendly operating system and desktop applications to facilitate desktop-oriented tasks. In contrast, a server manages all network resources. Servers are often dedicated, meaning they perform no tasks besides server tasks. Because a server is engineered to manage, store, send, and process data 24 hours a day, it has to be more reliable than a desktop computer, and it offers a variety of features and hardware not typically found in the average desktop computer.
Server
A server is a software service running on a dedicated computer, and the service it provides can be consumed by other computers on the network. Sometimes the physical computer that runs this service is also referred to as the server. Servers usually provide one dedicated function: web servers serve web pages, print servers provide printing services, and database servers provide database functionality, including the storage and management of data. Even though a personal computer or a laptop can work as a server, a dedicated server contains special features that allow it to satisfy incoming requests efficiently. Dedicated servers therefore normally include faster CPUs, large amounts of high-performance RAM (Random Access Memory), and large storage capacity such as multiple hard drives. Furthermore, servers use server-oriented operating systems (OSes) that provide features suited to server environments: the GUI is optional, and they offer advanced backup facilities and tight system security.
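The "web server serving web pages" example can be sketched with Python's standard http.server module. The handler class, function name, and page contents below are my own illustrative choices; production web servers (Apache, nginx, etc.) are far more capable, but the request/response shape is the same.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Serve one static HTML page to every GET request."""

    def do_GET(self):
        body = b"<h1>Hello from a tiny web server</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request console logging

def start_server(port=0):
    """Start the server on a background thread; port=0 lets the OS choose."""
    server = HTTPServer(("127.0.0.1", port), HelloHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_port holds the assigned port
```

Any browser (or HTTP client) pointed at the returned port receives the page, which is the web-server role in miniature: a dedicated process satisfying incoming requests.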
Desktop
A desktop is a computer intended for personal use, typically kept in a single place. Strictly speaking, "desktop" refers to a computer that lies horizontally on the desk, unlike a tower. Early computers were very large and took up an entire room; it was only in the 1970s that the first machines small enough to sit on a desk arrived. The OSes most widely used on desktops today are Windows, Mac OS X, and Linux. While Windows and Linux can run on almost any desktop, Mac OS X is restricted to Apple hardware.
From Physical Servers to Virtualization
The mass migration to the cloud was largely initiated by the inception of virtualization, which made moving to the cloud far more achievable, according to DataCenter Knowledge. Essentially, we began with physical servers, which required racking and stacking the physical box and deploying an operating system onto it, after which we would layer on specific application software to perform the desired task on that system. Then came virtualization: using those same servers, but rather than installing a single operating system and running a single workload on one box, we installed a hypervisor and set it up to support multiple virtual machines, or virtualized servers, that could run many different workloads at the same time on that one physical box. This enabled much better capacity utilization and provided a much easier way to instantiate new workloads.
While on-premise data centers are still integral to the IT strategies of many enterprises, the majority of businesses are beginning to migrate systems infrastructure from physical data centers to the cloud.
94% of IT professionals say that cloud and hybrid IT are among the top five most important technologies in their technology strategy (SolarWinds IT Trends Report 2018).
95% of IT professionals migrated some part of their infrastructure to the cloud between 2016 and 2017.
Organizations migrated applications, storage, and databases to the cloud more than any other area of IT between 2016 and 2017 (SolarWinds IT Trends Report 2017).
Keeping IT entirely on premise introduces the age-old challenge of resource, cost, and management restrictions, which the scalability of cloud platforms mitigates.
…and Cloud to Containers
Container technology is a natural next step in virtualization. Containers are designed to provide a much lighter-weight compute environment on which to run the parts of an application. They are much faster to start up than a virtual machine, don't require a full-blown operating system or its maintenance, and are portable across platforms (on premise, cloud, etc.).
Containers have generated a significant amount of hype in the past few years, and today, actual usage is on the rise. IT organizations are making investments in containers to solve the challenges commonly associated with cloud computing, both on and off premise, as well as to enable innovation.
44% of IT professionals rank containers and container orchestration as the most important technology priority today.
38% of IT professionals rank containers and container orchestration as the most important technology priority three to five years from now.
Containers effectively give you functionality similar to a virtual machine's, but in a much lighter-weight fashion. The big difference between virtual machines and containers is that a container doesn't have to carry the baggage of an entire operating system the way a virtual machine does. Rather than burdening each container with that overhead, containers run on a container manager and share the host operating system's resources; the manager itself is designed to run on a hardware box or virtual machine and orchestrates container stand-up and tear-down.
As far as orchestration solutions are concerned, Kubernetes and Docker Swarm are the most popular choices. Each container can start up quickly, and the software that runs in it is usually designed to do one task, or multiple threads of one task, ultimately resulting in a highly scalable implementation. This allows you to split your app into component parts that run in different containers, with the various independent parts communicating with each other from container to container. This concept, coupled with the agile development methodology, has become the new norm for software development, and it's very much complementary to the cloud.
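The idea of splitting an app into single-task parts that talk to each other over the network can be sketched in Python: one "service" that performs exactly one task over HTTP, plus a client helper that calls it. In a real containerized deployment each part would run in its own container and the URL would name the peer container's service (for example, a service name resolved by the orchestrator); all names here are hypothetical.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ReverseHandler(BaseHTTPRequestHandler):
    """A one-task service: reverse whatever word appears in the path."""

    def do_GET(self):
        word = self.path.lstrip("/")
        body = word[::-1].encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_service(port=0):
    """Run the single-task service on a background thread."""
    srv = HTTPServer(("127.0.0.1", port), ReverseHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

def call_service(port, word):
    """Another component of the app calling the service over HTTP.

    In a containerized deployment this URL would name the other
    container's service, e.g. http://reverse-svc:8080/<word>.
    """
    with urlopen(f"http://127.0.0.1:{port}/{word}") as resp:
        return resp.read().decode()
```

Because each part does one task and communicates only over the network, it can be scaled independently: an orchestrator such as Kubernetes can run many replicas of the service behind one name without the caller changing.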
It's worth noting that while containers offer a number of benefits, using them can mean losing the ability to provide discrete networking. Running multiple containers under one container manager may result in less secure networking between container workloads. One solution is offered by VMware, whose integration with its NSX product provides a networking and security layer around each container. Whether that's necessary depends on the workload and on how much control is desired over data flows between apps, but it's available for those who want greater control of networking and security with containers.
What About Serverless?
Then there's serverless technology, often mentioned in discussions of containers. Of course, it isn't actually serverless; rather, the consumer of a public cloud environment isn't responsible for managing the servers (hence, serverless). As a developer or operator, you can focus exclusively on your application, while the cloud provider gives you an environment in which to deploy it without concern for virtual machines, operating systems, or container managers. You focus on your app while your chosen cloud provider supplies a container runtime that they maintain for you.
In many cases it's more economical to run serverless technology: while you're paying for some level of compute, you're not paying for individual instances or virtual machines, which is how cloud providers traditionally charge customers for usage. Serverless environments require less overhead and management. However, if you need more control over the OS you're running or the capabilities available in the container, serverless may not be a viable option for you. Examples of serverless cloud solutions are Google's App Engine and Amazon's Elastic Beanstalk. These are also referred to as Platform as a Service (PaaS); on-premise examples include Pivotal Cloud Foundry and Red Hat OpenShift. While the on-premise PaaS offerings aren't really serverless, they offer similar functionality: the software developer doesn't need to be concerned with operating systems and patching, and can instead focus all of their energy on designing and building their software solution or application.
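As a sketch of the division of labor PaaS implies, here is a minimal WSGI application in Python, the kind of callable that Python-based PaaS platforms commonly run: the developer supplies only this function, and the platform supplies the HTTP server, runtime, scaling, and patching. The greeting text and function layout are illustrative assumptions, not taken from the original text.

```python
def application(environ, start_response):
    """A minimal WSGI app: the entire developer-owned surface.

    The platform calls this once per request, passing the request
    environment and a callable for starting the HTTP response.
    """
    body = b"Hello from a PaaS-style app"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # WSGI apps return an iterable of byte strings
```

Everything below this callable, including the operating system, the interpreter, and the web server fronting it, belongs to the platform, which is exactly the separation the serverless/PaaS discussion above describes.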