What data centers are, what they are for, and how they are built

If, hypothetically, you decided to build a data center from scratch, you would have to design a structure capable of hosting all the IT infrastructure needed to run applications, manage data and deliver digital services continuously, securely and efficiently. A data center is, in essence, a building that houses servers, storage systems, network equipment and the supporting infrastructure that makes the whole system work. Its primary function is to centralize the IT resources of a company or a cloud provider, making it possible to reduce management costs, increase operational resilience and guarantee continuity in the event of failures or emergencies.

Internally, computation is entrusted to servers of various types, from rack servers to blades and mainframes, while storage can rely on local solutions (DAS), network-attached ones (NAS) or more complex arrangements such as SANs. The network, made up of switches, routers and high-speed cabling, ensures communication between servers and users, while redundant power systems, such as uninterruptible power supplies (UPS) and backup generators, together with cooling systems, keep operations running over time.

Managing a data center also requires advanced physical and cyber security strategies, environmental controls on temperature, humidity and static electricity, and centralized monitoring tools. The design must comply with redundancy and reliability standards defined by international bodies, which establish increasing levels of fault tolerance and of the ability to carry out maintenance without interrupting service. Let's delve into all of this by taking a closer look at how a data center is put together.

Inside a data center
  • 1. The essential components of a data center
    • 1.1 Servers
    • 1.2 Storage and network infrastructures
    • 1.3 Power and cooling
    • 1.4 Virtualization
  • 2. The standards to be respected when designing a data center

The essential components of a data center

To build a data center from scratch, the first thing to consider is the physical space and the arrangement of the many pieces of equipment it must host. Let's delve into this aspect by looking a little more closely at how a data center is put together.

Servers

Servers, the powerful computers that are the computational heart of a data center, come in several types.

  • Rack servers, which are wide and flat (like pizza boxes), are stacked on top of each other in racks, that is, modular structures that organize and protect electronic and IT components. Each of these servers is equipped with its own network ports, power supply and ventilation.
  • Blade servers, on the other hand, allow you to concentrate multiple units in the same chassis, saving even more space than rack servers and reducing energy consumption.
  • For extremely intensive workloads, mainframes offer superior processing power: a single mainframe can process billions of calculations and transactions in real time, handling the workload of an entire room of rack or blade servers.

Storage and network infrastructures

Storage, i.e. the archiving infrastructure, can be local to each server via DAS (Direct-Attached Storage), shared over the network via NAS (Network-Attached Storage), which allows shared access to files, or organized as a SAN (Storage Area Network), a dedicated block-storage network capable of managing large amounts of data centrally. The internal network infrastructure, built from a large number of network devices (cables, switches, routers and firewalls), connects every server, storage device and piece of support equipment. It must guarantee fast transfers both within the data center, between servers and storage (so-called "east/west traffic"), and towards end users or other company offices, i.e. between server and client (known as "north/south traffic").

For hyperscale installations, the required bandwidth can range from tens of gigabits up to several terabits per second. For the record, hyperscale data centers are larger than "traditional" data centers: they can occupy thousands of square meters and host something like 5,000 servers. The largest cloud data centers are operated by the major cloud service providers, such as the well-known AWS (Amazon Web Services), Google Cloud Platform, IBM Cloud and Microsoft Azure.

Power and cooling

A critical element in any self-respecting data center is power and cooling management. A data center must always be operational, and is therefore equipped with dual power feeds, uninterruptible power supplies (UPS) to ride out surges and short interruptions, and backup generators for prolonged blackouts. Redundancy is fundamental: duplicated or multiple components, RAID storage systems, backup cooling systems and, for large organizations, data centers in separate geographical regions allow operations to continue even in the event of failures or natural disasters affecting a given area.

Cooling systems keep server temperatures within optimal ranges, either with air-based CRAC (Computer Room Air Conditioning) units or with liquid cooling systems, which are increasingly widespread thanks to their energy efficiency. Humidity and static electricity are monitored to prevent damage, while fire-suppression and physical security systems protect critical assets.
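To see why duplicated components matter so much, here is a back-of-the-envelope model of how redundancy raises availability. It assumes component failures are independent, and all the numbers are purely illustrative:

```python
def combined_availability(component_availability: float, copies: int) -> float:
    """The system is down only if ALL redundant copies fail at the same time
    (assuming independent failures)."""
    p_all_fail = (1.0 - component_availability) ** copies
    return 1.0 - p_all_fail

# Illustrative figures: a single power feed that is up 99% of the time,
# versus two independent feeds.
single = combined_availability(0.99, 1)
dual = combined_availability(0.99, 2)

print(f"single feed: {single:.4%}")  # 99.0000%
print(f"dual feed:   {dual:.4%}")    # 99.9900%
```

Under these assumptions, doubling a 99%-available component cuts expected downtime by a factor of one hundred, which is why dual feeds, duplicate UPS units and backup generators are standard practice.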

Virtualization

Architecturally, modern data centers leverage virtualization, which separates software from hardware and allows CPU, storage and networking to be pooled into flexible, programmable resources. This approach makes it possible to build software-defined infrastructure (SDI) or an entirely software-defined data center (SDDC), optimizing costs and performance, deploying services rapidly and scaling the infrastructure without the need to physically intervene on the hardware. Cloud models, both private and public, offer infrastructure, platform or software as a service (IaaS, PaaS, SaaS), while edge data centers (EDC) bring applications closer to users, reducing latency and improving the performance of AI, big data and streaming content. The combination of virtualization, SDI and intelligent management makes it possible to use available resources more effectively, deploy applications quickly, scale as needed and support cloud-native application development.
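The "pool and program" idea behind virtualization can be illustrated with a toy scheduler that places virtual machines onto physical hosts without anyone touching the hardware. Everything here (host names, capacities, the simple first-fit policy) is a hypothetical sketch, not how any specific SDDC product works:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A physical server seen as a pool of free resources."""
    name: str
    cpu_free: int   # vCPUs still available
    ram_free: int   # GiB still available
    vms: list = field(default_factory=list)

def place_vm(hosts: list, vm_name: str, cpu: int, ram: int):
    """First-fit placement: assign the VM to the first host with capacity."""
    for host in hosts:
        if host.cpu_free >= cpu and host.ram_free >= ram:
            host.cpu_free -= cpu
            host.ram_free -= ram
            host.vms.append(vm_name)
            return host.name
    return None  # no capacity: a real SDDC would scale out or queue the request

# Hypothetical inventory: two identical rack servers.
hosts = [Host("rack1-srv1", 32, 128), Host("rack1-srv2", 32, 128)]

print(place_vm(hosts, "web-01", 8, 16))   # lands on rack1-srv1
print(place_vm(hosts, "db-01", 28, 64))   # rack1-srv1 lacks CPU, so rack1-srv2
```

The point of the sketch is that capacity becomes a software decision: adding, moving or resizing workloads is an API call against the resource pool rather than a trip to the server room.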

The standards to be respected when designing a data center

The design of a data center must comply with international standards of redundancy and reliability. The Uptime Institute, for example, defines these standards in four tiers:

  1. Tier I: a Tier I data center provides the basic capacity components, such as uninterruptible power supplies (UPS) and 24/7 cooling, needed to support IT operations beyond an ordinary office environment. Tier I data centers have a maximum annual downtime of about 29 hours.
  2. Tier II: in addition to what Tier I already provides, this level adds redundant power and cooling subsystems, such as generators and energy storage devices, offering greater protection against interruptions. Maximum annual downtime drops to 22 hours.
  3. Tier III: as you might expect, moving up to this level brings even greater efficiency and uptime, with annual downtime falling to 1.6 hours. This is guaranteed by a larger number of redundant components; in addition, Tier III data centers do not need to be shut down for maintenance or component replacement.
  4. Tier IV: with only about 26 minutes of downtime per year, this level offers near-continuous data center uptime. Tier IV provides total fault tolerance through multiple redundant capacity components that are independent and physically isolated, so that the failure of one piece of equipment has no impact on IT operations.
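As a quick sanity check, the downtime figures above follow directly from the availability percentages commonly associated with each tier (99.671%, 99.741%, 99.982% and 99.995%; these percentages are not stated in the text above, they are the widely cited Uptime Institute targets). A minimal Python sketch of the arithmetic:

```python
# Convert tier availability targets into expected annual downtime.
HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours, accounting for leap years

# Commonly cited Uptime Institute availability targets, in percent.
tiers = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, availability in tiers.items():
    downtime_h = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_h:.1f} h/year (~{downtime_h * 60:.0f} min)")
```

Running this reproduces the figures quoted in the list: roughly 29 hours for Tier I, 22 hours for Tier II, 1.6 hours for Tier III and about 26 minutes for Tier IV.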