
What You Need to Know About Scaling HANA Systems

When organisations begin implementing SAP HANA systems, there are several traps that people fall into without intending to. The complexity of the HANA architecture has increased dramatically, and ensuring you have the right systems to support it can be something of a minefield.

Through our extensive experience implementing and maintaining these systems, we have developed some global best practices, which I briefly detail below. They show just how complex this process can be, and why many organisations are looking at HANA as a managed service rather than implementing it on-premises.

SAP systems were traditionally built on a disk-based database architecture. This architecture required data to be transferred from disk into memory and back again, making data access a cumbersome process; in some cases a demanding query could take hours. This problem led SAP to make significant changes to both the application and the database layer with the development of SAP HANA. One of HANA's key features is that data is held in memory, which enables much faster processing and query response times.

SAP systems built on classic databases did not put much pressure on resources: clients could simply add new disk space, and as long as the load on the system did not increase, there was no significant increase in processor or memory needs. This meant that companies that invested based on their expected capacity requirements for 3 to 5 years rarely faced bottlenecks in SAP or their databases.

With HANA's in-memory architecture, entire databases are designed to run in memory. Growth in the size of the database is no longer absorbed by adding disk space alone; it now also requires more memory. Memory is neither as easily nor as cheaply available as disk space, so planning the right capacity for scaling HANA infrastructures on a 3 to 5-year view can become considerably more complex.
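
To make the difference concrete, here is a minimal sizing sketch in Python. The compression factor and working-memory multiplier are illustrative assumptions only, not SAP sizing guidance; real sizing should come from SAP's own sizing tools and reports.

    def hana_memory_estimate_gb(source_data_gb, compression_factor=4.0, workspace_multiplier=2.0):
        """Very rough memory estimate: compressed column store plus roughly the
        same amount again of working memory for query processing.
        The compression factor and multiplier are illustrative assumptions."""
        column_store_gb = source_data_gb / compression_factor
        return column_store_gb * workspace_multiplier

    # Illustrative example: ~6 TB of source data comes out at roughly 3 TB of RAM,
    # which is already close to the memory ceiling of a typical 2-socket server.
    print(round(hana_memory_estimate_gb(6144)))  # 3072 (GB)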

Production HANA servers need to run either on SAP-certified ready-to-use hardware (appliances) or on servers deployed under the Tailored Data Center Integration (TDI) model. In the TDI model, the client or a vendor-certified business partner is responsible for ensuring that the operating system and database comply with the requirements for running the SAP systems. You can find the list of certified servers at the end of this post. The amount of memory supported by these certified servers depends on the processors used; in fact, the supported memory may differ depending on the SAP applications intended for the server (BW, SoH, S/4HANA) even with the same processors. Most 2-socket servers will only support 3 TB of memory, and where more memory is needed we recommend that customers look at a 4-socket server.

When investing in hardware, customers need to take the processor/memory ratio limitations on the server side into account and select servers according to their maximum memory support. To prevent unnecessary future costs, clients should also consider their capacity requirements on a 3 to 5-year view, estimating growth over that period. Even so, the SAP system may grow faster than estimated at the start of the project, and customers may have to add resources earlier than planned.
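
As a simple illustration of that 3 to 5-year view, a back-of-the-envelope projection like the sketch below (the starting size and growth rate are hypothetical) shows roughly when a system will outgrow the memory ceiling of a given server class.

    def years_until_memory_exceeded(current_tb, growth_tb_per_year, server_max_tb):
        """Count the years until projected HANA memory demand no longer fits
        in the chosen server's maximum supported memory."""
        size_tb = current_tb
        years = 0
        while size_tb <= server_max_tb:
            size_tb += growth_tb_per_year
            years += 1
        return years

    # Hypothetical figures: a 2 TB system growing by 0.5 TB per year on a
    # server that supports at most 3 TB of memory is outgrown after year 3.
    print(years_until_memory_exceeded(2.0, 0.5, 3.0))  # 3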

The reasons that typically drive an early increase in resources include incorrect sizing calculations and the unplanned addition of new or changed business processes, often inspired by the value gained from the HANA system itself. In these cases, memory capacity may need to be increased, which in turn can require other resources within the server to be expanded. There may even be cases where the memory requirement outstrips the available slots, necessitating a new server with a larger capacity.


This is why companies have started to select service providers such as GlassHouse Cloud that specialise in SAP and HANA and can provide HANA Enterprise Cloud (HEC) or hybrid cloud infrastructure instead of on-premises data centres for their HANA projects. I will write about the differences between the on-premises and cloud options for SAP projects in detail in a future post.

HANA databases support SAP environments ranging from very small memory configurations of 128 GB to environments that might need 32 TB or even more, and since implementing such high capacities can be difficult and costly, as mentioned above, SAP supports both scale-up and scale-out architectures. These architectures can be summarised as follows:

 Scale-Up

The HANA database operates on a single physical server, and the database capacity is limited by the maximum processor/memory configuration the physical server supports and the maximum capacity certified by SAP. Since some vendors do not offer systems of 8 TB or more, this architecture is less frequently used for large-scale systems.

 Scale-Out

This is the architecture in which HANA database services are implemented on multiple servers. For example, five 3TB servers can be used instead of a single 15TB server to obtain the same HANA capacity.
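
The arithmetic behind that example is simply the number of nodes of a given size needed to reach the target capacity; a minimal sketch using the figures above (standby nodes and per-node overhead are ignored for simplicity):

    import math

    def nodes_required(target_capacity_tb, node_capacity_tb):
        """Number of scale-out worker nodes needed to reach the target HANA
        capacity; standby nodes and per-node overhead are not considered."""
        return math.ceil(target_capacity_tb / node_capacity_tb)

    print(nodes_required(15, 3))  # 5 -> five 3 TB nodes for a 15 TB requirement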

Though the scale-out approach seems more attractive at first glance, it can mean more work during installation and ongoing management. Additionally, until recently this architecture only supported specific products such as SAP BW and SAP CAR; however, scale-out support for S/4HANA was announced a short while ago. I will share another post detailing the design and installation steps for running S/4HANA systems on a scale-out architecture.

Which SAP products should a scale-out architecture be considered for?

HANA databases in BW and CAR systems grow very quickly, and when the capacity requirement at the 3 to 5-year mark cannot be estimated reliably, customers should consider scale-out. For a system that grows by 3 TB each year and is estimated to reach 15 TB in 5 years, customers may prefer to start with four 1 TB servers and expand the capacity with new 1 TB servers as needed, rather than purchasing a single 15 TB server in the first year, reducing the initial investment cost.
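
To illustrate how that incremental approach spreads the investment over time, here is a small sketch using the figures from the example above (hardware prices and standby nodes are not modelled):

    def scale_out_plan(initial_nodes, node_tb, growth_tb_per_year, years):
        """Show how many fixed-size nodes are needed each year to keep up
        with projected database growth, assuming nodes are only ever added."""
        nodes = initial_nodes
        data_tb = 0.0
        for year in range(1, years + 1):
            data_tb += growth_tb_per_year
            while nodes * node_tb < data_tb:
                nodes += 1
            print(f"Year {year}: ~{data_tb:.0f} TB of data, {nodes} x {node_tb} TB nodes")

    # Figures from the example above: 3 TB growth per year, starting with four 1 TB nodes.
    scale_out_plan(initial_nodes=4, node_tb=1, growth_tb_per_year=3, years=5)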

As scale-out support for S/4HANA systems is new, there are some limitations with this architecture. Currently at most 4 active nodes are supported, and each node must be at least an 8-processor server with 6 TB of memory. At this point the number of companies that actually need a scale-out architecture is small, as S/4HANA systems with data sets this large have not yet become common; volumes like that are only now being seen in some large companies. Considering that these limitations will be removed and databases will keep growing, I expect S/4HANA scale-out projects to be much more prevalent in a few years.

Links:

Certified and Supported SAP HANA Hardware
SAP S/4HANA – Multi-Node Support

Rudolph Visagie
GlassHouse South Africa, Solutions Architect
