What is Exadata?

The Birth - Project SAGE

Exadata had its origins in Project SAGE, which aimed to remove the major bottleneck of databases, namely I/O. SAGE stood for Storage Appliance for Grid Environments, and it was originally envisaged as an intelligent storage system that could be connected to different database servers in order to accelerate performance. Quite quickly it was understood that to get the requisite acceleration, not only did the storage layer need standardisation, but the storage-to-database connectivity and the RAC interconnect needed standardisation too. So it had to be an appliance.

The CPU and memory of modern computer systems run at gigahertz speeds. Disks, disk connectivity, and even flash have had a tough time keeping pace. In a traditional database system, data has to be brought into main memory in order to be winnowed and selected on. As all database vendors understood, this is wasteful, especially when the data selected for is much smaller than the total data set.
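As a rough back-of-envelope sketch of how wasteful this can be (the table size, selectivity and link speed below are made-up illustrative numbers, not benchmark figures):

# Back-of-envelope data-movement arithmetic; every figure here is an illustrative assumption.

TABLE_SIZE_GB = 10_000      # a hypothetical 10 TB table
SELECTIVITY = 0.01          # assume only 1% of the data satisfies the query's filter
LINK_GB_PER_SEC = 4.0       # assumed effective throughput of the storage link

# Traditional architecture: every block crosses the wire and is filtered in database memory.
traditional_gb = TABLE_SIZE_GB
# Storage-side filtering: only the qualifying data is returned to the database nodes.
offloaded_gb = TABLE_SIZE_GB * SELECTIVITY

print(f"traditional: {traditional_gb:8,.0f} GB moved, ~{traditional_gb / LINK_GB_PER_SEC:,.0f} s on the wire")
print(f"offloaded:   {offloaded_gb:8,.0f} GB moved, ~{offloaded_gb / LINK_GB_PER_SEC:,.0f} s on the wire")

Under those assumed numbers, the same query moves a hundred times less data when the filtering happens in the storage layer.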



The solution was simple: reduce the data that needs to be pulled from storage.


But how?



All database appliances work on this principle by storing information about the data in the storage layer. This immediately requires a different kind of storage. In systems like Netezza this is done using custom hardware, FPGAs and S-Blades, which leads to costly storage. In Exadata the idea was to go with commodity Intel servers and build the smarts into these storage servers running standardised, hardened Linux configurations. The main memory of these servers stores metadata about the data kept on the disks/flash of the storage servers, which helps drastically reduce the unneeded data that has to be returned to the database nodes.
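That metadata behaves much like a storage index: for each region of data the storage server keeps, for instance, the minimum and maximum value of a column, so whole regions can be skipped when a query's filter cannot possibly match them. Here is a minimal sketch of the idea; the region size, names and data layout are illustrative assumptions, not Exadata internals.

# Minimal sketch of region-level min/max metadata, a storage-index style structure.
# Region size, names and data layout are illustrative assumptions, not Exadata internals.

REGION_ROWS = 1000  # number of rows summarised per storage region (made up)

def build_region_index(rows, column):
    """Record the min and max of `column` for each fixed-size region of rows."""
    index = []
    for start in range(0, len(rows), REGION_ROWS):
        values = [r[column] for r in rows[start:start + REGION_ROWS]]
        index.append({"start": start, "min": min(values), "max": max(values)})
    return index

def filtered_scan(rows, index, column, low, high):
    """Return rows with low <= column <= high, skipping regions the metadata rules out."""
    matches, regions_read = [], 0
    for entry in index:
        # If no value in this region can fall inside [low, high], skip it entirely;
        # that data never has to leave the storage layer.
        if entry["max"] < low or entry["min"] > high:
            continue
        regions_read += 1
        for row in rows[entry["start"]:entry["start"] + REGION_ROWS]:
            if low <= row[column] <= high:
                matches.append(row)
    return matches, regions_read

On data that is even roughly ordered or clustered on the filtered column, most regions are eliminated from the metadata alone, which is exactly the effect described above: far less data has to travel back to the database nodes.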

The use of commodity hardware to accomplish this meant that Oracle not only gained the advantages of Moore's law, it also got the ability to quickly evolve its storage software, which was Java based and hence easily maintained compared to the competition's. In fact some of the major innovations in Exadata come in software, like Hybrid Columnar Compression, whose format could evolve on the fly from one incompatible with DBIM to the same format as DBIM, leading to the extension of Database In-Memory to the flash cache of the Exadata storage servers.


Exadata originally came with the concept of golden ratios of database node CPU cores to storage cores and to storage capacity (disk TB and flash). Over time, however, it became clear that these golden ratios were too rigid for all use cases. Though the concepts of an Eighth Rack, Quarter Rack, Half Rack and Full Rack still remain to this day, you are also free to grow a base configuration in increments of a storage server or a database node at a time.


All such configurations are known as elastic configurations, and though elastic configuration was introduced with the X5 generation, it is backward compatible with other configurations via the OECA configuration assistant.


Explanation circa 2013

Exadata really was killing it on price/performance comparisons with IBM. The big differentiator was storage cost. Intelligent storage was not only cheaper, due to the use of commodity components rather than custom chips, it also allowed standard software components like OC4J Java containers, and later WebLogic Java servers, to run in the storage servers and help host the storage smarts. So now the storage layer could evolve on the fly rather than being stuck in firmware.