🚀 Oracle Exadata Overview
Oracle Exadata is a pre-built, engineered system (hardware and software designed together) that runs Oracle databases much faster and more efficiently than a conventional server.
🧩 Exadata Main Components
🖥️ Database Servers (Compute Nodes)
Run Oracle Database software
Handle query execution and database logic
📦 Storage Servers (Cell Servers)
✅ Smart storage servers with their own CPUs and memory
✅ Process queries directly inside the storage
✅ Filter data before sending results to DB servers
✅ Reduce workload on the database layer
✅ Provide extremely fast I/O
✅ This is called Smart Scan, one of Exadata’s biggest advantages.
⚡ High-Speed Network
RoCE/InfiniBand fabric ensures ultra-fast communication between servers and storage.
InfiniBand (IB) was replaced by RoCE (RDMA over Converged Ethernet) starting with Exadata X8M.
The InfiniBand Subnet Manager (SM) is not used on the RoCE fabric.
RoCE Fabric Manager (RFM) => management and monitoring software that controls the RoCE network fabric used in modern Exadata (X8M and later) systems
🌐 Management/Network Switches
Provide connectivity and manage traffic across the Exadata environment.
🛠️ Exadata Software
Specialized software stack that integrates and optimizes all components.
🗄️ Exadata Rack Types
Eighth Rack, Quarter Rack, Half Rack, and Full Rack; newer generations also support elastic configurations (database and storage servers added individually).
🔧 Notes
olsnodes -n → List cluster nodes
asmcmd lsdg → Display ASM disk groups
cellcli -e "list flashcache" → Show flash cache details
./exachk → Run complete Exadata health check
Use AWR Reports to check Cell Offloading & Smart Scan Efficiency
🛠️ Exadata Tools
🔹 CellCLI
Runs locally on a storage cell to manage and monitor storage components:
Grid disks
Cell disks
Flash cache
Physical disks (NVMe, SSDs)
Cell services
Metrics and alerts
Examples:
cellcli -e "list physicaldisk"
cellcli -e "list cell detail"
🔹 DCLI (Distributed Command Line Interface)
Python-based tool for cluster-wide management:
Parallel Execution: Run commands across multiple servers simultaneously
Centralized Management: Manage the entire cluster from one location
SSH Integration: Requires passwordless SSH connectivity
Automation: Ideal for routine tasks like installs or checks
Targeting: Specify servers via node lists or group files
Examples:
dcli -g all_cells -l root "cellcli -e list cell"
dcli -l oracle -g /tmp/a.txt "ps -ef | grep pmon"
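The fan-out behavior described above can be sketched as a toy Python model. This is illustrative only, not dcli's actual implementation: real dcli connects to each node over passwordless SSH, while this sketch runs a local command per "node" in parallel and prefixes each output line with the node name, as dcli does.

```python
# Toy sketch of dcli-style parallel fan-out (illustrative; real dcli runs
# the command over passwordless SSH on each target node).
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_node(node: str, command: str) -> str:
    # Real dcli would execute: ssh <node> '<command>'
    out = subprocess.run(["sh", "-c", command], capture_output=True, text=True)
    # dcli prefixes every output line with the node name
    return "\n".join(f"{node}: {line}" for line in out.stdout.splitlines())

def dcli_like(nodes: list[str], command: str) -> list[str]:
    # Parallel execution: one worker per target node
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return list(pool.map(lambda n: run_on_node(n, command), nodes))

if __name__ == "__main__":
    for result in dcli_like(["cel01", "cel02"], "echo online"):
        print(result)
```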
🔹 ExaCLI
Remote administration tool for Exadata:
Manage and monitor storage cells and DB nodes without SSH access
Essential for Exadata Cloud and Exadata Cloud@Customer
Provides visibility into storage-level metrics and objects
🧑💻 Key Software Features
🔹 Smart Scan
Controlled by CELL_OFFLOAD_PROCESSING (default: TRUE)
Offloads SQL processing to storage servers
Activated during full table scans or index fast full scans
Look for "TABLE ACCESS STORAGE FULL" or "STORAGE INDEX" in SQL plans
Limitation: Doesn’t work if data is on NFS, SAN, or local disk
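The offload idea above can be illustrated with a minimal Python sketch (a toy model, not Exadata code): with Smart Scan, the storage tier applies the WHERE predicate and column projection itself, so the database tier receives only matching rows and requested columns instead of entire blocks.

```python
# Toy model of Smart Scan offload (illustrative only).
def storage_scan_without_offload(rows):
    return rows  # every row is shipped to the DB layer, which filters itself

def storage_scan_with_offload(rows, predicate, columns):
    # Filtering (predicate) and projection (columns) happen inside storage,
    # so only the qualifying slice of data crosses the network.
    return [{c: r[c] for c in columns} for r in rows if predicate(r)]

rows = [{"id": i, "amount": i * 10, "pad": "x" * 100} for i in range(1000)]
offloaded = storage_scan_with_offload(rows, lambda r: r["amount"] > 9900, ["id"])
print(len(rows), "rows on disk ->", len(offloaded), "rows sent to DB layer")
```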
⚡ Flash Cache Modes
Exadata flash can operate in two modes for handling writes:
Write-through → Write to disk first (safe, slower). Default in older Exadata.
Write-back → Write to flash first (fast, modern). Default in Exadata X5 and later.
Commands:
cellcli -e "LIST CELL DETAIL" | grep flashCacheMode
cellcli -e "ALTER CELL flashCacheMode=WriteBack"
cellcli -e "ALTER CELL flashCacheMode=WriteThrough"
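The difference between the two modes can be sketched with a toy Python model (illustrative only, not how the cell software works): write-through acknowledges a write only after it reaches disk, while write-back acknowledges once it lands in flash and destages to disk later.

```python
# Toy model of flash cache write modes (illustrative only).
class Cell:
    def __init__(self, mode):
        self.mode = mode              # "WriteThrough" or "WriteBack"
        self.flash, self.disk = {}, {}
        self.dirty = set()            # blocks in flash not yet on disk

    def write(self, block, data):
        if self.mode == "WriteThrough":
            self.disk[block] = data   # slow path: disk write first
            self.flash[block] = data  # then populate the cache
        else:
            self.flash[block] = data  # fast path: acknowledged from flash
            self.dirty.add(block)     # destaged to disk in the background

    def destage(self):
        for block in list(self.dirty):
            self.disk[block] = self.flash[block]
            self.dirty.discard(block)

wb = Cell("WriteBack")
wb.write(1, "redo")
print(1 in wb.disk)   # the write was acknowledged before reaching disk
wb.destage()
print(1 in wb.disk)   # destaging later makes it durable on disk
```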
🔹 Storage Index
In-memory metadata structure on each storage cell
Maintains min/max values of columns to eliminate unnecessary I/O
Stored only in memory (lost on reboot, rebuilt as queries run)
Storage region = 1 MB chunk of data
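The pruning logic can be sketched as a toy Python model (illustrative only; region size and layout are simplified): each region keeps per-column min/max values, and a scan skips every region whose min/max range cannot contain the predicate value, eliminating that I/O entirely.

```python
# Toy model of a storage index (illustrative only).
REGION_ROWS = 4  # stand-in for "rows per 1 MB storage region"

def build_storage_index(rows, column):
    regions = [rows[i:i + REGION_ROWS] for i in range(0, len(rows), REGION_ROWS)]
    # per region: (min, max, data) for the indexed column
    return [(min(r[column] for r in reg), max(r[column] for r in reg), reg)
            for reg in regions]

def scan_equal(index, column, value):
    hits, regions_read = [], 0
    for lo, hi, region in index:
        if lo <= value <= hi:      # region may contain the value: read it
            regions_read += 1
            hits += [r for r in region if r[column] == value]
        # otherwise the whole region is skipped: I/O eliminated
    return hits, regions_read

rows = [{"id": i} for i in range(16)]   # well-clustered data prunes best
idx = build_storage_index(rows, "id")
hits, regions_read = scan_equal(idx, "id", 5)
print(len(hits), "row(s) found; regions read:", regions_read, "of", len(idx))
```

Note how clustering matters: because the data is sorted, only 1 of 4 regions is read; randomly ordered data would widen each region's min/max range and defeat the pruning.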
🔹 Exadata Hybrid Columnar Compression (EHCC)
Compression modes: QUERY LOW, QUERY HIGH, ARCHIVE LOW, ARCHIVE HIGH
Examples:
CREATE TABLE example COMPRESS FOR QUERY HIGH AS SELECT * FROM example2;
ALTER TABLE example COMPRESS FOR QUERY HIGH;
ALTER TABLE example MOVE NOCOMPRESS;
Organizes data into Compression Units (CUs)
Ideal for Data Warehousing / Archival
Not recommended for OLTP workloads
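The CU layout can be illustrated with a toy Python sketch (illustrative only; real CUs are sized in bytes and use Oracle's own compression codecs, not zlib): rows are grouped into units, and inside each unit values are stored column by column, so a column's repetitive values sit adjacent to each other and compress far better than interleaved row storage.

```python
# Toy illustration of EHCC's compression-unit layout (not Oracle's codec).
import zlib

CU_ROWS = 500  # rows per compression unit (real CUs are sized by bytes)

def build_cus(rows):
    cus = []
    for start in range(0, len(rows), CU_ROWS):
        cu = rows[start:start + CU_ROWS]
        # column-major layout inside the CU; each column compressed on its own
        cols = [",".join(str(r[i]) for r in cu).encode()
                for i in range(len(cu[0]))]
        cus.append([zlib.compress(c) for c in cols])
    return cus

rows = [("2024-01-01", "COMPLETED", i) for i in range(1000)]
cus = build_cus(rows)
print("compression units:", len(cus))
print("status column compressed size:", len(cus[0][1]), "bytes")
```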
📊 Exadata Calibrate
Acts like a storage speed test
Command: DBMS_RESOURCE_MANAGER.CALIBRATE_IO
Measures raw disk/flash performance (IOPS, throughput)
⭐ ILOM (Integrated Lights Out Manager)
A dedicated hardware management controller (a tiny computer on the motherboard) that:
Runs even when the OS is down or the server is powered off (but plugged in).
Lets you manage the hardware remotely (power, console, sensors, firmware, etc.).
Works independently of Linux/Oracle Linux and the database.
Each database server and each storage cell has its own ILOM management interface, usually on a separate management network.
To get all the ILOM IPs, run the commands below:
dcli -g dbs_group -l root "ipmitool lan print | grep -i 'IP Address'"
dcli -g cells_group -l root "ipmitool lan print | grep -i 'IP Address'"
To log in, open the ILOM IP in a browser and enter the username/password, or connect over SSH:
https://<ilom-ip>/
ssh root@<ilom-ip>
Example: if your rack has:
3 database servers
6 storage cells
You have 9 independent ILOM interfaces, each with its own IP, credentials, logs, and sensors.
cellinit.ora & cellip.ora
To enable a database server to communicate with Exadata storage cells, two configuration files play a critical role: cellinit.ora and cellip.ora. Both files are created automatically during the Exadata deployment process and are typically located in: /etc/oracle/cell/network-config
These files work together to establish identity and connectivity between the database servers and the storage cells:
cellinit.ora → Defines the cell’s identity and initialization parameters
cellip.ora → Defines how the database servers communicate with the cells
✅ cellip.ora
The cellip.ora file tells the DB/ASM cluster where the Exadata storage cells are located and how to reach them over the storage network.
Key Characteristics
Must be identical across all database nodes in the cluster.
Contains the InfiniBand or RDMA storage network IP addresses of all storage cells.
Required for ASM to discover grid disks and for Exadata features like Smart Scan.
When Adding a New Cell
Append the new cell’s storage network IP to this file.
Always take a backup before modifying it.
Example
cat cellip.ora
192.168.10.11
192.168.10.12
192.168.10.13
192.168.10.17 # new cell
✅ cellinit.ora
The cellinit.ora file serves as an initialization parameter file for Exadata cell services on the database server side.
Key Characteristics
Defines network identity parameters used by cell services.
Each database server node has its own cellinit.ora.
Default location may vary slightly depending on Exadata version.
Example
cat cellinit.ora
ipaddress1=192.168.41.111/21
ipaddress2=192.168.41.112/21
PMEM & XRMEM
PMEM (Persistent Memory)
PMEM was Oracle’s first major leap into memory‑class storage acceleration. Built on Intel Optane Persistent Memory, it was available only on Exadata X8M and X9M storage servers.
✅ What PMEM Provided
Ultra‑low‑latency caching for hot data
Accelerated commit operations via PMEM Log
Direct RDMA reads from the PMEM Cache, bypassing traditional I/O paths
✅ PMEM Components
PMEM Cache – a high‑speed caching tier above flash
PMEM Log – a low‑latency commit accelerator
Before PMEM existed, Exadata relied on Smart Flash Log for commit acceleration; PMEM dramatically reduced latency beyond what flash could achieve.
XRMEM (Exadata RDMA Memory)
With the introduction of Exadata System Software 23.1 and the X10M platform, Oracle unveiled XRMEM, a next‑generation memory acceleration layer.
✅ Why XRMEM Was Introduced
- Intel discontinued Optane PMEM hardware
- Exadata X10M moved to AMD CPUs, which do not support PMEM
- Oracle needed a hardware‑independent way to preserve PMEM‑level performance
XRMEM delivers the same architectural benefits as PMEM—but without requiring persistent memory hardware.
✅ XRMEM Components
XRMEM Cache – replaces PMEM Cache
XRMEM Log – replaces PMEM Log
In short: XRMEM is the evolution of PMEM: same benefits, broader hardware compatibility.
Summary: PMEM vs XRMEM
| Feature | PMEM (X8M/X9M) | XRMEM (23.1 / X10M+) |
|---|---|---|
| Hardware | Intel Optane PMEM DIMMs | No special hardware required |
| RDMA Reads | ✅ Yes | ✅ Yes |
| Cache Layer | PMEM Cache | XRMEM Cache |
| Commit Accelerator | PMEM Log | XRMEM Log |
| Availability | Only X8M/X9M | X10M and future systems |
Learn More
https://www.oracle.com/database/technologies/exadata/software/pmemaccelerators/