The S/4HANA 2023 Management Cockpit is a web-based dashboard for monitoring and operating the SAP S/4HANA IDES demonstration environment hosted on sapidess4.fivetran-internal-sales.com.
The cockpit is served at:

https://sapidesecc8.fivetran-internal-sales.com/sap_skills/docs/SAP_S4HANA_2023.html

From this single page you can check component status, start and stop SAP and HANA, trigger Backint backups, monitor disk space, and run SQL queries.
The Server Details card displays static and dynamic information about the SAP server:
| Field | Description |
|---|---|
| Hostname | Server hostname and FQDN. Includes an SSH button to open a terminal session. |
| SAP System | SAP product version (S/4HANA 2023 IDES) |
| SID / Instance Nr | System ID S4H and instance number 03 |
| Client | SAP client 100 (used for all operations) |
| Database | HANA version, retrieved live from the server |
| Tenant | Active HANA database tenant name |
| OS | Operating system version |
| IP Addresses | Private (VPC) and public IP addresses |
| CPUs | Number of virtual CPUs, retrieved live from the server via nproc |
| Memory | Total RAM in GB, retrieved live from the server via free -g |
Click the Refresh button at the top right of the section to re-query the HANA version, OS, and tenant name from the live server.
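The hardware fields are gathered with standard Linux tools. A minimal sketch of the equivalent commands (the cockpit backend wraps them over SSH to sapidess4, which is not shown here):

```shell
# Sketch: how the CPU and memory fields can be gathered on the server.
cpus=$(nproc)                                   # virtual CPU count
mem_gb=$(free -g | awk '/^Mem:/ {print $2}')    # total RAM in GB
echo "CPUs=${cpus} MemoryGB=${mem_gb}"
```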
The Status section contains three cards that show the real-time state of the system components. Click Check Status to query all three at once.
| Card | What it monitors | Instance |
|---|---|---|
| SAP Application Server | SAP NetWeaver ABAP stack (ASCS + dialog instance) | SID S4H, instance 03, user s4hadm |
| SAP HANA Database | Primary HANA instance for S/4HANA data | SID FIV, instance 00, user fivadm |
| HANA Cockpit Database | Separate HANA instance for DBA Cockpit & Web IDE | SID PIT, instance 96, user pitadm |
Each card displays a colored dot and status text:
| Indicator | Meaning |
|---|---|
| Green dot + Active | The service is running and responding to queries |
| Red dot + Inactive | The service is down or unreachable |
| Grey dot + Unknown | Status has not been checked yet (page just loaded, waiting for response) |
For HANA cards, the status also shows the tenant name and the last successful backup date (e.g., "April 09, 2026").
HANA backups use Google Cloud's SAP Agent Backint — not disk-based snapshots. The Backint agent streams data directly from HANA to a Google Cloud Storage bucket.
| Component | Path / Value |
|---|---|
| Backint binary | /usr/bin/google_cloud_sap_agent backint |
| Wrapper script | /hana/shared/FIV/global/hdb/opt/backint/backint-gcs/backint |
| GCS Bucket | gs://sap-hana-backint/ (GCP project: internal-sales) |
| Bucket Console | https://console.cloud.google.com/storage/browser/sap-hana-backint |
The FIV tenant holds all S/4HANA application data. Backups are scheduled automatically and can also be triggered manually.
| Setting | Value |
|---|---|
| Backint parameters file | /hana/shared/FIV/global/hdb/opt/backint/backint-gcs/parameters.json |
| Bucket | sap-hana-backint |
| Compression | true (algorithm: lz4) |
| Service account key | /usr/sap/FIV/home/internal-sales-4b50698e74ec.json |
| Log to cloud | true |
| Parallel backint channels | 4 |
| Backint logs | /hana/shared/FIV/HDB00/sapidess4/trace/backint.log |
| Data backup destination | gs://sap-hana-backint/FIV/ |
FIV data backups are scheduled by the HANA Cockpit and run:

BACKUP DATA USING BACKINT

The PIT instance hosts the HANA Cockpit and Web IDE. Its backups are triggered manually.
| Setting | Value |
|---|---|
| Backint parameters file | /hana/shared/PIT/global/hdb/opt/backint/backint-gcs/parameters.json |
| Bucket | sap-hana-backint |
| Compression | true |
| Service account key | /usr/sap/PIT/home/internal-sales-4b50698e74ec.json |
| Data backup destination | gs://sap-hana-backint/PIT/ |
Backups are triggered manually via the cockpit page Backup Now button, which runs BACKUP DATA USING BACKINT on SYSTEMDB port 39613.
The following settings in global.ini control the Backint integration for the FIV tenant:
| Parameter | Value |
|---|---|
| catalog_backup_using_backint | true |
| data_backup_parameter_file | /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json |
| log_backup_using_backint | true |
| log_backup_interval_mode | immediate |
| parallel_data_backup_backint_channels | 4 |
| data_backup_compression_algorithm | lz4 |
The gs://sap-hana-backint/ bucket contains backups for all SAP systems:
| Path | Contents |
|---|---|
| gs://sap-hana-backint/FIV/ | FIV tenant data + log backups |
| gs://sap-hana-backint/PIT/ | PIT / SYSTEMDB backups |
| gs://sap-hana-backint/SAP_ON_ORACLE_BACKUP/ | ECC Oracle brbackup copies |
| gs://sap-hana-backint/sapidesecc8_webserver/ | Web portal backups |
Additional directories in the bucket store kernel packages, host agent binaries, and archive files.
Both HANA cards (FIV and PIT) include backup controls:
| Button | Action |
|---|---|
| Backup Now | Triggers an immediate BACKUP DATA USING BACKINT to Google Cloud Storage |
| Refresh | Checks whether a backup is currently running and updates the timer |
While a backup is in progress, the card shows an elapsed-time counter (e.g., Backup running: 02:34). Backups are written to the sap-hana-backint bucket. View backups in the Links section under "Backint Backup Destination".
SAP HANA on this system uses Google Cloud Agent for SAP (google_cloud_sap_agent backint) to stream data, log, and catalog backups directly to the GCS bucket gs://sap-hana-backint/. Backups are compressed by the agent (compress: true; HANA applies lz4 data compression). Log and catalog backups run automatically every minute; full data backups are scheduled weekly (Saturday 09:33 America/Los_Angeles).
hdbsql → BACKUP DATA USING BACKINT ('...')
→ HANA calls hdbbackint (symlink)
→ backint wrapper (bash: google_cloud_sap_agent backint "$@")
→ reads parameters.json (bucket, SA key, compress)
→ uploads to gs://sap-hana-backint/<TENANT>/...
| File | Path |
|---|---|
| global.ini (custom) | /usr/sap/FIV/SYS/global/hdb/custom/config/global.ini |
| parameters.json | /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json |
| backint wrapper | /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/backint |
| hdbbackint symlink | /usr/sap/FIV/SYS/global/hdb/opt/hdbbackint → wrapper |
| SA key | /usr/sap/FIV/home/internal-sales-4b50698e74ec.json |
| Local basepath | /SUMHANA/backup (NFS from saphvrhub) |
| Agent binary | /usr/bin/google_cloud_sap_agent |
| Agent logs | /var/log/google-cloud-sap-agent/backint.log |
| Agent config | /etc/google-cloud-sap-agent/configuration.json |
| systemd service | google-cloud-sap-agent.service |
Bucket: gs://sap-hana-backint/ (shared across all HANA systems, tenant-prefixed)
| Prefix | Tenant | Notes |
|---|---|---|
| FIV/ | FIV (port 30015, SAPHANADB) | Primary S/4HANA backups |
| PIT/ | PIT (port 39613, SYSTEM) | HANA Cockpit DBA tenant |
{
"bucket": "sap-hana-backint",
"compress": true,
"service_account_key": "/usr/sap/FIV/home/internal-sales-4b50698e74ec.json",
"log_to_cloud": true
}
[backup]
data_backup_parameter_file = /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json
parallel_data_backup_backint_channels = 4
catalog_backup_parameter_file = /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json
log_backup_parameter_file = /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json
log_backup_using_backint = true
catalog_backup_using_backint = true
# Full data backup of FIV tenant to GCS
su - fivadm -c 'hdbsql -i 00 -d FIV -u SYSTEM -p <pwd> \
"BACKUP DATA USING BACKINT ('\''FULL_DATA_FIV_$(date +%Y%m%d_%H%M%S)'\'') \
COMMENT '\''Manual full backup'\''"'
# Backup SYSTEMDB
su - fivadm -c 'hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <pwd> \
"BACKUP DATA FOR SYSTEMDB USING BACKINT ('\''SYSTEMDB_$(date +%Y%m%d_%H%M%S)'\'') \
COMMENT '\''SYSTEMDB backup'\''"'
# Check last 5 backups
su - fivadm -c 'hdbsql -i 00 -u SYSTEM -p <pwd> \
"SELECT TOP 5 BACKUP_ID, ENTRY_TYPE_NAME, STATE_NAME, \
DESTINATION_TYPE_NAME, SYS_END_TIME, COMMENT \
FROM M_BACKUP_CATALOG ORDER BY SYS_END_TIME DESC"'
# Stop tenant before recovery
su - fivadm -c 'hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <pwd> \
"ALTER SYSTEM STOP DATABASE FIV"'
# Restore from backint by BACKUP_ID (preferred)
su - fivadm -c 'hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <pwd> \
"RECOVER DATA FOR FIV USING BACKUP_ID <backup_id> CLEAR LOG"'
# Restore from backint by prefix
su - fivadm -c 'hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p <pwd> \
"RECOVER DATA FOR FIV USING BACKINT ('\''FULL_DATA_FIV_20260416'\'') CLEAR LOG"'
# Verify tenant is back
su - fivadm -c 'hdbsql -i 00 -u SYSTEM -p <pwd> \
"SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES"'
Weekly full data backup every Saturday 09:33 America/Los_Angeles (server timezone). Automatic log and catalog backups every minute, streaming to GCS via backint.
The Google Cloud Agent for SAP includes helper commands that replace manual symlink/wrapper/parameters.json work. Use these for new system setup and troubleshooting.
installbackint — installs wrapper, hdbbackint symlink, and starter parameters.json to /usr/sap/<SID>/SYS/global/hdb/opt/backint/backint-gcs/:
/usr/bin/google_cloud_sap_agent installbackint
# Or specify SID (lowercase):
/usr/bin/google_cloud_sap_agent installbackint -sid=fiv
configurebackint — safely edit parameters.json:
/usr/bin/google_cloud_sap_agent configurebackint \
  -f="/usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json" \
  -bucket="sap-hana-backint"
status -f=backint — validates IAM + config (v3.7+). Run first when troubleshooting:
sudo /usr/bin/google_cloud_sap_agent status \
  -b="/usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json" \
  -f="backint"
backint -f=diagnose — end-to-end self-test. Requires 18 GB free disk:
sudo /usr/bin/google_cloud_sap_agent backint \
  -u=FIV \
  -p="/usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/parameters.json" \
  -f=diagnose
Our current config uses 4 parameters. The agent supports many more for performance tuning, encryption, retention, and system-copy workflows.
| Parameter | Default | Notes |
|---|---|---|
| bucket | (required) | Target GCS bucket |
| recovery_bucket | — | Separate bucket for restore (v3.1+) |
| folder_prefix | — | Prefix inside bucket |
| recovery_folder_prefix | — | Prefix for restore (v3.1+) |
| shorten_folder_path | false | Shortens object paths (v3.3+) |
| service_account_key | — | Required only off-Compute-Engine |
| Parameter | Default | Notes |
|---|---|---|
| parallel_streams | 1 | 1-32. Prefer HANA's parallel_data_backup_backint_channels for data |
| parallel_recovery_streams | 1 | 1-32 (v3.7+). Not with compressed backups |
| xml_multipart_upload | false | Required if parallel_streams>1 (v3.2+) |
| buffer_size_mb | 100 | Up to 250. Memory = buffer × streams |
| rate_limit_mb | unlimited | Outbound bandwidth cap |
| threads | nproc | Worker threads |
| retries | 5 | GCS retry count |
| retry_backoff_initial | 10s | Initial backoff |
| retry_backoff_max | 300s | Max backoff |
| retry_backoff_multiplier | 2.0 | Must be >1.0 |
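The backoff parameters combine as a capped geometric sequence: with the defaults (10s initial, multiplier 2.0, 300s cap, 5 retries) the waits are 10, 20, 40, 80, and 160 seconds. A quick illustration:

```shell
# Print the retry wait schedule implied by the default backoff settings.
awk 'BEGIN {
  delay = 10                        # retry_backoff_initial
  for (i = 1; i <= 5; i++) {        # retries
    printf "retry %d: wait %ds\n", i, delay
    delay *= 2.0                    # retry_backoff_multiplier
    if (delay > 300) delay = 300    # retry_backoff_max
  }
}'
```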
| Parameter | Default | Notes |
|---|---|---|
| compress | false | Google recommends against. We use true for storage cost |
| log_to_cloud | true | Route logs to Cloud Logging |
| log_level | INFO | DEBUG/INFO/WARNING/ERROR |
| log_delay_sec | 60 | Progress log interval |
| send_metrics_to_monitoring | true | Cloud Monitoring metrics (v3.3+) |
| Parameter | Default | Notes |
|---|---|---|
| storage_class | bucket default | STANDARD/NEARLINE/COLDLINE/ARCHIVE (v3.2+) |
| metadata | {X-Backup-Type:...} | Custom KV map (v3.3+) |
| custom_time | — | RFC 3339 or UTCNow+Nd (v3.4+) |
| client_endpoint | storage.googleapis.com | Rarely modified |
| Parameter | Default | Notes |
|---|---|---|
| encryption_key | — | Path to AES-256 CSEK. Exclusive with kms_key, parallel_streams |
| kms_key | — | projects/P/locations/L/keyRings/R/cryptoKeys/K. Exclusive with encryption_key, parallel_streams |
| Parameter | Default | Notes |
|---|---|---|
| object_retention_time | — | RFC 3339 or UTCNow+Nd |
| object_retention_mode | — | Locked/Unlocked |
| Parameter | Default | Notes |
|---|---|---|
| file_read_timeout_ms | 60000 | Open file timeout |
| diagnose_file_max_size_gb | 16 | Max file size for diagnose (v3.3+) |
| diagnose_tmp_directory | /tmp/backint-diagnose | Temp dir (v3.3+) |
parallel_streams cannot be combined with client-side encryption (encryption_key, kms_key). parallel_recovery_streams is incompatible with compress=true backups. recovery_bucket is incompatible with the CHECK ACCESS USING BACKINT SQL clause. Violations cause the agent to exit with status 1.
HANA can use separate parameters.json files for data/log/catalog backups, each tuned differently. Our current config uses one shared file — works fine for this scale, but consider splitting for production tuning.
| Backup type | File | Typical tuning |
|---|---|---|
| Data | parameters_data.json | HANA's parallel_data_backup_backint_channels=8-16 in global.ini |
| Log | parameters_log.json | parallel_streams=8, xml_multipart_upload=true, smaller buffer_size_mb |
| Catalog | parameters_catalog.json | Same as log |
Configure in global.ini:
[backup]
data_backup_parameter_file = /usr/sap/FIV/.../parameters_data.json
log_backup_parameter_file = /usr/sap/FIV/.../parameters_log.json
catalog_backup_parameter_file = /usr/sap/FIV/.../parameters_catalog.json
Full backint expert reference skill (Markdown) — covers architecture, parameters, CLI commands, tuning, and troubleshooting for both S/4HANA and BW/4HANA systems.
Tuning recommendations:

- compress is false by default because compression costs CPU and throughput; we use compress=true deliberately for lower GCS cost.
- Prefer parallel_data_backup_backint_channels=8 (or 16) in global.ini [backup] over parallel_streams in parameters.json for data backups. FIV is currently set to 4.
- Tune log backups via parameters_log.json, not the shared file.
- Add an AbortIncompleteMultipartUpload lifecycle rule (7 days) on the bucket.

| Symptom | Fix |
|---|---|
| Backup fails with "opening hdbbackint" | Recreate hdbbackint symlink to wrapper |
| Agent "failed to authenticate" | Check SA key file permissions (600, fivadm:sapsys) |
| "bucket not found" | parameters.json must have exactly sap-hana-backint |
| Log backups not uploading | Set log_backup_using_backint = true in global.ini |
| Permission denied on backup dir | chown -R fivadm:sapsys /usr/sap/FIV/HDB00/backup |
| Agent not running | systemctl start google-cloud-sap-agent |
| Unsure if config / IAM is correct | Run google_cloud_sap_agent status -b=/path/parameters.json -f=backint (v3.7+) |
| Want to test GCS upload/download before HANA backup | Run google_cloud_sap_agent backint -u=FIV -p=/path -f=diagnose (needs 18 GB free) |
| Agent exits with status 1 immediately | Check parameters.json for incompatible options (parallel + encryption/retention, etc.) |
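For the first symptom, a quick sanity check of the symlink chain (paths taken from the file layout table; adjust the SID as needed):

```shell
# Verify the hdbbackint symlink resolves to the backint wrapper.
link=/usr/sap/FIV/SYS/global/hdb/opt/hdbbackint
if [ -L "$link" ]; then
  echo "hdbbackint -> $(readlink -f "$link")"
else
  echo "symlink missing; recreate with:"
  echo "  ln -s /usr/sap/FIV/SYS/global/hdb/opt/backint/backint-gcs/backint $link"
fi
```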
Each status card has a green Start button:
| Button | What it does |
|---|---|
| Start SAP | Runs startsap R3 as s4hadm — starts the ABAP application server |
| Start DB (FIV) | Runs HDB start as fivadm — starts the primary HANA instance |
| Start DB (PIT) | Runs HDB start as pitadm — starts the Cockpit HANA instance |
A confirmation dialog appears before each start. If the service is already running, you will see an informational message instead of starting it again.
Each status card has a red Stop button. Stopping is a two-step authorization process:
| Button | What it does | Pre-check |
|---|---|---|
| Stop SAP | Runs stopsap R3 as s4hadm | If SAP is already down, returns info message |
| Stop DB (FIV) | Runs HDB stop as fivadm | If HANA is already down, returns info message |
| Stop DB (PIT) | Runs HDB stop as pitadm | If HANA is already down, returns info message |
The Disk Space card appears next to Server Details and shows all mounted filesystems with visual progress bars.
| Bar Color | Usage Level | Action |
|---|---|---|
| Green | Below 70% | Normal — no action needed |
| Yellow | 70% – 85% | Monitor — plan cleanup soon |
| Red | Above 85% | Critical — free space immediately |
Each filesystem row shows: mount point, usage percentage bar, total size, and available space. Click Refresh to update the data.
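The color thresholds reduce to a simple comparison. A sketch of the logic, assumed from the table above (the cockpit's actual implementation is client-side and may differ):

```shell
# Map a usage percentage to the bar color per the documented thresholds.
color_for() {
  if   [ "$1" -lt 70 ]; then echo green
  elif [ "$1" -le 85 ]; then echo yellow
  else                       echo red
  fi
}
color_for 45    # green
color_for 80    # yellow
color_for 92    # red
```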
| Mount | Contains | Why it matters |
|---|---|---|
| /hana/data | HANA database data files | If full, HANA stops accepting writes and may crash |
| /hana/log | HANA transaction logs | If full, all database transactions halt |
| /sap_backup | Local backup staging area | Backups fail if insufficient space |
| /usr/sap | SAP binaries and work files | SAP cannot start if this is full |
The Links section provides quick access to external tools and documentation:
| Link | Description |
|---|---|
| SSO S/4HANA | SAP GUI Web access via Okta SAML single sign-on |
| HANA Cockpit | SAP HANA Database Administration Cockpit (port 51024). Manage databases, monitor performance, view alerts. |
| Web IDE | SAP HANA Web-based Development Workbench (port 8000). Credentials are auto-loaded from the encrypted vault — displayed next to the link. |
| VM Settings | Google Cloud Compute Engine instance details for the SAP server |
| DNS Registration | Google Cloud DNS zone for fivetran-internal-sales.com |
| Backint Backup Destination | GCS bucket where HANA backups are stored |
| Technical Details | Slab documentation for the S/4HANA demo environment |
The lower section of the Links card contains links to all SAP business process workflow documentation:
| Document | Process Flow |
|---|---|
| Order to Cash (OTC) | Sales Order → Delivery → Goods Issue → Billing → Payment |
| Procure to Pay (P2P) | Purchase Order → Goods Receipt → Invoice Verification → Payment |
| Plan to Produce (PP) | BOM → Routing → Production Order → Confirmation |
| MRP | PIR → MRP Run → Planned Orders → Purchase Requisitions |
| Housekeeping | Period Close, Number Ranges, Material Ledger, System Cleanup |
| CDS View Extraction | 8-Phase Pipeline: Dependencies → Metadata → SQL → BFS Chain |
The cockpit follows a strict no-hardcoded-passwords policy. All credentials are stored in the encrypted vault on the server and retrieved at runtime.
| Credential | How it's used | Vault access |
|---|---|---|
| HANA FIV (SAPHANADB) | Status checks, backup triggers, version queries | Auto-read (no password needed) |
| HANA PIT (SYSTEM) | Cockpit status checks, backup triggers | Auto-read (no password needed) |
| Web IDE credentials | Displayed next to Web IDE link on page load | Auto-read (no password needed) |
| SAP OS users (s4hadm, fivadm, pitadm) | Start/stop commands via su - | Auto-read (no password needed) |
| Master password | Required for all stop operations | Validates against vault encryption key |
This server is configured to send email via smtp2go relay using Postfix.
| Property | Value |
|---|---|
| MTA | Postfix (SUSE, lmdb maps) |
| Relay | mail.smtp2go.com:2525 |
| From Address | sapidess4@fivetran-internal-sales.com |
| Auth | SASL (credentials in vault key smtp2go) |
| TLS | Enabled (opportunistic) |
| Config | /etc/postfix/main.cf, /etc/postfix/sasl_passwd |
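A hypothetical excerpt of /etc/postfix/main.cf matching the table above (the live file may contain more settings; shown only to illustrate how the relay, SASL, and opportunistic TLS pieces fit together):

```
relayhost = [mail.smtp2go.com]:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = lmdb:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may
```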
Send a test email:
echo "Test" | mailx -s "Test from sapidess4" -r "sapidess4@fivetran-internal-sales.com" recipient@email.com
| Problem | Cause | Solution |
|---|---|---|
| All status cards show "Error" | Web server API cannot reach sapidess4 | Check if sapidess4 VM is running in GCP. The web server (sapidesecc8) must be able to reach it. |
| SAP shows Inactive but HANA is Active | SAP application server is stopped | Click "Start SAP" to start the ABAP stack. The database must be running first. |
| Backup Now returns "Error" | HANA instance is down, or BACKINT agent not configured | Ensure the HANA instance is Active first. Check BACKINT configuration in /usr/sap/FIV/SYS/global/hdb/opt/ |
| Stop button says "Wrong master password" | Incorrect master password entered | Retry with the correct master password. Contact SAP Specialist team if forgotten. |
| Start/Stop button shows "Already running" or "Already stopped" | The service is already in the requested state | This is informational — no action needed. Click "Check Status" to confirm. |
| Disk Space shows "Error loading" | API endpoint unreachable or server restart needed | Try clicking "Refresh". If persistent, the gcs-explorer service on sapidesecc8 may need a restart. |
| Web IDE credentials not showing | Vault read failed or credentials not stored | Check that hana_ide_user and hana_ide_password exist in the vault. |
| Web IDE is very slow | HANA delta store fragmentation | A weekly delta merge cron runs Sundays at 02:00 on sapidess4. If slowness recurs mid-week, run ssh root@sapidess4 /usr/local/bin/hana_delta_merge.sh |
| Page returns 404 | HTML file missing from server | Re-deploy: scp SAP_S4HANA_2023.html root@sapidesecc8:/usr/sap/sap_skills/docs/ |
The cockpit page is a static HTML file served by the Python HTTPS server on sapidesecc8 (port 443). All dynamic data comes from API endpoints on the same server:
| API Endpoint | Method | Purpose |
|---|---|---|
| /sap_skills/api/system_status | GET | SAP + HANA FIV + PIT status, tenant names, last backup dates |
| /sap_skills/api/hana_version | GET | HANA version, OS version, tenant name |
| /sap_skills/api/backup_running | GET | Check if any backup is currently in progress |
| /sap_skills/api/hana_ide_credentials | GET | Web IDE user/password from vault |
| /sap_skills/api/disk_space | GET | Filesystem usage (df -h) |
| /sap_skills/api/hardware_info | GET | CPU count (nproc) and total memory (free -g) |
| /sap_skills/api/trigger_backup | POST | Start BACKINT backup (target: fiv or pit) |
| /sap_skills/api/hana_control | POST | Start/stop HANA (target + action + master password for stop) |
| /sap_skills/api/sap_control | POST | Start/stop SAP application server (action + master password for stop) |
The backend API runs on sapidesecc8 and connects to sapidess4 via SSH/hdbsql to execute commands. No credentials are hardcoded in the HTML; all sensitive data flows through the server-side vault.
The HANA SQL Console provides an interactive query interface directly from the cockpit. Queries are executed on the SAP HANA database via hdbsql on sapidess4.
| Property | Value |
|---|---|
| Tool | /usr/sap/S4H/hdbclient/hdbsql |
| Execution Host | sapidess4 (via SSH from sapidesecc8) |
| Output Format | CSV (-x flag) |
| Query Timeout | 120 seconds |
| API Endpoint | POST /sap_skills/api/hana_sql_execute |
| Authentication | No master password required — credentials retrieved from server-side vault |
| Database | Port | Available Users | Vault Key |
|---|---|---|---|
| FIV (Tenant) | 30015 | SYSTEM, SAPHANADB | sapidess4_hana → tenant "FIV" |
| SYSTEMDB | 39613 | SYSTEM | sapidess4_hana → tenant "PIT" |
The session's CURRENT_SCHEMA can be set via hdbsql's -Z flag.

Example queries:

-- List all databases
SELECT DATABASE_NAME, ACTIVE_STATUS FROM SYS.M_DATABASES
-- Check HANA version
SELECT VERSION FROM SYS.M_DATABASE
-- Top 10 largest tables in FIV
SELECT TOP 10 SCHEMA_NAME, TABLE_NAME, RECORD_COUNT, TABLE_SIZE
FROM SYS.M_CS_TABLES ORDER BY TABLE_SIZE DESC
-- Active connections
SELECT CONNECTION_ID, USER_NAME, CLIENT_IP, CONNECTION_STATUS
FROM SYS.M_CONNECTIONS WHERE CONNECTION_STATUS = 'RUNNING'
-- Memory usage
SELECT HOST, TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024 AS USED_GB,
ALLOCATION_LIMIT / 1024 / 1024 / 1024 AS LIMIT_GB
FROM SYS.M_HOST_RESOURCE_UTILIZATION
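Results come back as CSV (the -x flag noted above) and are parsed before display. A sketch with a canned result set, since the real output arrives over SSH from sapidess4 (the exact quoting is an assumption about hdbsql's CSV output):

```shell
# Parse a CSV result set of the kind the SQL console receives.
result='"DATABASE_NAME","ACTIVE_STATUS"
"FIV","YES"
"SYSTEMDB","YES"'
echo "$result" | awk -F',' 'NR > 1 { gsub(/"/, ""); print $1, "=>", $2 }'
```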