supabase Unauthorized API Access: Detected unauthorized access attempts to the API, indicating potential security threats.
supabase Replication Lag: Significant delay in database replication, which may affect data consistency.
supabase Pod Eviction: Pods are being evicted due to resource constraints or node failures.
supabase Service Dependency Failure: A dependent service is failing, affecting the functionality of the primary service.
supabase Service Latency Spike: Sudden increase in service latency, potentially affecting user experience.
supabase Database Deadlock: Detected deadlocks in the database, which may affect transaction processing.
supabase High I/O Wait: Excessive I/O wait times, indicating potential disk or network bottlenecks.
supabase Node Memory Pressure: A node is under memory pressure, affecting pod scheduling and performance.
supabase High Swap Usage: Excessive swap usage, which may degrade system performance.
supabase Service Restart Loop: A service is continuously restarting, indicating potential configuration or resource issues.
supabase API Rate Limit Exceeded: API requests have exceeded the allowed rate limit, potentially affecting service availability.
supabase Configuration Drift: Detected changes in system configuration that deviate from the desired state.
supabase Node Disk Pressure: A node is experiencing disk pressure, which may affect pod scheduling and performance.
supabase Job Failure: Scheduled jobs or tasks have failed to execute successfully.
supabase Service Unavailable: A service is temporarily unavailable, possibly due to overload or misconfiguration.
supabase High Load Average: The system load average is higher than expected, indicating potential resource saturation.
supabase High Network Traffic: Unusually high network traffic, which may indicate a DDoS attack or misconfigured services.
supabase Backup Failure: Scheduled backups have failed, risking data loss in case of system failures.
supabase High Latency: Increased response times for requests, which may impact user experience.
supabase Unauthorized Access Attempts: Multiple failed login attempts detected, indicating potential security threats.
supabase Pod CrashLoopBackOff: A pod is repeatedly crashing and restarting, indicating issues with the application or configuration.
supabase Certificate Expiry: SSL/TLS certificates are nearing expiration, risking secure communication failures.
supabase Node Not Ready: A node in the cluster is not ready, potentially due to resource constraints or failures.
supabase Service Down: A critical service is not responding, possibly due to crashes or network issues.
supabase High Memory Usage: The memory consumption has surpassed the set limit, which may lead to performance degradation.
supabase Disk Space Low: The available disk space is below the acceptable threshold, risking data write failures.
supabase High CPU Usage: The CPU usage has exceeded the defined threshold, indicating potential over-utilization of server resources.
supabase Database Connection Errors: Frequent connection errors to the database, possibly due to network issues or misconfigurations.
supabase High Error Rate: An increased rate of errors in the application, indicating potential bugs or misconfigurations.
supabase Slow Query Response: Queries are taking longer than expected to execute, affecting application performance.
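Supabase runs on PostgreSQL, so several of the database-level conditions above (for example Slow Query Response or Database Deadlock) can be probed directly against the underlying database. The sketch below is a minimal illustration, not an official Supabase check: it assumes direct Postgres access via psycopg2, and the connection parameters and the 30-second threshold are placeholders.

```python
import psycopg2

# Connection parameters are placeholders; point them at the Supabase Postgres instance.
conn = psycopg2.connect(
    host="localhost", dbname="postgres", user="postgres", password="secret"
)

# Queries that have been active longer than the (placeholder) threshold.
SLOW_QUERY_SQL = """
SELECT pid, now() - query_start AS duration, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '30 seconds'
ORDER BY duration DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(SLOW_QUERY_SQL)
    for pid, duration, state, query in cur.fetchall():
        # Each row is a query that has exceeded the slow-query threshold.
        print(f"pid={pid} duration={duration} state={state} query={query}")
```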
OpenSearch Index Recovery Failure: An index recovery operation has failed, potentially due to resource constraints or configuration issues.
OpenSearch Cluster Node Joined: A new node has joined the cluster, potentially affecting cluster balance.
OpenSearch Index Read-Only Mode: An index has been set to read-only mode due to disk space issues.
OpenSearch Node Heap Dump Generated: A heap dump has been generated, indicating potential memory issues.
OpenSearch Cluster Node Left: A node has unexpectedly left the cluster.
OpenSearch Node Network Latency High: Network latency between nodes is higher than expected, impacting cluster performance.
OpenSearch Cluster Node Disk Full: A node's disk is full, preventing further data operations.
OpenSearch Cluster State Update Failure: The cluster is unable to update its state due to resource constraints or configuration issues.
OpenSearch Node Disk I/O High: Disk I/O operations on a node are consistently high, impacting performance.
OpenSearch Indexing Throughput Low: The rate of indexing operations is lower than expected.
OpenSearch Search Throughput Low: The rate of search operations is lower than expected.
OpenSearch Node JVM Heap Pressure High: The JVM heap pressure on a node is consistently high, indicating potential memory issues.
OpenSearch Cluster Node Count Low: The number of nodes in the cluster is below the expected count.
OpenSearch Snapshot Failure: A snapshot operation has failed, potentially due to storage issues or configuration errors.
OpenSearch Node Disk Watermark Exceeded: Disk usage on a node has exceeded the high watermark threshold.
OpenSearch Index Shard Size Large: One or more index shards have grown larger than the recommended size.
OpenSearch Snapshot Duration High: Snapshot operations are taking longer than expected to complete.
OpenSearch Cluster Rebalance Failure: The cluster is unable to rebalance shards due to resource constraints or configuration issues.
OpenSearch Pending Tasks High: There is a high number of pending tasks in the cluster, indicating potential bottlenecks.
OpenSearch Cluster Shard Allocation Failure: The cluster is unable to allocate shards due to resource constraints or configuration issues.
OpenSearch Search Latency High: Search queries are taking longer than expected to complete.
OpenSearch Indexing Latency High: Indexing operations are taking longer than expected.
OpenSearch Frequent Garbage Collection: Frequent garbage collection events are occurring, impacting performance.
OpenSearch High JVM Heap Usage: The JVM heap usage is consistently high, leading to potential garbage collection issues.
OpenSearch Node Disk Usage High: The disk usage on one or more OpenSearch nodes is above the threshold.
OpenSearch Node Not Reachable: An OpenSearch node is not reachable or has been removed from the cluster.
OpenSearch Cluster Status Red: One or more primary shards are unassigned in the OpenSearch cluster.
OpenSearch Cluster Status Yellow: One or more replica shards are unassigned in the OpenSearch cluster.
OpenSearch High Memory Usage: The memory usage on the OpenSearch nodes is consistently above the threshold.
OpenSearch High CPU Usage: The CPU usage on the OpenSearch nodes is consistently above the threshold.
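Several of the cluster-level conditions above map directly onto fields returned by the OpenSearch cluster health API. The sketch below assumes an unauthenticated cluster at a placeholder URL (a real deployment would add authentication and TLS handling); it flags a non-green status together with the unassigned-shard and pending-task counts behind the Cluster Status and Pending Tasks entries.

```python
import requests

# Endpoint is a placeholder; auth and TLS verification are omitted for brevity.
BASE_URL = "http://localhost:9200"

health = requests.get(f"{BASE_URL}/_cluster/health", timeout=10).json()

# "status" is green, yellow, or red: yellow means unassigned replica shards,
# red means unassigned primary shards (the two Cluster Status alerts above).
if health["status"] != "green":
    print(f"cluster status: {health['status']}")
    print(f"unassigned shards: {health['unassigned_shards']}")
    print(f"pending tasks: {health['number_of_pending_tasks']}")
    print(f"nodes in cluster: {health['number_of_nodes']}")
```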
ClickHouse ClickHouseHighZooKeeperEphemeralNodeCount: The number of ephemeral nodes in ZooKeeper is too high, which can affect stability.
ClickHouse ClickHouseHighZooKeeperWatchCount: The number of watches in ZooKeeper is too high, potentially affecting performance.
ClickHouse ClickHouseHighZooKeeperNodeCount: The number of nodes in ZooKeeper is too high, which can affect performance.
ClickHouse ClickHouseHighZooKeeperSessionCount: The number of ZooKeeper sessions is too high, potentially overloading the ZooKeeper cluster.
ClickHouse ClickHouseHighZooKeeperRequestErrors: A high number of errors are occurring in requests to ZooKeeper, disrupting coordination.
ClickHouse ClickHouseHighZooKeeperRequestLatency: Requests to ZooKeeper are experiencing high latency, affecting distributed operations.
ClickHouse ClickHouseHighBackgroundTaskQueueSize: The background task queue is too large, potentially delaying important maintenance tasks.
ClickHouse ClickHouseHighMutationQueueSize: The mutation queue size is too large, which can delay data updates.
ClickHouse ClickHouseHighCompactionQueueSize: The compaction queue size is too large, indicating delays in data compaction.
ClickHouse ClickHouseHighPartCountInPartition: A partition has too many parts, which can degrade query performance.
ClickHouse ClickHouseHighReplicaQueueSize: The size of the replication queue is too large, which can delay data synchronization.
ClickHouse ClickHouseHighNetworkErrors: A high number of network errors are occurring, which can disrupt data operations.
ClickHouse ClickHouseHighDiskIOWait: Disk I/O wait times are high, indicating potential bottlenecks in disk operations.
ClickHouse ClickHouseInsertFailureRateHigh: A high rate of insert failures is occurring, which can affect data ingestion.
ClickHouse ClickHouseHighNetworkLatency: Network latency is high, affecting communication between ClickHouse nodes or clients.
ClickHouse ClickHouseQueryFailureRateHigh: A high rate of query failures is occurring, indicating potential issues with queries or server stability.
ClickHouse ClickHouseHighReplicaLag: The lag between replicas and the primary server is too high, risking data consistency.
ClickHouse ClickHouseBackgroundMergesFailing: Background merge operations are failing, which can lead to performance issues.
ClickHouse ClickHouseMergeTreePartCountHigh: The number of parts in a MergeTree table is too high, which can degrade performance.
ClickHouse ClickHouseTableNotReplicated: A table that should be replicated is not being replicated correctly.
ClickHouse ClickHouseZooKeeperSessionExpired: The session with ZooKeeper has expired, potentially disrupting distributed operations.
ClickHouse ClickHouseHighWriteLatency: Write operations are experiencing high latency, which can delay data ingestion.
ClickHouse ClickHouseHighReadLatency: Read operations are experiencing high latency, affecting query performance.
ClickHouse ClickHouseReplicaDown: One or more replicas are not reachable, which can affect data redundancy and availability.
ClickHouse ClickHouseZooKeeperConnectionLoss: The ClickHouse server has lost connection to ZooKeeper, affecting distributed coordination.
ClickHouse ClickHouseHighMemoryUsage: The ClickHouse server is using an unusually high amount of memory, which could lead to performance degradation or crashes.
ClickHouse ClickHouseHighCPUUsage: The CPU usage on the ClickHouse server is consistently high, indicating potential performance issues.
ClickHouse ClickHouseTooManyConnections: The number of connections to the ClickHouse server has exceeded the configured limit.
ClickHouse ClickHouseQueryTimeout: Queries are taking too long to execute and are timing out.
ClickHouse ClickHouseDiskSpaceLow: The disk space on the ClickHouse server is running low, which could prevent new data from being written.
ClickHouse ClickHouseReplicaLag: One or more replicas are lagging behind the primary server, which can lead to stale reads.
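Most of the ClickHouse conditions above can be read out of the server's system tables. As one example, the sketch below assumes the default HTTP interface on port 8123 with no authentication and counts active parts per partition from system.parts, which is the signal behind the ClickHouseHighPartCountInPartition and ClickHouseMergeTreePartCountHigh entries; the host and the LIMIT are placeholders, not recommended settings.

```python
import requests

# Host is a placeholder; 8123 is the default ClickHouse HTTP port.
CLICKHOUSE_URL = "http://localhost:8123/"

# Partitions with the most active parts, largest first.
QUERY = """
SELECT database, table, partition, count() AS active_parts
FROM system.parts
WHERE active
GROUP BY database, table, partition
ORDER BY active_parts DESC
LIMIT 10
FORMAT TSVWithNames
"""

resp = requests.get(CLICKHOUSE_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()
print(resp.text)  # tab-separated rows: database, table, partition, active_parts
```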
Cassandra CassandraClusterWideLatencyHigh: High latency observed across the entire cluster, indicating potential systemic issues.
Cassandra CassandraRepairFailures: Failures occurred during repair operations, potentially affecting data consistency.
Cassandra CassandraNodeLoadImbalance: Uneven data distribution across nodes, leading to load imbalance.
Cassandra CassandraTableCompactionHigh: Compaction tasks for a specific table are taking longer than expected.
Cassandra CassandraHintsDeliveryLatencyHigh: Hint delivery is taking longer than expected, indicating potential network or node issues.
Cassandra CassandraBatchLogReplay: Batch log replay is occurring, indicating potential issues with batch operations.
Cassandra CassandraCQLRequestsHigh: A high number of CQL requests are being processed, potentially overloading the node.
Cassandra CassandraThriftRequestsHigh: A high number of Thrift requests are being processed, potentially overloading the node.
Cassandra CassandraReadRepairFailures: Failures occurred during read repair operations.
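A quick way to approximate the compaction-related condition above (CassandraTableCompactionHigh) is to read pending compaction work from nodetool. The sketch below assumes nodetool is on PATH and can reach the local node; the exact compactionstats output format varies between Cassandra versions, so the parsing here is illustrative only.

```python
import subprocess

# Assumes nodetool is installed locally and can reach the node over JMX.
result = subprocess.run(
    ["nodetool", "compactionstats"],
    capture_output=True,
    text=True,
    check=True,
)

# compactionstats typically reports a "pending tasks: <n>" line; a persistently
# high number suggests compaction is falling behind (format varies by version).
for line in result.stdout.splitlines():
    if line.lower().startswith("pending tasks"):
        pending = int(line.split(":")[1].split()[0])
        print(f"pending compaction tasks: {pending}")
        break
```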