Databricks · Schema

ClusterDetails


Properties

| Name | Type | Description |
| --- | --- | --- |
| cluster_id | string | The unique identifier of the cluster. |
| cluster_name | string | The human-readable name of the cluster. |
| spark_version | string | The runtime version of the cluster. |
| node_type_id | string | The node type for worker nodes. |
| driver_node_type_id | string | The node type for the Spark driver. |
| num_workers | integer | Number of worker nodes. |
| state | string | The current state of the cluster. |
| state_message | string | A message about the state of the cluster. |
| start_time | integer | The time the cluster was started, in epoch milliseconds. |
| terminated_time | integer | The time the cluster was terminated, in epoch milliseconds. |
| last_state_loss_time | integer | The time the cluster driver last lost its state, in epoch milliseconds. |
| last_activity_time | integer | The time of the last user activity on the cluster, in epoch milliseconds. |
| last_restarted_time | integer | The time the cluster was last restarted, in epoch milliseconds. |
| creator_user_name | string | The email of the user who created the cluster. |
| cluster_source | string | The source that created the cluster (for example, UI, JOB, or API). |
| spark_conf | object | Spark configuration key-value pairs. |
| custom_tags | object | Tags applied to the cluster. |
| spark_env_vars | object | Environment variable key-value pairs set on the cluster's nodes. |
| autotermination_minutes | integer | Auto-termination idle timeout in minutes. |
| enable_elastic_disk | boolean | Whether autoscaling local storage is enabled. |
| instance_pool_id | string | The ID of the instance pool the cluster uses, if any. |
| policy_id | string | The ID of the cluster policy applied to the cluster, if any. |
| data_security_mode | string | The access mode of the cluster. |
| single_user_name | string | The user permitted to run commands when the cluster uses single-user access mode. |
| runtime_engine | string | The runtime engine of the cluster. |
| default_tags | object | Default tags applied by Databricks. |
| cluster_log_status | object | The status of cluster log delivery. |
| termination_reason | object | The reason the cluster was terminated. |
| disk_spec | object | The local disk configuration for the cluster's nodes. |
| executors | array | Information about the cluster's executor nodes. |
| jdbc_port | integer | Port on the driver for JDBC/ODBC connections. |
| spark_context_id | integer | The canonical Spark context identifier. |
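The timestamp fields above (start_time, terminated_time, and so on) are epoch milliseconds, so they need a divide-by-1000 before being handed to most datetime APIs. A minimal sketch of working with a ClusterDetails payload in Python; the sample values are hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical ClusterDetails payload, shaped like the schema above.
cluster = {
    "cluster_id": "0123-456789-abcdefgh",
    "cluster_name": "etl-nightly",
    "spark_version": "13.3.x-scala2.12",
    "num_workers": 4,
    "state": "TERMINATED",
    "start_time": 1700000000000,       # epoch milliseconds
    "terminated_time": 1700003600000,  # epoch milliseconds
    "autotermination_minutes": 60,
}

def to_datetime(epoch_ms: int) -> datetime:
    """Convert an epoch-milliseconds timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)

started = to_datetime(cluster["start_time"])
ended = to_datetime(cluster["terminated_time"])
uptime_minutes = (ended - started).total_seconds() / 60
print(f"{cluster['cluster_name']} ran for {uptime_minutes:.0f} minutes")
```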
View JSON Schema on GitHub
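A record of this shape is returned by the Clusters API `get` endpoint. A standard-library-only sketch of building that request, with a placeholder workspace host and token:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_get_cluster_request(host: str, token: str, cluster_id: str) -> Request:
    """Build a GET request for the Clusters API get endpoint."""
    query = urlencode({"cluster_id": cluster_id})
    url = f"https://{host}/api/2.0/clusters/get?{query}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

# Placeholder host, token, and cluster ID for illustration only.
req = build_get_cluster_request(
    "example.cloud.databricks.com", "<token>", "0123-456789-abcdefgh"
)
# urllib.request.urlopen(req) would return the ClusterDetails JSON document.
```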