ClusterMetadata

class kafka.cluster.ClusterMetadata(**configs)

A class to manage kafka cluster metadata.

This class does not perform any IO. It simply updates internal state given API responses (MetadataResponse, GroupCoordinatorResponse).

Keyword Arguments:
retry_backoff_ms (int): Milliseconds to backoff when retrying on
    errors. Default: 100.
metadata_max_age_ms (int): The period of time in milliseconds after
    which we force a refresh of metadata even if we haven't seen any
    partition leadership changes, in order to proactively discover any
    new brokers or partitions. Default: 300000.
add_group_coordinator(group, response)

Update with metadata for a group coordinator.

Arguments:
    group (str): name of group from GroupCoordinatorRequest
    response (GroupCoordinatorResponse): broker response

Returns:
    bool: True if metadata is updated, False on error

add_listener(listener)

Add a callback function to be called on each metadata update
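
A minimal sketch of this callback pattern. MiniMetadata and its method bodies are hypothetical stand-ins, not the real implementation; only the add_listener/remove_listener names come from the documented API:

```python
class MiniMetadata:
    """Toy stand-in illustrating the listener pattern (hypothetical)."""

    def __init__(self):
        self._listeners = set()

    def add_listener(self, listener):
        # Register a callback invoked with the cluster object on each update
        self._listeners.add(listener)

    def remove_listener(self, listener):
        # Unregister a previously added callback
        self._listeners.remove(listener)

    def _notify_listeners(self):
        # Called internally after a successful metadata update
        for listener in self._listeners:
            listener(self)

seen = []

def on_update(cluster):
    seen.append(cluster)

meta = MiniMetadata()
meta.add_listener(on_update)
meta._notify_listeners()        # on_update fires once
meta.remove_listener(on_update)
meta._notify_listeners()        # no listeners remain; nothing fires
print(len(seen))                # 1
```

Registering the same callable twice is a no-op here because listeners are kept in a set.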

available_partitions_for_topic(topic)

Return set of partitions with known leaders.

Arguments:
    topic (str): topic to check for partitions

Returns:
    set: {partition (int), …}

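
A sketch of the filtering this implies, assuming (as elsewhere in this class) that a leader of -1 marks a partition as currently leaderless; the data and helper name are hypothetical:

```python
# Hypothetical leader map: partition number -> leader node_id.
# A leader of -1 means the broker reported the partition as leaderless.
partition_leaders = {0: 5, 1: -1, 2: 7}

def available_partitions(leaders):
    # Keep only partitions whose leader is currently known
    return {p for p, leader in leaders.items() if leader != -1}

print(available_partitions(partition_leaders))  # {0, 2}
```

Compare partitions_for_topic below, which returns every partition regardless of leader availability.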
broker_metadata(broker_id)

Get BrokerMetadata.

Arguments:
    broker_id (int): node_id for a broker to check

Returns:
    BrokerMetadata or None if not found

brokers()

Get all BrokerMetadata.

Returns:
    set: {BrokerMetadata, …}

coordinator_for_group(group)

Return node_id of group coordinator.

Arguments:
    group (str): name of consumer group

Returns:
    int: node_id for group coordinator

failed_update(exception)

Update cluster state given a failed MetadataRequest.

leader_for_partition(partition)

Return node_id of the partition leader: -1 if unavailable, None if unknown.
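
The three possible return values are worth distinguishing in calling code. A hypothetical helper (the function and its labels are illustrative, not part of the API):

```python
def describe_leader(node_id):
    # None: the partition (or its topic) is missing from local metadata
    if node_id is None:
        return "unknown"
    # -1: the partition exists but currently has no leader (e.g. mid-election)
    if node_id == -1:
        return "unavailable"
    # Otherwise node_id identifies the leader broker
    return "node %d" % node_id

print(describe_leader(None))  # unknown
print(describe_leader(-1))    # unavailable
print(describe_leader(5))     # node 5
```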

partitions_for_broker(broker_id)

Return TopicPartitions for which the broker is a leader.

Arguments:
    broker_id (int): node id for a broker

Returns:
    set: {TopicPartition, …}

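
Conceptually this inverts the partition-to-leader mapping for one broker. A sketch with hypothetical data (the real TopicPartition lives in kafka.structs; a namedtuple stands in here):

```python
from collections import namedtuple

# Stand-in for kafka.structs.TopicPartition
TopicPartition = namedtuple("TopicPartition", ["topic", "partition"])

# Hypothetical leadership map: TopicPartition -> leader node_id
leaders = {
    TopicPartition("events", 0): 5,
    TopicPartition("events", 1): 7,
    TopicPartition("logs", 0): 5,
}

def partitions_led_by(broker_id):
    # Collect every partition whose leader is the given broker
    return {tp for tp, leader in leaders.items() if leader == broker_id}

print(partitions_led_by(5))
```

An unknown broker_id simply yields an empty set rather than an error.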
partitions_for_topic(topic)

Return set of all partitions for topic (whether available or not).

Arguments:
    topic (str): topic to check for partitions

Returns:
    set: {partition (int), …}

refresh_backoff()

Return milliseconds to wait before attempting to retry after failure

remove_listener(listener)

Remove a previously added listener callback

request_update()

Flag metadata for update and return a Future.

Actual update must be handled separately. This method only changes the reported ttl().

Returns:
    kafka.future.Future (value will be the cluster object after update)

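
A sketch of this flag-and-future pattern: request_update() only marks the metadata as stale and hands back a future, and whichever component later feeds in a MetadataResponse resolves it. MiniFuture and MiniMetadata are simplified stand-ins, not the real kafka.future.Future or update flow:

```python
class MiniFuture:
    # Simplified stand-in for kafka.future.Future
    def __init__(self):
        self.is_done = False
        self.value = None

    def success(self, value):
        self.is_done = True
        self.value = value

class MiniMetadata:
    def __init__(self):
        self._need_update = False
        self._future = None

    def request_update(self):
        # Only flags state; the network layer performs the request later
        self._need_update = True
        if self._future is None or self._future.is_done:
            self._future = MiniFuture()
        return self._future

    def update_metadata(self, response):
        # Invoked once a metadata response actually arrives; resolves
        # the pending future with the cluster object
        self._need_update = False
        pending, self._future = self._future, None
        if pending is not None:
            pending.success(self)

meta = MiniMetadata()
f1 = meta.request_update()
f2 = meta.request_update()          # the same pending future is shared
meta.update_metadata({"brokers": []})
print(f1 is f2, f1.is_done)         # True True
```

Sharing one pending future means concurrent callers all observe the same refresh rather than triggering duplicates.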
topics(exclude_internal_topics=True)

Get set of known topics.

Arguments:
    exclude_internal_topics (bool): Whether records from internal topics
        (such as offsets) should be exposed to the consumer. If set to
        True, the only way to receive records from an internal topic is
        to subscribe to it. Default: True.

Returns:
    set: {topic (str), …}

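
A sketch of the filtering this flag controls, with hypothetical data (in practice the broker marks which topics are internal, e.g. the consumer-offsets topic; the sets below are illustrative):

```python
# Hypothetical state: all known topics, and the subset flagged internal
known_topics = {"events", "logs", "__consumer_offsets"}
internal_topics = {"__consumer_offsets"}

def topics(exclude_internal_topics=True):
    # Drop internal topics unless the caller explicitly asks for them
    if exclude_internal_topics:
        return known_topics - internal_topics
    return set(known_topics)

print(sorted(topics()))       # ['events', 'logs']
print(sorted(topics(False)))  # ['__consumer_offsets', 'events', 'logs']
```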
ttl()

Milliseconds until metadata should be refreshed
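
A simplified sketch of how such a ttl can be derived from the last refresh time and metadata_max_age_ms, assuming (per request_update above) that a pending update request zeroes it; the class and fields are hypothetical stand-ins:

```python
import time

class MiniMetadata:
    def __init__(self, metadata_max_age_ms=300000):
        self._metadata_max_age_ms = metadata_max_age_ms
        # Pretend the metadata was just refreshed successfully
        self._last_refresh_ms = time.time() * 1000
        self._need_update = False

    def ttl(self):
        # Zero as soon as a refresh is requested or metadata has aged out
        if self._need_update:
            return 0
        expire_at = self._last_refresh_ms + self._metadata_max_age_ms
        return max(0, expire_at - time.time() * 1000)

    def request_update(self):
        self._need_update = True

meta = MiniMetadata()
print(meta.ttl() > 0)   # True: freshly "refreshed" metadata has time left
meta.request_update()
print(meta.ttl())       # 0
```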

update_metadata(metadata)

Update cluster state given a MetadataResponse.

Arguments:
    metadata (MetadataResponse): broker response to a metadata request

Returns:
    None

with_partitions(partitions_to_add)

Returns a copy of cluster metadata with partitions added