Thursday, April 2, 2026

Confluent CLI + JQ JSON Parsing for Governance and Resource Monitoring

This post lists useful Confluent CLI commands combined with JQ (JSON parsing tool) to analyze and monitor Confluent Cloud resources such as API keys and Kafka topic partitions.

These commands help organizations manage quotas, track resource usage, and maintain governance in Confluent Cloud environments.


Confluent Cloud API Key Limits

Confluent Cloud limits the number of API keys that can be created per organization.
The official limits are documented here:
https://docs.confluent.io/cloud/current/quotas/service-quotas.html#core-resource-scopes


Monitoring API key usage is important because excessive API keys can:

  • increase security risks
  • complicate governance
  • reach organizational limits
  • create unnecessary operational overhead

List Confluent API Keys and Count by Resource Type

This command lists Confluent API keys and counts them by resource type.

confluent api-key list --output json \
| jq -r '.[] | .resource_type' \
| sort | uniq -c | sort -rn

What this does

  • Lists all API keys

  • Extracts resource type

  • Groups and counts usage

  • Sorts by highest usage

Example Output

    120 kafka-cluster
     40 environment
     25 schema-registry
     10 flink


This helps identify which resource types consume the most API keys.
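
To sanity-check the counting pipeline without touching a live organization, you can run the same jq expression against a small hand-written sample. The resource_type field mirrors the real `confluent api-key list -o json` output; the keys and values below are made up.

```shell
# Hand-written sample mirroring `confluent api-key list -o json` output.
# The key IDs and resource types are invented for illustration.
cat <<'EOF' > /tmp/api_keys_sample.json
[
  {"key": "AAA111", "resource_type": "kafka-cluster"},
  {"key": "BBB222", "resource_type": "kafka-cluster"},
  {"key": "CCC333", "resource_type": "schema-registry"}
]
EOF

# Same grouping logic as above, applied to the sample file.
jq -r '.[] | .resource_type' /tmp/api_keys_sample.json \
| sort | uniq -c | sort -rn
# prints "2 kafka-cluster" above "1 schema-registry" (uniq -c pads counts)
```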

Confluent Kafka Partition Limits

Confluent Cloud limits the number of partitions that can be allocated based on CKU (Confluent Kafka Unit).

In Dedicated clusters, each CKU provides approximately:

4500 partitions

You can check the official limits here:

https://docs.confluent.io/cloud/current/clusters/cluster-types.html#ecku-cku-comparison


Identify Partition Usage by Application Namespace

If your organization uses a single Kafka cluster with multiple application teams, you may want to identify which team or namespace is using the most partitions.

This can be achieved using Confluent CLI + JQ.


Assumption

This script assumes:

  • Topic naming follows a standard convention

  • Each application uses a unique namespace

  • Naming is aligned with Java package structure

Example

com.ibm.mq.*
com.ibm.db2.*
org.apache.flink.*
com.xyz.orders.*

This allows grouping topics by namespace prefix.
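
As a quick check of the prefix rule, the jq expression used in the next section can be run on a single topic name (the name here is hypothetical):

```shell
# Extract the first three dot-separated segments of a topic name.
echo '"com.ibm.mq.orders.created"' | jq -r 'split(".") | .[0:3] | join(".")'
# → com.ibm.mq
```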

Get Kafka Topics in JSON Format

confluent kafka topic list \
--cluster <YOUR_CLUSTER_ID> \
--environment <YOUR_ENV_ID> \
-o json

This command returns all Kafka topics in JSON format.

Parse Partition Usage by Namespace

confluent kafka topic list \
--cluster <YOUR_CLUSTER_ID> \
--environment <YOUR_ENV_ID> \
-o json \
| jq -r '
  map({
    prefix: (.name | split(".") | .[0:3] | join(".")),
    partitions: .partition_count
  })
  | group_by(.prefix)
  | map({
      namespace: .[0].prefix,
      totalPartitions: (map(.partitions) | add)
    })
  | sort_by(.totalPartitions)
  | reverse
  | .[:20]
  | .[]
  | "\(.namespace) \(.totalPartitions)"
' \
| tr -d '\r' \
| awk '{printf("%-30s %10s\n", $1, $2)}'

What This Script Does

Step-by-step

  • Retrieves Kafka topics in JSON format
  • Extracts namespace prefix from topic name
  • Groups topics by namespace
  • Sums partition counts
  • Sorts by highest usage
  • Displays top 20 namespaces
  • Aligns output for readability

Example Output

com.ibm.mq                           8200
com.ibm.db2                          6000
org.apache.flink                     5200
com.xyz.orders                       4100

This helps identify:

  • high partition consumers

  • over-utilized namespaces

  • teams consuming most cluster capacity

  • partition allocation distribution
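
If you want to verify the grouping logic before pointing it at a real cluster, the same jq program can be run on a hand-written sample. The name and partition_count fields mirror `confluent kafka topic list -o json`; the topics and counts below are invented.

```shell
# Hand-written sample mirroring `confluent kafka topic list -o json`.
cat <<'EOF' > /tmp/topics_sample.json
[
  {"name": "com.ibm.mq.orders.created", "partition_count": 6},
  {"name": "com.ibm.mq.orders.updated", "partition_count": 6},
  {"name": "com.ibm.db2.cdc.customers", "partition_count": 8}
]
EOF

# Same grouping as the pipeline above: prefix -> summed partitions, descending.
jq -r '
  map({prefix: (.name | split(".") | .[0:3] | join(".")),
       partitions: .partition_count})
  | group_by(.prefix)
  | map({namespace: .[0].prefix, totalPartitions: (map(.partitions) | add)})
  | sort_by(.totalPartitions) | reverse
  | .[] | "\(.namespace) \(.totalPartitions)"
' /tmp/topics_sample.json
# com.ibm.mq 12
# com.ibm.db2 8
```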

Optional Component

Reverse and Top 20

| reverse
| .[:20]

This limits output to top 20 namespaces.

Optional if you want full list.


Why This is Useful

This approach helps organizations:

  • monitor Kafka partition usage

  • enforce governance

  • prevent CKU exhaustion

  • identify heavy users

  • plan cluster scaling

  • allocate partitions per team

  • optimize resource consumption

Especially useful in shared Confluent Cloud clusters.

Azure CLI Commands for Role Assignment Analysis Using JSON and JQ

This post provides useful Azure CLI commands combined with JSON output and JQ to analyze role assignments and gather statistics, especially for Azure Event Hub (Kafka services) environments.

Azure Role Assignment Limit

Microsoft has a hard limit on the number of role assignments per subscription, which is currently set to 4000.

If roles are incorrectly assigned or if your company requires fine-grained access control on Event Hub topics and resources, you may run out of available role assignments within a subscription.

The following Azure CLI + JQ command helps you count role assignments by filtering only Azure Event Hub (Kafka-related) role assignments.

Command

az role assignment list --all --subscription <YOUR_SUBSCRIPTION> \
--query "[?contains(scope, 'Microsoft.EventHub/namespaces') && contains(scope, 'eventhubs/')]" \
-o json | jq -r '.[] | .roleDefinitionName' | sort | uniq -c | sort -rn

What this does

  • Lists all role assignments in the subscription

  • Filters Azure Event Hub namespace and eventhub scopes

  • Extracts role definition names

  • Counts role usage

  • Sorts roles by highest usage

This helps identify which roles consume the most assignments.
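
The role counting can be rehearsed offline on a hand-written sample. The roleDefinitionName field mirrors the real `az role assignment list` output; the entries below are made up, using the names of two built-in Event Hubs roles as sample data.

```shell
# Hand-written sample mirroring `az role assignment list -o json` output.
cat <<'EOF' > /tmp/role_assignments_sample.json
[
  {"roleDefinitionName": "Azure Event Hubs Data Receiver"},
  {"roleDefinitionName": "Azure Event Hubs Data Receiver"},
  {"roleDefinitionName": "Azure Event Hubs Data Sender"}
]
EOF

# Same counting logic as above, applied to the sample file.
jq -r '.[] | .roleDefinitionName' /tmp/role_assignments_sample.json \
| sort | uniq -c | sort -rn
# prints "2 Azure Event Hubs Data Receiver" above "1 Azure Event Hubs Data Sender"
```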


Get Azure Event Hub (Service Bus) Endpoints

This command retrieves Azure Event Hub namespace endpoints.


az eventhubs namespace list --subscription <YOUR_SUBSCRIPTION> -o json \
| jq -r '.[].serviceBusEndpoint'


Azure Role Assignments Grouped by Provider

This command groups role assignments by Azure resource provider.

az role assignment list --all \
| jq -r '.[] | .id | split("/providers/")[1] | split("/")[0]' \
| sort | uniq -c | sort -rn


What this shows

  • Microsoft.EventHub

  • Microsoft.Storage

  • Microsoft.Compute

  • Microsoft.Network

This helps identify which Azure services consume the most role assignments.


Azure Role Assignments Based on Event Hub Naming Standards

Most organizations use naming standards such as:

com.xyz.abc.topic1
com.xyz.abc.topic2

If you need to list role assignments grouped by Event Hub naming pattern, you can use JQ and Unix commands.

Command

az role assignment list --all --subscription <YOUR_SUBSCRIPTION> --output json \
| jq -r '.[]
    | select(.id | contains("Microsoft.EventHub") and contains("/eventhubs/"))
    | .id | split("/eventhubs/")[1] | split("/")[0]' \
| cut -d. -f1-3 \
| sort | uniq -c | sort -rn

What this does

  • Filters Microsoft Event Hub role assignments

  • Extracts the Event Hub (topic) name from the assignment id

  • Groups names by their first three dot-separated segments

  • Counts occurrences per prefix

This helps:

  • Identify role assignment usage per domain

  • Detect over-provisioned topics

  • Optimize RBAC assignments
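
One way to sanity-check the prefix extraction is to run it on a single hand-written assignment id (the subscription, namespace, topic, and role assignment GUID below are all made up):

```shell
# A made-up Event Hub role assignment id, shaped like the real ones.
ID="/subscriptions/0000/resourceGroups/rg1/providers/Microsoft.EventHub/namespaces/ns1/eventhubs/com.xyz.abc.topic1/providers/Microsoft.Authorization/roleAssignments/1234"

# Pull out the Event Hub name, then keep the first three dot-separated fields.
echo "$ID" | jq -R -r 'split("/eventhubs/")[1] | split("/")[0]' | cut -d. -f1-3
# → com.xyz.abc
```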


Azure Role Assignment Change Log

This command retrieves role assignment change logs within a given date range.

az role assignment list-changelogs \
--start-time 2025-12-31T01:01:00Z \
--end-time 2026-03-31T00:00:00Z \
| jq -r '.[].action' | sort | uniq -c


Example Output

    120 Create
     95 Delete
     30 Update

This helps track:

  • RBAC changes

  • Audit activity

  • Role assignment growth

  • Governance monitoring


Login Using Azure Service Principal

Use a Service Principal to authenticate Azure CLI for automation or CI/CD pipelines.


az login \
--service-principal \
--username <APPLICATION_ID> \
--password <APPLICATION_SECRET> \
--tenant <TENANT_ID>

Useful for:

  • Automation scripts

  • CI/CD pipelines

  • Scheduled RBAC audits

  • Infrastructure monitoring


List Azure Subscriptions

This command lists subscription names and IDs in a compact tab-separated format.

az account list --query "[].{name:name, id:id}" -o tsv

Output

Production xxxxx-xxxx-xxxx
Development xxxxx-xxxx-xxxx
QA xxxxx-xxxx-xxxx

Useful for:

  • Multi-subscription environments

  • Governance checks

  • Automation scripting


These Azure CLI and JQ commands help organizations monitor role assignments, track RBAC usage, and avoid hitting Azure subscription limits.

They are particularly useful in environments using Azure Event Hub and Kafka, where topic-level access control can quickly consume role assignment limits.

Using these commands regularly helps maintain governance, reduce RBAC sprawl, and ensure efficient Azure resource management.