Thursday, April 2, 2026

Confluent CLI + JQ JSON Parsing for Governance and Resource Monitoring

This post lists useful Confluent CLI commands combined with JQ (JSON parsing tool) to analyze and monitor Confluent Cloud resources such as API keys and Kafka topic partitions.

These commands help organizations manage quotas, track resource usage, and maintain governance in Confluent Cloud environments.


Confluent Cloud API Key Limits

Confluent Cloud limits the number of API keys that can be created per organization.
The official limits are documented here:
https://docs.confluent.io/cloud/current/quotas/service-quotas.html#core-resource-scopes


Monitoring API key usage is important because excessive API keys can:

  • increase security risks
  • complicate governance
  • reach organizational limits
  • create unnecessary operational overhead

List Confluent API Keys and Count by Resource Type

This command lists Confluent API keys and counts them by resource type.

confluent api-key list --output json \
| jq -r '.[] | .resource_type' \
| sort | uniq -c | sort -rn

What this does

  • Lists all API keys

  • Extracts resource type

  • Groups and counts usage

  • Sorts by highest usage

Example Output

120 kafka-cluster
40 environment
25 schema-registry
10 flink


This helps identify which resource types consume the most API keys.
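As a cross-check of the jq pipeline above, the same group-and-count logic can be sketched in Python. The sample JSON below is made up for illustration; the real `confluent api-key list` output has more fields per key.

```python
import json
from collections import Counter

# Hypothetical sample resembling `confluent api-key list --output json`
api_keys_json = """
[
  {"key": "AAA111", "resource_type": "kafka-cluster"},
  {"key": "BBB222", "resource_type": "kafka-cluster"},
  {"key": "CCC333", "resource_type": "schema-registry"}
]
"""

# Count keys per resource type, like `sort | uniq -c | sort -rn`
counts = Counter(k["resource_type"] for k in json.loads(api_keys_json))
for resource_type, n in counts.most_common():
    print(n, resource_type)
```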

Confluent Kafka Partition Limits

Confluent Cloud limits the number of partitions that can be allocated based on CKUs (Confluent Units for Kafka).

In Dedicated clusters, each CKU provides approximately:

4500 partitions

You can check the official limits here:

https://docs.confluent.io/cloud/current/clusters/cluster-types.html#ecku-cku-comparison
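Given the roughly-4,500-partitions-per-CKU figure, cluster sizing is simple ceiling division. A small helper, treating the constant as the documented approximation rather than an exact quota:

```python
PARTITIONS_PER_CKU = 4500  # approximate Dedicated-cluster figure from the docs

def min_ckus_for(partitions: int) -> int:
    # Ceiling division: a cluster with N CKUs supports roughly N * 4500 partitions
    return -(-partitions // PARTITIONS_PER_CKU)

print(min_ckus_for(23500))  # 6
```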


Identify Partition Usage by Application Namespace

If your organization uses a single Kafka cluster with multiple application teams, you may want to identify which team or namespace is using the most partitions.

This can be achieved using Confluent CLI + JQ.


Assumption

This script assumes:

  • Topic naming follows a standard convention

  • Each application uses a unique namespace

  • Naming is aligned with Java package structure

Example

com.ibm.mq.*
com.ibm.db2.*
org.apache.flink.*
com.xyz.orders.*

This allows grouping topics by namespace prefix.

Get Kafka Topics in JSON Format

confluent kafka topic list \
--cluster <YOUR_CLUSTER_ID> \
--environment <YOUR_ENV_ID> \
-o json

This command returns all Kafka topics in JSON format.

Parse Partition Usage by Namespace

confluent kafka topic list \
--cluster <YOUR_CLUSTER_ID> \
--environment <YOUR_ENV_ID> \
-o json \
| jq -r '
map({
prefix: (.name | split(".") | .[0:3] | join(".")),
partitions: .partition_count
})
| group_by(.prefix)
| map({
namespace: .[0].prefix,
totalPartitions: (map(.partitions) | add)
})
| sort_by(.totalPartitions)
| reverse
| .[:20]
| .[]
| "\(.namespace) \(.totalPartitions)"
' \
| tr -d '\r' \
| awk '{printf("%-30s %10s\n", $1, $2)}'

What This Script Does

Step-by-step

  • Retrieves Kafka topics in JSON format
  • Extracts namespace prefix from topic name
  • Groups topics by namespace
  • Sums partition counts
  • Sorts by highest usage
  • Displays top 20 namespaces
  • Aligns output for readability
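The steps above can be sketched in Python over a hypothetical topic list (the field names mirror the jq filter; the data is made up):

```python
import json
from collections import defaultdict

# Hypothetical sample resembling `confluent kafka topic list -o json`
topics_json = """
[
  {"name": "com.ibm.mq.queue1", "partition_count": 12},
  {"name": "com.ibm.mq.queue2", "partition_count": 6},
  {"name": "com.xyz.orders.created", "partition_count": 24}
]
"""

totals = defaultdict(int)
for topic in json.loads(topics_json):
    # First three dot-separated segments, like `split(".") | .[0:3]` in jq
    prefix = ".".join(topic["name"].split(".")[:3])
    totals[prefix] += topic["partition_count"]

# Sort by total partitions, highest first, and keep the top 20
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:20]
for namespace, partitions in top:
    print(f"{namespace:<30} {partitions:>10}")
```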

Example Output

com.ibm.mq 8200
com.ibm.db2 6000
org.apache.flink 5200
com.xyz.orders 4100

This helps identify:

  • high partition consumers

  • over-utilized namespaces

  • teams consuming most cluster capacity

  • partition allocation distribution

Optional Component

Reverse and Top 20

| reverse
| .[:20]

This limits output to top 20 namespaces.

Remove these two lines if you want the full list.


Why This is Useful

This approach helps organizations:

  • monitor Kafka partition usage

  • enforce governance

  • prevent CKU exhaustion

  • identify heavy users

  • plan cluster scaling

  • allocate partitions per team

  • optimize resource consumption

Especially useful in shared Confluent Cloud clusters.

Azure CLI Commands for Role Assignment Analysis Using JSON and JQ

This post provides useful Azure CLI commands combined with JSON output and JQ to analyze role assignments and gather statistics, especially for Azure Event Hub (Kafka services) environments.

Azure Role Assignment Limit

Microsoft has a hard limit on the number of role assignments per subscription, which is currently set to 4000.

If roles are incorrectly assigned or if your company requires fine-grained access control on Event Hub topics and resources, you may run out of available role assignments within a subscription.

The following Azure CLI + JQ command helps you count role assignments by filtering only Azure Event Hub (Kafka-related) role assignments.

Command

az role assignment list --all --subscription <YOUR_SUBSCRIPTION> \
--query "[?contains(scope, 'Microsoft.EventHub/namespaces') && contains(scope, 'eventhubs/')]" \
-o json | jq -r '.[] | .roleDefinitionName' | sort | uniq -c | sort -rn

What this does

  • Lists all role assignments in the subscription

  • Filters Azure Event Hub namespace and eventhub scopes

  • Extracts role definition names

  • Counts role usage

  • Sorts roles by highest usage

This helps identify which roles consume the most assignments.


Get Azure Event Hub (Service Bus) Endpoints

This command retrieves Azure Event Hub namespace endpoints.


az eventhubs namespace list --subscription <YOUR_SUBSCRIPTION> -o json \
| jq -r '.[].serviceBusEndpoint'


Azure Role Assignments Grouped by Provider

This command groups role assignments by Azure resource provider.

az role assignment list --all \
| jq -r '.[] | .id | split("providers")[1] | split("/")[1]' \
| sort | uniq -c | sort -rn


What this shows

  • Microsoft.EventHub

  • Microsoft.Storage

  • Microsoft.Compute

  • Microsoft.Network

This helps identify which Azure services consume the most role assignments.
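The provider extraction mirrors the jq `split` chain above; a small Python sketch with a made-up assignment id:

```python
# A role assignment id embeds the scope; the provider is the segment after
# "providers", mirroring the jq filter. The id below is entirely made up.
assignment_id = (
    "/subscriptions/xxxx/resourceGroups/rg1/providers/"
    "Microsoft.EventHub/namespaces/ns1/providers/"
    "Microsoft.Authorization/roleAssignments/abcd"
)

def provider_of(scope_id: str) -> str:
    # jq: .id | split("providers")[1] | split("/")[1]
    return scope_id.split("providers")[1].split("/")[1]

print(provider_of(assignment_id))  # Microsoft.EventHub
```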


Azure Role Assignments Based on Event Hub Naming Standards

Most organizations use naming standards such as:

com.xyz.abc.topic1
com.xyz.abc.topic2

If you need to list role assignments grouped by Event Hub naming pattern, you can use JQ and Unix commands.

Command

az role assignment list --all --subscription <YOUR_SUBSCRIPTION> --output json \
| jq '.[] | select(.id | contains("Microsoft.EventHub")) | .id | split("eventhubs")[1]' \
| tr -d '"' \
| tr -d '/' \
| cut -d. -f1-3 \
| sort | uniq -c | sort -rn

What this does

  • Filters Microsoft Event Hub role assignments

  • Extracts Event Hub name

  • Removes special characters

  • Groups by naming prefix

  • Counts occurrences

This helps:

  • Identify role assignment usage per domain

  • Detect over-provisioned topics

  • Optimize RBAC assignments
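Assuming that dot-separated naming standard, the prefix grouping can be sketched in Python (the assignment ids below are made up; the slicing mirrors the `split("eventhubs")` + `cut -d. -f1-3` pipeline):

```python
from collections import Counter

# Made-up role assignment ids containing Event Hub entity paths
ids = [
    ".../namespaces/ns1/eventhubs/com.xyz.abc.topic1/providers/...",
    ".../namespaces/ns1/eventhubs/com.xyz.abc.topic2/providers/...",
    ".../namespaces/ns1/eventhubs/com.xyz.def.topic1/providers/...",
]

def prefix_of(assignment_id: str) -> str:
    # Event Hub name after "eventhubs/", then the first three dot segments
    name = assignment_id.split("eventhubs/")[1].split("/")[0]
    return ".".join(name.split(".")[:3])

counts = Counter(prefix_of(i) for i in ids)
for prefix, n in counts.most_common():
    print(n, prefix)
```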


Azure Role Assignment Change Log

This command retrieves role assignment change logs within a given date range.

az role assignment list-changelogs \
--start-time 2025-12-31T00:00:00Z \
--end-time 2026-03-31T01:01:00Z \
| jq -r '.[].action' | sort | uniq -c


Output

120 Create
95 Delete
30 Update

This helps track:

  • RBAC changes

  • Audit activity

  • Role assignment growth

  • Governance monitoring


Login Using Azure Service Principal

Use a Service Principal to authenticate Azure CLI for automation or CI/CD pipelines.


az login \
--service-principal \
--username <APPLICATION_ID> \
--password <APPLICATION_SECRET> \
--tenant <TENANT_ID>

Useful for:

  • Automation scripts

  • CI/CD pipelines

  • Scheduled RBAC audits

  • Infrastructure monitoring


List Azure Subscriptions in Table Format

This command lists subscriptions in a clean table format.

az account list --query "[].{name:name, id:id}" -o tsv

Output

Production xxxxx-xxxx-xxxx
Development xxxxx-xxxx-xxxx
QA xxxxx-xxxx-xxxx

Useful for:

  • Multi-subscription environments

  • Governance checks

  • Automation scripting


These Azure CLI and JQ commands help organizations monitor role assignments, track RBAC usage, and avoid hitting Azure subscription limits.

They are particularly useful in environments using Azure Event Hub and Kafka, where topic-level access control can quickly consume role assignment limits.

Using these commands regularly helps maintain governance, reduce RBAC sprawl, and ensure efficient Azure resource management.



Thursday, May 2, 2024

Base 64 number system


Imagine you have to create unique sequence numbers whose length must be less than 8 digits/characters. In such a case, use a base-64 number system with the symbols 0-9, a-z, A-Z, - and _ (hyphen, underscore): Z represents 61, - represents 62, and _ represents 63.


With n characters you can represent 64^n values:

64^1 = 64
64^2 = 4,096
64^3 = 262,144
...
64^8 = 281,474,976,710,656 (about 281.5 trillion)

This technique is used in shortened URLs and other places where a large number must be represented with fewer digits/characters.
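A minimal sketch of such an encoder, using the alphabet described above (digits first, then lowercase, then uppercase, then - and _):

```python
import string

# 0-9, a-z, A-Z, -, _  -- 64 symbols, so Z is 61, - is 62, _ is 63
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase + "-_"

def to_base64(n: int) -> str:
    # Repeatedly divide by 64 and collect remainders, most significant first
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, r = divmod(n, 64)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

print(to_base64(61))  # Z
print(to_base64(62))  # -
print(to_base64(63))  # _
print(to_base64(64))  # 10
```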

Tuesday, April 9, 2024

Maintain 99.999 up time

We often hear requests to keep services up and running 99.999% of the time. How much of a maintenance window does a 99.999% uptime target allow?

It's a simple calculation: 365 * 24 * 3600 = 31,536,000 seconds in a year (not considering 365.25 days or leap years).

To keep 99.999% uptime, the downtime budget is 31,536,000 * 0.00001 = 315.36 seconds, roughly 5 minutes 15 seconds.


Seconds in an hour: 3,600
Seconds in a day: 86,400
Seconds in a week: 7 * 24 * 3600 = 604,800
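The same budget math as a small helper, for any availability target:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 (ignoring leap years)

def downtime_budget(availability: float) -> float:
    # Allowed downtime per year, in seconds
    return SECONDS_PER_YEAR * (1 - availability)

budget = downtime_budget(0.99999)
print(round(budget, 2))  # ~315.36 seconds, about 5 min 15 s per year
```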


 

Sunday, December 3, 2017

Creating J2CActivationSpec using jython for JDBC Inbound Adapter with event store

To create a J2C activation spec using a Jython script, you can use the AdminTask commands below.


adapterId = AdminConfig.getid('/Cell:myCell/Node:MyNode/J2CResourceAdapter:IBM WebSphere Adapter for JDBC/')

AdminTask.createJ2CActivationSpec(adapterId, '[-messageListenerType com.ibm.j2ca.base.ExtendedInboundListener -name MyAdapterListenerSpec -jndiName as/MyAdapterJndi -description "Adapter to pull events from inbound database table" -authenticationAlias myDatabaseAuthAlias]')
AdminConfig.save()


Once you create the J2C activation spec, you will have to configure the custom properties if you are using table names other than the defaults. To do that, you first have to find the J2EEResourceProperty and set the values.

To update eventTableName (other values such as eventTypeFilter, DatabaseVendor, connectionType, jdbcDriverClass, dataSourceJNDIName and many more can be modified the same way), query for the id first and use it to modify the existing value.


eventTableName = AdminConfig.getid('/J2CActivationSpec:MyAdapterListenerSpec/J2EEResourceProperty:eventTableName/')
AdminConfig.modify(eventTableName, '[[name "eventTableName"] [type "java.lang.String"] [description "eventTableName"] [value "MyAdapterInboundTable"] [required "false"]]')
AdminConfig.save()


Setting up the custom J2EE resource property was challenging; once you know the pattern to use to query and get the id, you can use the same pattern to set up any of the custom properties.



Wednesday, March 2, 2016

XSD string pattern

Today I have come across a generated piece of XSD definition and it's restricted with a specific pattern

(0[4-9])|(1[0-9])|(2[0-6])

It puzzled me for a little time, and found the answer after little bit of testing.   What it means is any value within 04 - 26 is a valid value.

(0[4-9])    Any digit between 4 to 9 prefixed by zero
|  or
(1[0-9])    Any digit between 0 to 9 prefixed by one
| or
(2[0-6])    Any digit between 0 to 6 prefixed by two
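A quick check of the pattern with Python's `re` (`re.fullmatch` mirrors XSD's whole-value matching semantics):

```python
import re

pattern = "(0[4-9])|(1[0-9])|(2[0-6])"  # the XSD facet above

# Enumerate all two-digit strings and keep the ones that match fully
valid = [f"{n:02d}" for n in range(100) if re.fullmatch(pattern, f"{n:02d}")]
print(valid[0], valid[-1], len(valid))  # 04 26 23
```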


IBM Business Process Manager Security Concepts & Guidance ... interesting read

While reading the red book from IBM ...
http://www.redbooks.ibm.com/redbooks/pdfs/sg248027.pdf

On page 49 " the cacerts file which ships with Business Process Manager has the Java-defined default password—“change it”. Ironically, even though the default password is “change it”, few ever do."

So if you are an administrator of a BPM, change it...


Tuesday, June 10, 2014

WebSphere ESB 7.5.1 application already installed exception, and uninstall fails with ADMA5108E



Today I stumbled upon quirky behavior in the WID local test environment. I am using WebSphere ESB 7.5.1.1 and WID 7.5.1.2; I was unable to deploy one mediation module, and it failed with the exception below:

CWSCA3062E: The {0} Service Component Architecture (SCA) module is already installed on the system.  SCA module name must be unique.

When I attempted to uninstall, I got an exception like:

ADMA5108E: Application {0} cannot be uninstalled because it does not exist in the WebSphere Application Server configuration.

As part of troubleshooting I searched all the files but could not find where the SCA module info was stored; finally I got a hit in

/profiles/<profile_name>/config/cells/<cell_name>/cell-core.xml

The file above has an entry for each deployed SCA module.

Stop the server, remove the entry that is causing the trouble, then restart the server and deploy the application. At deploy time I got a few FFDCs about pre-existing SIBus resources. Ideally you should remove the SIBus resources associated with the troubled application and then attempt to deploy the application. The good news, however, is that the server was able to recover from the pre-existing SIBus resources exception; it created a new set and deployed the application.

Thursday, May 8, 2014

WebSphere Integration Developer BOXMLSerializer writewithOptions

Hi;

The default BOXMLSerializer writes XML with spaces and tabs; to remove the tabs/spaces you can use the code below.

BOXMLSerializer bos = (BOXMLSerializer) ServiceManager.INSTANCE
    .locateService("com/ibm/websphere/bo/BOXMLSerializer");

Map options = new HashMap();
options.put(XMLResource.OPTION_FORMATTED, Boolean.FALSE);

bos.writeDataObjectWithOptions(dataObject, targetNamespace, element, options);

Tuesday, March 19, 2013

IID 7.5 install on Windows 7 with domain account

When you install IID (IBM Integration Designer 7.5) on Windows 7 with a domain account, you will encounter an error while creating databases or tables. This is because DB2 Express cannot look up a domain user ID as an authorized ID, even after you add your domain ID to the local DB2ADMNS group.

You have to instruct DB2 to pick up local IDs by running the commands below:


    db2set DB2_GRP_LOOKUP=LOCAL,TOKENLOCAL         
    db2 update dbm cfg using sysadm_group DB2ADMNS 
    db2stop                                        
    db2start   

Please see further details in IBM technote SWG21504375.





Thursday, January 10, 2013

WebSphere Integration Developer upgrade issues missing business integration perspective


After upgrading WID from 7.0.0.x to a later version, you may sometimes find the Business Integration perspective missing. In that case, try the option below:
  1. Open <WID Install Location>\configurations\config.ini
  2. Find org.eclipse.update.reconcile in config.ini and make sure its value is 'true'.
  3. Start WID
You can set org.eclipse.update.reconcile back to 'false' after a successful WID start.

Tuesday, October 30, 2012

SRVE0255E: A WebGroup/Virtual Host to handle /ibm/console has not been defined

While doing some experiments with the console, my server got corrupted and I was unable to log back in to the console. Every time I tried to access it, I got the errors below:

SRVE0255E: A WebGroup/Virtual Host to handle /ibm/console has not been defined.

SRVE0255E: A WebGroup/Virtual Host to handle localhost:9062 has not been defined.

IBM WebSphere Application Server

I was able to resolve this by reinstalling the admin console application. At a command prompt, change directory to the specific profile's bin directory (if the profile name is qcell, cd to qcell/bin), and issue the command below:

wsadmin.sh -lang jython -f deployConsole.py remove

After successful removal of the application, install it with the command below:

wsadmin.sh -lang jython -f deployConsole.py install 

If prompted, give the username and password, then restart the server and try accessing the admin console.




  

Friday, September 28, 2012

WID runtime environment configurations and workspace settings

Once in a while, workspaces in WID fail to pick up the runtime environment, even when everything looks correct. Going into Project Facets and clicking on Runtimes shows the problem of an unresolved WPS runtime environment.

Most of the time creating a new workspace solves the problem: just create a new workspace and import the projects again.

Each workspace stores runtime information in its .metadata folder; the file

\.metadata\.plugins\org.eclipse.core.runtime\.settings\org.eclipse.wst.server.core.prefs

keeps the runtime information.



Monday, February 20, 2012

Get BPEL application name BPEL template details

Once in a while you may want to deploy multiple copies of the same application on a server; in that case you can use serviceDeploy -uniqueCellID to create multiple copies of the application and deploy them on the same server. All of those applications use the same BPEL template, so you may want to identify which application is executing by printing additional details.

The snippet below can be used to print details of the process template and associated details. Pass the template name to the API to get further details.

try {
    javax.naming.Context ctx = new javax.naming.InitialContext();
    Object obj = ctx.lookup("local:ejb/com/ibm/bpe/api/BusinessFlowManagerHome");
    LocalBusinessFlowManagerHome fmh = (LocalBusinessFlowManagerHome)
        javax.rmi.PortableRemoteObject.narrow(obj, com.ibm.bpe.api.LocalBusinessFlowManagerHome.class);
    BusinessFlowManagerService bfm = fmh.create();

    ProcessTemplateData ptd = bfm.getProcessTemplate("testBPEL");

    System.out.println("Process Template ID : " + ptd.getID());
    System.out.println("Application Name : " + ptd.getApplicationName());
    SimpleDateFormat formatter = new SimpleDateFormat("E, y-M-d 'at' hh:mm:ss.sss a zzz");
    System.out.println("Creation time : " + formatter.format(ptd.getCreationTime().getTime()));
    System.out.println("Valid from : " + formatter.format(ptd.getValidFromTime().getTime()));
    System.out.println("State : " + ptd.getState());
} catch (Exception e) {
    e.printStackTrace();
}










Friday, November 11, 2011

Access Twitter from AIX command line



Very interesting article on IBM DeveloperWorks on how to access twitter from AIX command line using Ruby.

Accessing Twitter from the command line

Tuesday, August 23, 2011

WMQ channels monitoring

I came across a situation where I had to monitor for dead channels. Using support pac MO71 and filters, you can develop a simple monitor quickly; using MO71's auto refresh, the filters will scan WMQ channels automatically at specific intervals and write the info to a file.

Below is a sample script to report channels which are open but have not received a message for more than a few hours.

This script opens the file C:\IBM\WMQ\tools\mo71\temp\Channel_Status.log and appends the details.


@ChlStartTime := mqtime(CHSTADA,CHSTATI);
@LstMsgTime := mqtime(LSTMSGDA,LSTMSGTI);
@diffTime := @LstMsgTime - @ChlStartTime;
if (@diffTime > 3600) {
@fd := fopen("C:\\IBM\\WMQ\\tools\\mo71\\temp\\Channel_Status.log","a");
fprintf(@fd, "%-17s\t", date$(_time));
fprintf(@fd, "%-20s\t", _qmname);
fprintf(@fd, "%-20s %5s %5s \t",CHANNAME,STATUS,MCASTAT);
fprintf(@fd, "%-15s %-11s %-10s ",CONNAME,CHSTADA,CHSTATI);
fprintf(@fd, "%-11s %-10s ",LSTMSGDA,LSTMSGTI);
fprintf(@fd, "%-5s %-5s %20s ",CHLTYPE,SUBSTATE,JOBNAME);
fprintf(@fd, "\tChannel Start Time = %s\tLast Message Received Time = %s Difference in time = %s\t\t %s Days opened \n", @ChlStartTime,@LstMsgTime ,@diffTime, (@diffTime/86400) );
fclose(@fd);
} else {
csl(1,info,"All channel connections are good");
}



You can get more details on the WebSphere MQ MO71 support pac from IBM.



Tuesday, June 14, 2011

WebSphere Integration Developer 7.0 upgrade

After installing WebSphere Integration Developer 7.0, when you attempt to update WID using IBM Installation Manager you will see a warning that an unsupported fix pack is installed; you must remove the fix pack before you can update/upgrade.


To remove the fix pack you will have to use IBM Update Installer; a different product to update WID.

Please visit IBM tech note on this

Monday, June 13, 2011

WebSphere MQ for Windows GUI administrator

Recently I had to browse a queue with more than 8000 messages in it and reload messages from a back-out queue to the application queue. With WebSphere MQ Explorer you can browse at most 5000 messages; if you ever need to examine the contents of a message at a position greater than 5000, you are left with no choice among the standard tools shipped with WebSphere MQ.

MO71 will save your day with its built-in queue load/unload utility, a.k.a. MO03.


Please visit the inline links and download support packs from IBM.

Wednesday, February 9, 2011

WebSphere MQ on Windows, MaxChannels setting in registry

WebSphere MQ on Windows uses the registry for queue manager configuration. I was looking for MaxChannels in my queue manager's registry entries and didn't find a corresponding entry. I modified MaxChannels to 10 using WebSphere MQ Explorer, and the registry entry appeared automatically.

C:>reg query HKLM\SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\testQM /s

! REG.EXE VERSION 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\testQM
Name REG_SZ testQM
Prefix REG_SZ C:\IBM\WMQ
Directory REG_SZ testQM

HKEY_LOCAL_MACHINE\SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\testQM\CHANNELS
MaxChannels REG_SZ 10


I guess default values are not stored in the registry. This should be better documented in the IBM Infocenter.

Wednesday, January 12, 2011

WebSphere MQ V7 and Windows 64 bit

On Windows, MQ runs in 32-bit mode, but it supports both 64-bit and 32-bit applications. You need to make sure that you are using the correct bindings to connect with WMQ; for more on this subject, visit the WebSphere MQ infocenter.


To change WebSphere MQ registry entries, look in the key below (see WOW6432Node to learn more about the registry on Windows 2008 / Windows 7 64-bit OS versions):

HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\IBM\MQSERIES\CurrentVersion\Configuration\

Example:

C:\WMQ\bin>reg query HKEY_LOCAL_MACHINE\Software\Wow6432Node\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\TEST\Log
HKEY_LOCAL_MACHINE\Software\Wow6432Node\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\TEST\Log

LogPrimaryFiles REG_SZ 3
LogSecondaryFiles REG_SZ 3
LogFilePages REG_SZ 4096
LogType REG_SZ CIRCULAR
LogBufferPages REG_SZ 0
LogPath REG_SZ C:\WMQ\log\TEST\
LogWriteIntegrity REG_SZ TripleWrite