How to Run Your Own Kafka Throughput Comparison Test

We recently published a blog post on the results of a test comparing Kafka throughput over an Internet VPN versus PacketFabric Cloud Router private connectivity. This is a companion post that spells out how you can run this test for yourself.

Prerequisites and assumptions

You will need an Azure VNET and AWS VPC to run these tests. We will not be giving step-by-step instructions for creating an Azure VNET or AWS VPC in this document, as these are well documented by both vendors.


You will need 2 VMs on Azure. “Standard F8s_v2 (8 vCPUs, 16 GiB memory)” instances were used for this demo. These are used to create:

  • ETL pipeline VM
  • iperf3 VM


You will need access to a PacketFabric Portal account. Feel free to visit us to learn more.


We will implement the setup used in the AWS “Managed Streaming for Kafka” tutorial.

Implement Kafka tutorial configurations on AWS

Steps from AWS Managed Streaming for Kafka Tutorial:

  • Step 1: Create a VPC for Your MSK Cluster
  • Step 2: Enable High Availability and Fault Tolerance
  • Step 3: Create an Amazon MSK Cluster
  • Step 4: Create a Client Machine
  • Step 5: Create a Topic
  • Step 6: Produce and Consume Data

For reference, the MSK Tutorial creates an “m5.large” instance for the cluster.

Keep the client machine from Step 4; it will be used for VM-to-VM iperf testing.
Note the Kafka version (2.2 for this article).
Note the AWS MSK topic name created. This tutorial uses “AWS_MSK_CR”.
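Steps 5 and 6 of the tutorial come down to a few Kafka CLI commands run from the client machine. A sketch for Kafka 2.2; the ZooKeeper and bootstrap strings are placeholders to be copied from the MSK console’s client information page:

```shell
# Placeholders: copy the real connection strings from the MSK console
ZK="<ZookeeperConnectString>"
BOOTSTRAP="<BootstrapBrokerString>"

# Step 5: create the topic (Kafka 2.2 creates topics via ZooKeeper)
bin/kafka-topics.sh --create --zookeeper "$ZK" \
  --replication-factor 3 --partitions 1 --topic AWS_MSK_CR

# Step 6: produce and consume a few test records
bin/kafka-console-producer.sh --broker-list "$BOOTSTRAP" --topic AWS_MSK_CR
bin/kafka-console-consumer.sh --bootstrap-server "$BOOTSTRAP" \
  --topic AWS_MSK_CR --from-beginning
```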

Install and configure test tools on Azure

  • Make the following changes to the Azure VNET Security Group:
    • Allow ingress SSH.
    • Allow ingress TCP 19000 (pipeline tool).

Figure 5 – Enabling applicable test traffic in the Azure VNET security group.


  • Install iperf3 on a VM in each platform using apt (Debian/Ubuntu) or yum (RHEL/Amazon Linux).
sudo apt -y install iperf3
sudo yum -y install iperf3

Pipeline software

To install the pipeline software:

  • SSH into VM
  • Install Docker and run a bespoke setup of StreamSets Datacollector that:
    • Runs StreamSets Datacollector
    • Has Kafka libraries pre-installed
    • Has pipelines pre-installed
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
docker run -itd -p 19000:18630 alagalah/kafkatest
Figure 6 – Pipeline console.
  • Retrieve Kafka address from AWS MSK.
Figure 7 – IP Address association for Kafka broker.

(To display the client VPC IP address, select the “Preferences” cog directly above Brokers and enable the field using the toggle switch.)

A single broker is used in this example. This broker was chosen because it is on the same subnet as the AWS VM created in the AWS MSK tutorial above.

The private IP address is used to ensure that traffic is not “accidentally” going over the public Internet without using the site-to-site VPN OR alternatively, PacketFabric’s private Network as a Service (NaaS).
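One rough way to check this from the Azure VM is to trace the path to the broker’s private address; an RFC 1918 destination that answers indicates the traffic is staying on the tunnel or private connection rather than the public Internet. BROKER_IP is a placeholder:

```shell
BROKER_IP="<broker-private-ip>"   # placeholder: the MSK broker's private IP

# A private destination that responds implies the traffic is traversing the
# VPN tunnel or the private NaaS connection, not the public Internet
traceroute "$BROKER_IP"
ping -c 4 "$BROKER_IP"
```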

  • Select each pipeline by using the hyperlink which is the pipeline name.
  • Select the Kafka “stage” and enter the broker IP and the Kafka Topic created in the AWS MSK tutorial.
    • In this case, the broker’s private IP and AWS_MSK_CR.
Figure 8 – Configuring the Kafka pipeline.

Configure Case 1: Site-to-site VPN networking

  • In Azure, create a virtual network gateway.
Figure 9 – Creating a VNET virtual gateway in Azure (vnet-vpn-gw).
  • Create virtual network gateway
    • Select a subnet from the existing VNET CIDR block for the gateway subnet.
Figure 10 – Selecting a subnet for the virtual network gateway in Azure.
After creation, note the public IP address, as this will be needed on the AWS side. It can be found in your Resource Group under “vnet-vpn-gw” if using the same names as in this example.
  • Create an AWS Virtual Private Gateway from the VPC page in the AWS Console.
    • Attach the VPN Gateway to a VPC by selecting Actions -> Attach to VPC 
    • Use the same VPC which is in use by the AWS MSK cluster
Figure 11 – Creating a VPN gateway in AWS.
  • Create a site-to-site VPN Connection from the same page.
    • Use the Azure VNET prefix in your static route.
Figure 12 – Creating a VPN Gateway in AWS with a static route pointing at Azure VNET address space.
  • Download configuration file using “Generic” options.
Figure 13 – Downloading the VPN Gateway configuration in AWS.
  • Find the assigned outside IP address inside the configuration file.
Figure 14 – Finding the assigned AWS VPN Gateway outside IP address.
  • Configure Azure Local Network Gateway.
    • IP address is the public address from AWS site-to-site VPN configuration above.
    • Address space is the IP prefix for the AWS VPC, i.e. the static route from Azure to AWS.
Figure 15 – Configuring the Azure local network gateway with the outside IP of the AWS gateway and static route to the AWS VPC IP address space.
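The article configures the local network gateway in the portal, but the equivalent can be sketched with the Azure CLI; the resource group name, outside IP, and VPC prefix below are placeholders:

```shell
# Placeholders: substitute your resource group, the AWS gateway's outside IP
# (from the downloaded configuration file), and the AWS VPC CIDR
az network local-gateway create \
  --resource-group my-rg \
  --name lng-aws \
  --gateway-ip-address <aws-outside-ip> \
  --local-address-prefixes <aws-vpc-cidr>
```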
  • Create Azure connection
    • When prompted, select the local network gateway and virtual network gateway that were just created.
Figure 16 – Create the Azure connection.
  • Validate connection is established in Azure.
Figure 17 – Validating the Azure connection (Connected) in Microsoft Azure.
  • Validate connection and routing in AWS
    • Only one tunnel was used for this experiment. Please follow AWS recommendations in production.
Figure 18 – Validating connection and routing of AWS VPN connection in AWS.
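The tunnel state shown in Figure 18 can also be read from the AWS CLI via the telemetry fields of the VPN connection; a sketch:

```shell
# Each tunnel reports its outside IP and Status (UP/DOWN)
aws ec2 describe-vpn-connections \
  --query 'VpnConnections[].VgwTelemetry[].[OutsideIpAddress,Status]' \
  --output table
```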
  • Check the AWS and Azure routing tables for the respective VPCs. We expect to see the Azure VNET prefix in AWS (learned from the Azure peer) and the AWS VPC prefix in Azure (learned from the AWS peer).
Figure 19 – Checking the routing table in the AWS VPC.
  • Azure VM
Figure 20 – Pinging the AWS VM from the Azure VM.
  • AWS VM
Figure 21 – Pinging the Azure VM from the AWS VM.
  • Note that the static route is not in the AWS VPC routing table.
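The connectivity checks in Figures 20 and 21 are plain pings across the tunnel; the private IPs below are placeholders:

```shell
# From the Azure VM (Figure 20)
ping -c 4 <aws-vm-private-ip>

# From the AWS VM (Figure 21)
ping -c 4 <azure-vm-private-ip>
```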

Configure Case 2: PacketFabric Cloud Router Networking

In this section, we have linked to PacketFabric Cloud Router processes to save space.  The figures shown are the results of each step.

Figure 22 – Creating a PacketFabric Cloud Router.

Process overview:

The basic steps to adding an AWS connection to a PacketFabric Cloud Router are as follows:

1. From the PacketFabric side: Create a cloud connection.

  • After following the steps, the portal shows:
Figure 23 – A successfully created PacketFabric Cloud Router.

2. From the AWS side: Accept the connection.

Figure 24 – Accepted Cloud Router connection in AWS Direct Connect.

3. From the AWS side: Create a gateway.

Figure 25 – A created transit virtual interface (VIF) in the AWS account console.
Figure 26 – A Direct Connect gateway with our attached VIF.
Figure 27 – Confirming the gateway association of the virtual interface.
Figure 28 – The Transit Gateway is now available.

4. From the AWS side: Create and attach a VIF.

  • Specifically, a Transit VIF is used.
Figure 29 – Confirm the Transit VIF is attached to the gateway.

5. From the PacketFabric side: Configure BGP.

Figure 30 – Configuring BGP in PacketFabric Cloud Router.
Figure 31 – Making sure we’re not blocking any prefix exchange from our endpoints.
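On the AWS side, the state of the Direct Connect connection and the Transit VIF can be confirmed with the CLI; a sketch:

```shell
# Connection should reach "available" after acceptance
aws directconnect describe-connections \
  --query 'connections[].[connectionName,connectionState]'

# The Transit VIF should likewise reach "available"
aws directconnect describe-virtual-interfaces \
  --query 'virtualInterfaces[].[virtualInterfaceType,virtualInterfaceState]'
```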

Process overview

The basic steps to adding an Azure connection to a PacketFabric Cloud Router are as follows:

1. From the Microsoft side: Create an ExpressRoute circuit in the Azure Console.

Figure 32 – Creating an Azure ExpressRoute circuit in Azure.

2. From the PacketFabric side: Create a Cloud Router connection.

Figure 33 – Create a Cloud Router connection to the Azure service.

3. From both sides: Configure BGP.

Figure 34 – Configuring the BGP peering with Azure from Cloud Router side.
Figure 35 – Configuring BGP peering with PacketFabric Cloud Router from the Azure side.
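On the Azure side, the ExpressRoute peering and the routes it has learned can be inspected with the CLI; the circuit and resource group names are placeholders:

```shell
# List the configured peerings on the circuit
az network express-route peering list \
  --resource-group my-rg --circuit-name my-er-circuit

# Dump the routes learned on the private peering's primary path
az network express-route list-route-tables \
  --resource-group my-rg --name my-er-circuit \
  --path primary --peering-name AzurePrivatePeering
```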

4. From the Microsoft side: Create a virtual network gateway for ExpressRoute.

Figure 36 – Creating a virtual network gateway for ExpressRoute.

5. From the Microsoft side: Link a virtual network gateway to the ExpressRoute circuit.

Figure 37 – Linked virtual network gateway to ExpressRoute circuit.
  • Check routing.
Figure 38 – Checking the routes learned from the AWS peer in the PacketFabric Cloud Router console.
Figure 39 – Checking the routes learned from the Azure peer in the PacketFabric Cloud Router console.
Figure 40 – Checking the routes in the AWS VPC Transit Gateway.
Figure 42 – Checking received and advertised routes for our Azure peer on the PacketFabric Cloud Router console.

Test data pipeline

To generate a transactional workload, a simple ETL pipeline is used, built with the open-source ETL software StreamSets Datacollector.

The pipeline:

  • A Kafka Producer generates records and sends them to AWS MSK
  • A Kafka Consumer reads from AWS MSK and sends the records to Trash


  • Run one iperf3 instance per vCPU available on the server (AWS) and client (Azure) VMs:
# AWS server

sudo iperf3 -s -p 3000 &
sudo iperf3 -s -p 3001 &
sudo iperf3 -s -p 3002 &
sudo iperf3 -s -p 3003 &
sudo iperf3 -s -p 3004 &
sudo iperf3 -s -p 3005 &
sudo iperf3 -s -p 3006 &
sudo iperf3 -s -p 3007 &

# Azure client
# AWS_SERVER_IP is a placeholder for the AWS VM's private IP address
AWS_SERVER_IP="<aws-vm-private-ip>"

sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3000 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3001 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3002 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3003 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3004 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3005 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3006 -b 1000000000000 &
sudo iperf3 -c "$AWS_SERVER_IP" -i 1 -t 3600 -p 3007 -b 1000000000000 &
Figure 43 – Running the iperf3 background traffic generation in both Azure and AWS.
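Once the runs finish, each client prints a per-port summary, and the aggregate throughput is the sum across all eight streams. A minimal awk sketch; the per-port rates below are illustrative placeholder values, not measured results:

```shell
# Hypothetical per-port receive rates in Mbits/sec, one line per iperf3
# client, as read off each client's final summary (placeholder values)
printf '%s\n' 945 931 952 940 928 949 937 944 > per_port.txt

# Sum them to get the aggregate throughput across all eight streams
awk '{sum += $1} END {print sum " Mbits/sec"}' per_port.txt
```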

Pipeline software

  • From “Home”, select all 3 pipelines and click the “Play” icon to start the pipelines.
Figure 44 – Starting the Kafka pipelines.
  • Select a pipeline to see the Summary statistics.
Figure 45 – Displaying pipeline summary statistics.