Add support for "no render" examples, refactor the kafkametrics receiver example (#886)
Showing 22 changed files with 273 additions and 227 deletions.
@@ -0,0 +1,65 @@
# Example of Chart Configuration

## Kafka Metrics Receiver
The [Kafka metrics receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/kafkametricsreceiver)
collects Kafka metrics (brokers, topics, partitions, consumer groups) from a Kafka server and converts them into the OTLP format.

## How to deploy Zookeeper and Kafka with collector monitoring

### 1. Deploying Zookeeper and Kafka

Use the following command to deploy Kafka and Zookeeper to your Kubernetes cluster:

```bash
curl https://raw.githubusercontent.com/signalfx/splunk-otel-collector-chart/main/examples/add-kafkametrics-receiver/kafka.yaml | kubectl apply -f -
```
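
Once the manifests are applied, a quick way to confirm the deployments came up is to wait for their rollouts to finish. This sketch assumes the objects land in the `kafka` namespace, matching the status commands at the end of this example:

```bash
# Wait for the Zookeeper and Kafka deployments to report ready.
# The "kafka" namespace is assumed here, as used by the kubectl
# commands in the "Checking the status and logs" section below.
kubectl rollout status deployment/zookeeper-deployment -n kafka
kubectl rollout status deployment/kafka-deployment -n kafka
```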

### 2. Configuring the Kafka Metrics Receiver

Update your values.yaml file with the following configuration to set up the kafkametrics receiver:

```yaml
agent:
  config:
    receivers:
      kafkametrics:
        brokers: kafka-service.kafka.svc.cluster.local:9092
        protocol_version: 2.0.0
        scrapers:
          - brokers
          - topics
          - consumers
    service:
      pipelines:
        metrics:
          receivers: [ kafkametrics ]
```
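
To double-check that the broker address used above is reachable from inside the cluster, a throwaway pod can run the same `kafka-topics` listing that the producer job in kafka.yaml uses. The pod name below is an illustrative choice, not part of the example:

```bash
# One-off check that the configured broker answers from inside the cluster.
# Pod name is arbitrary; the image and command are the same ones used by
# the producer job in kafka.yaml.
kubectl run kafka-broker-check --rm -it --restart=Never \
  --image=confluentinc/cp-kafka:latest --command -- \
  kafka-topics --list --bootstrap-server kafka-service.kafka.svc.cluster.local:9092
```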

### 3. Installing the Splunk OTel Collector Chart

With the configuration in place, deploy the Splunk OTel Collector using Helm:

```bash
helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
```
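
If the `splunk-otel-collector-chart` repository has not been added to Helm yet, it needs to be registered first. The repository URL below is the chart's publicly documented location, noted here as an assumption rather than something defined in this example:

```bash
# Register the chart repository and refresh the local index.
# URL is the publicly documented home of splunk-otel-collector-chart.
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
```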

### 4. Check out the results at [Splunk Observability](https://app.signalfx.com/#/metrics)

You can now view Kafka metrics in Splunk Observability.
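
Before opening the UI, it can help to confirm the agent is actually scraping Kafka. A rough sketch is to grep the agent pod logs for kafkametrics activity or errors; the label selector below is an assumption about how the chart labels its agent pods, so adjust it to your installation:

```bash
# Look for kafkametrics scraper activity or errors in the agent logs.
# The label selector is an assumption about the chart's pod labels.
kubectl logs -l app=splunk-otel-collector,component=otel-collector-agent --tail=200 | grep -i kafka
```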

## Checking the status and logs of the Kafka demo

Here are `kubectl` commands for checking the status and logs of the provided Kubernetes objects:

```bash
kubectl get deployment zookeeper-deployment -n kafka
kubectl logs -l app=zookeeper -n kafka
kubectl get deployment kafka-deployment -n kafka
kubectl logs -l app=kafka -n kafka
kubectl get service zookeeper-service -n kafka
kubectl get service kafka-service -n kafka
kubectl get job kafka-producer-job -n kafka
kubectl logs -l job-name=kafka-producer-job -n kafka
kubectl get job kafka-consumer-job -n kafka
kubectl logs -l job-name=kafka-consumer-job -n kafka
```
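
If any of the objects above are not healthy, two generic follow-ups, using the same labels and namespace as the commands above, are to describe the failing pods and review recent events:

```bash
# Inspect pod-level details and recent cluster events when something is not running.
kubectl describe pod -l app=kafka -n kafka
kubectl get events -n kafka --sort-by=.lastTimestamp
```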
@@ -0,0 +1,136 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-deployment
  labels:
    app: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: confluentinc/cp-zookeeper:latest
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-deployment
  labels:
    app: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: broker
          image: confluentinc/cp-kafka:latest
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper-service:2181'
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://:29092,PLAINTEXT_INTERNAL://kafka-service:9092

---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-service
spec:
  selector:
    app: zookeeper
  ports:
    - protocol: TCP
      port: 2181
      targetPort: 2181

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    app: kafka
  ports:
    - protocol: TCP
      port: 9092
      targetPort: 9092

---
apiVersion: batch/v1
kind: Job
metadata:
  name: kafka-producer-job
spec:
  template:
    spec:
      containers:
        - name: kafka-producer
          image: confluentinc/cp-kafka:latest
          command: ["/bin/sh", "-c"]
          args:
            - >
              while true; do
                # Check service availability
                if nc -zv kafka-service.kafka.svc.cluster.local 9092; then
                  # Create the topic if it does not already exist
                  kafka-topics --list --bootstrap-server kafka-service.kafka.svc.cluster.local:9092 | grep demo-topic-name || kafka-topics --create --bootstrap-server kafka-service.kafka.svc.cluster.local:9092 --replication-factor 1 --partitions 1 --topic demo-topic-name;
                  # Produce a message (assuming the topic is available)
                  echo 'This is a demo Kafka Message that generates every 5s' | kafka-console-producer --broker-list kafka-service.kafka.svc.cluster.local:9092 --topic demo-topic-name;
                else
                  echo 'Waiting for Kafka service availability...';
                fi
                sleep 5;
              done
      restartPolicy: OnFailure

---
apiVersion: batch/v1
kind: Job
metadata:
  name: kafka-consumer-job
spec:
  template:
    spec:
      containers:
        - name: kafka-consumer
          image: confluentinc/cp-kafka:latest
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Wait until the topic is ready to be consumed
              until kafka-topics --list --bootstrap-server kafka-service.kafka.svc.cluster.local:9092 | grep demo-topic-name; do
                echo 'Waiting for the topic to be ready...';
                sleep 5;
              done
              # Start consuming messages indefinitely
              kafka-console-consumer --bootstrap-server kafka-service.kafka.svc.cluster.local:9092 --topic demo-topic-name
      restartPolicy: OnFailure