This repository has been archived by the owner on Mar 20, 2024. It is now read-only.

Pod Requiring Privilege in order to run cndpfwd application in Kubernetes. #93

Open
manish22m2110 opened this issue Feb 18, 2024 · 3 comments

Comments

@manish22m2110

I was trying out the afxdp-plugin with CNDP to deploy a sample application in Kubernetes, and I hit the following error inside the pod:

[screenshot: error output from inside the pod]

When I add NET_ADMIN and SYS_ADMIN it works without any issue, but I thought no privileges were required to run the pod. Can you please help me out here?

These are the YAML files I used.

POD.YAML

apiVersion: v1
kind: Pod
metadata:
  name: cndp-0-0
  annotations:
    k8s.v1.cni.cncf.io/networks: cndp-cni-afxdp0
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  - name: unixsock
    hostPath:
      path: /tmp/afxdp_dp/
  containers:
    - name: cndp-0
      command: 
      - sleep
      - inf
      image: cndp
      imagePullPolicy: Never
      securityContext:
        capabilities:
          add:
            - NET_RAW
            - IPC_LOCK
            - NET_ADMIN
            - SYS_ADMIN
      ports:
      - containerPort: 8094
        hostPort: 8094
      resources:
        requests:
          afxdp/pool1: '1'
        limits:
          afxdp/pool1: '1'
          hugepages-2Mi: 512Mi
          memory: 2Gi
      volumeMounts:
        - name: shared-data
          mountPath: /var/run/cndp/
        - name: unixsock
          mountPath: /tmp/afxdp_dp/

NAD.YAML

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: cndp-cni-afxdp0
  annotations:
    k8s.v1.cni.cncf.io/resourceName: afxdp/pool1
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "afxdp",
      "mode": "primary",
      "queues": "1",
      "logLevel": "debug",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.1.1"
      }
    }'

DAEMONSET.YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: afxdp-dp-config
  namespace: kube-system
data:
  config.json: |
    {
      "clusterType": "physical",
      "mode": "primary",
      "logLevel": "debug",
      "pools":[
          {
             "name":"pool1",
             "mode":"primary",
              "udsTimeout":300,
             "drivers":[
                {
                   "name":"i40e"
                },
                {
                   "name":"ice"
                }
             ]
          }
       ]
    }
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: afxdp-device-plugin
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-afxdp-device-plugin
  namespace: kube-system
  labels:
    tier: node
    app: afxdp
spec:
  selector:
    matchLabels:
      name: afxdp-device-plugin
  template:
    metadata:
      labels:
        name: afxdp-device-plugin
        tier: node
        app: afxdp
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      serviceAccountName: afxdp-device-plugin
      containers:
        - name: kube-afxdp
          image: intel/afxdp-plugins-for-kubernetes:latest
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - all
              add:
                - SYS_ADMIN
                - NET_ADMIN
          resources:
            requests:
              cpu: "250m"
              memory: "40Mi"
            limits:
              cpu: "1"
              memory: "200Mi"
          volumeMounts:
            - name: unixsock
              mountPath: /tmp/afxdp_dp/
            - name: bpfmappinning
              mountPath: /var/run/afxdp_dp/
            - name: devicesock
              mountPath: /var/lib/kubelet/device-plugins/
            - name: resources
              mountPath: /var/lib/kubelet/pod-resources/
            - name: config-volume
              mountPath: /afxdp/config
            - name: log
              mountPath: /var/log/afxdp-k8s-plugins/
            - name: cnibin
              mountPath: /opt/cni/bin/
      volumes:
        - name: unixsock
          hostPath:
            path: /tmp/afxdp_dp/
        - name: bpfmappinning
          hostPath:
            path: /var/run/afxdp_dp/
        - name: devicesock
          hostPath:
            path: /var/lib/kubelet/device-plugins/
        - name: resources
          hostPath:
            path: /var/lib/kubelet/pod-resources/
        - name: config-volume
          configMap:
            name: afxdp-dp-config
            items:
              - key: config.json
                path: config.json
        - name: log
          hostPath:
            path: /var/log/afxdp-k8s-plugins/
        - name: cnibin
          hostPath:
            path: /opt/cni/bin/
@maryamtahhan
Contributor

What is the kernel on the host? Prior to kernel 5.19, all BPF syscalls required CAP_BPF, which is needed to access maps shared between the BPF program and the userspace program. In kernel 5.19, a change went in so that CAP_BPF is only required for map creation (BPF_MAP_CREATE) and program loading (BPF_PROG_LOAD).

What is the value of kernel.unprivileged_bpf_disabled on your OS?

Is this a KinD cluster or something else?
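The two checks asked for above can be run directly on the node with standard tools (`uname` and `sysctl`/procfs; no assumptions beyond a Linux host):

```shell
# What kernel is the host running? (CAP_BPF handling changed in 5.19)
uname -r

# Is unprivileged BPF disabled? 0 = allowed, 1 = disabled until reboot,
# 2 = disabled but changeable at runtime
sysctl -n kernel.unprivileged_bpf_disabled 2>/dev/null \
  || cat /proc/sys/kernel/unprivileged_bpf_disabled
```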

@manish22m2110
Author

manish22m2110 commented Feb 19, 2024

Okay, so I am using kernel version 5.15.0-25-generic. The value of kernel.unprivileged_bpf_disabled is 2. I guess that's why I was facing the above error. I will recheck after upgrading to kernel version 5.19.

update: I tried setting the unprivileged_bpf_disabled flag and it works.
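For anyone hitting the same thing: the flag the reporter toggled can be persisted across reboots with a sysctl drop-in (the path and filename below are illustrative). Note that a value of 2 can be lowered at runtime, whereas 1 is locked until reboot.

```
# /etc/sysctl.d/90-unprivileged-bpf.conf (illustrative filename)
# 0 = allow unprivileged bpf() syscalls
kernel.unprivileged_bpf_disabled = 0
```

Apply it immediately with `sudo sysctl -w kernel.unprivileged_bpf_disabled=0`, or reload all drop-ins with `sudo sysctl --system`.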

Thank you.

@maryamtahhan
Contributor

Yeah, so for kernel 5.15.0-25-generic you would need CAP_BPF.
Please do try the 5.19 kernel and let me know if you still have issues.
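On a pre-5.19 kernel, an alternative to granting the broad SYS_ADMIN capability would be to add just CAP_BPF to the pod's securityContext (Kubernetes capability names drop the CAP_ prefix). This is a sketch based on the pod spec above; it assumes the container runtime on the node supports CAP_BPF:

```yaml
      securityContext:
        capabilities:
          add:
            - NET_RAW
            - IPC_LOCK
            - BPF   # pre-5.19 kernels require this for all bpf() syscalls
```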
