Empty YAML documents interpreted as "Invalid input" #120
Thanks for the post @sheisnicola. I'll make sure empty YAML in a multi-YAML file doesn't cause it to fail.
Ok, I tried:

YAML

---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx2
        resources: {}
status: {}

Result

exit code:

[
{
"object": "Deployment/nginx.default",
"valid": true,
"message": "Passed with a score of 0 points",
"score": 0,
"scoring": {
"advise": [
{
"selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",
"points": 3
},
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",
"points": 3
},
{
"selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure against unknown threats",
"points": 1
},
{
"selector": "containers[] .resources .limits .cpu",
"reason": "Enforcing CPU limits prevents DOS via resource exhaustion",
"points": 1
},
{
"selector": "containers[] .resources .limits .memory",
"reason": "Enforcing memory limits prevents DOS via resource exhaustion",
"points": 1
},
{
"selector": "containers[] .resources .requests .cpu",
"reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster",
"points": 1
},
{
"selector": "containers[] .resources .requests .memory",
"reason": "Enforcing memory requests aids a fair balancing of resources across the cluster",
"points": 1
},
{
"selector": "containers[] .securityContext .capabilities .drop",
"reason": "Reducing kernel capabilities available to a container limits its attack surface",
"points": 1
},
{
"selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
"reason": "Drop all capabilities and add only those required to reduce syscall attack surface",
"points": 1
},
{
"selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
"reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",
"points": 1
},
{
"selector": "containers[] .securityContext .runAsNonRoot == true",
"reason": "Force the running image to run as a non-root user to ensure least privilege",
"points": 1
},
{
"selector": "containers[] .securityContext .runAsUser -gt 10000",
"reason": "Run as a high-UID user to avoid conflicts with the host's user table",
"points": 1
}
]
}
},
{
"object": "Unknown",
"valid": false,
"message": "This resource is invalid, Kubernetes kind not found",
"score": 0,
"scoring": {}
},
{
"object": "Deployment/nginx2.default",
"valid": true,
"message": "Passed with a score of 0 points",
"score": 0,
"scoring": {
"advise": [
{
"selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",
"points": 3
},
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",
"points": 3
},
{
"selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure against unknown threats",
"points": 1
},
{
"selector": "containers[] .resources .limits .cpu",
"reason": "Enforcing CPU limits prevents DOS via resource exhaustion",
"points": 1
},
{
"selector": "containers[] .resources .limits .memory",
"reason": "Enforcing memory limits prevents DOS via resource exhaustion",
"points": 1
},
{
"selector": "containers[] .resources .requests .cpu",
"reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster",
"points": 1
},
{
"selector": "containers[] .resources .requests .memory",
"reason": "Enforcing memory requests aids a fair balancing of resources across the cluster",
"points": 1
},
{
"selector": "containers[] .securityContext .capabilities .drop",
"reason": "Reducing kernel capabilities available to a container limits its attack surface",
"points": 1
},
{
"selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
"reason": "Drop all capabilities and add only those required to reduce syscall attack surface",
"points": 1
},
{
"selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
"reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",
"points": 1
},
{
"selector": "containers[] .securityContext .runAsNonRoot == true",
"reason": "Force the running image to run as a non-root user to ensure least privilege",
"points": 1
},
{
"selector": "containers[] .securityContext .runAsUser -gt 10000",
"reason": "Run as a high-UID user to avoid conflicts with the host's user table",
"points": 1
}
]
}
}
]

I tried it with the empty block at the top, with the empty block with no spaces, and with multiple empty blocks next to each other, and it was fine. Can you post your version? I can only recreate the …
Version

$ docker run --rm docker.io/kubesec/kubesec:v2 version
version unknown
git commit unknown
build date unknown
$ docker inspect docker.io/kubesec/kubesec:v2
[
{
"Id": "sha256:d1c3258f9857846e315411e2b703c115dd4d55a31c24b0db1effff9a55cb3df1",
"RepoTags": [
"kubesec/kubesec:v2"
],
"RepoDigests": [
"kubesec/kubesec@sha256:a0292096fd5739e13ee9dd63630bbebca8b425818056bc690bace572ec63b7d1"
],
"Parent": "",
"Comment": "",
"Created": "2019-11-07T13:44:34.437742673Z",
"Container": "681521b8bb015829561173e19d9a7031ca53a425cdecb6e946b21cddb5bf730d",
"ContainerConfig": {
"Hostname": "681521b8bb01",
"Domainname": "",
"User": "app",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"http\" \"8080\"]"
],
"ArgsEscaped": true,
"Image": "sha256:8a2525cecdf7c341ceee4aed59f48ec9ae8e92fbb4f8e9cd0b3ff9f9c8f32c15",
"Volumes": null,
"WorkingDir": "/home/app",
"Entrypoint": [
"./kubesec"
],
"OnBuild": null,
"Labels": {}
},
"DockerVersion": "18.09.3",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "app",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"http",
"8080"
],
"ArgsEscaped": true,
"Image": "sha256:8a2525cecdf7c341ceee4aed59f48ec9ae8e92fbb4f8e9cd0b3ff9f9c8f32c15",
"Volumes": null,
"WorkingDir": "/home/app",
"Entrypoint": [
"./kubesec"
],
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 119086993,
"VirtualSize": 119086993,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/0a6d149d0da7737fb1ff7daf7411e00480fdf6f6a9545b491cb8fed4c2e7d796/diff:/var/lib/docker/overlay2/15ba35d07de749bdd70d96536542f07f5f41c10afba8f11de05ef7794f71b784/diff:/var/lib/docker/overlay2/cda6e3cb06fe698946de9a514fcca17cff809c41cf6f86e2c9e96ed64c517d60/diff:/var/lib/docker/overlay2/31876c6cf6f2c8ca010c0f3bca704cb0aef4842d6e6a9bcec1e6457e7234afc4/diff",
"MergedDir": "/var/lib/docker/overlay2/7ecf3de7949e42740b63b99d950b0b0533c52f9803d42dd403712784dd26204e/merged",
"UpperDir": "/var/lib/docker/overlay2/7ecf3de7949e42740b63b99d950b0b0533c52f9803d42dd403712784dd26204e/diff",
"WorkDir": "/var/lib/docker/overlay2/7ecf3de7949e42740b63b99d950b0b0533c52f9803d42dd403712784dd26204e/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:d9ff549177a94a413c425ffe14ae1cc0aa254bc9c7df781add08e7d2fba25d27",
"sha256:b93a5807156e629d83386a5cf12753aeabaa31e5300d7e04a6a68f03d58a720a",
"sha256:b240df2433d1154eb4a01e03172d18a468a16d6699ec0b379fe7f10bafad2cab",
"sha256:5849d47f86f7f0684a42b77db7e3898571257f5386c4c7902f8b0411dbffb6c9",
"sha256:5ff92a441d9805f35d34f42df57400b06e0b5bb228bb2b0126ff454c425cde60"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]

Tweaking your example slightly yields the same results that I experience:

Example2

$ cat example2.yaml
---
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
---
---
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx2
        resources: {}
status: {}
$ cat example2.yaml | docker run --rm -i docker.io/kubesec/kubesec:v2 scan --debug /dev/stdin
Invalid input
$ echo $?
1

...or a similar result, but not quite as bad:

Example1

$ cat example1.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
---
---
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx2
        resources: {}
status: {}
$ cat example1.yaml | docker run --rm -i docker.io/kubesec/kubesec:v2 scan --debug /dev/stdin
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .resources .limits .memory (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed .spec .serviceAccountName (3 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed .metadata .annotations ."container.seccomp.security.alpha.kubernetes.io/pod" (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .resources .requests .memory (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .resources .limits .cpu (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .securityContext .capabilities .drop (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .resources .requests .cpu (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .securityContext .runAsUser -gt 10000 (1 points)
2020-10-13T11:29:21.712Z DEBUG ruler/ruleset.go:340 positive score rule failed .metadata .annotations ."container.apparmor.security.beta.kubernetes.io/nginx" (3 points)
Invalid input
2020-10-13T11:29:21.713Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .securityContext .capabilities .drop | index("ALL") (1 points)
2020-10-13T11:29:21.713Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .securityContext .runAsNonRoot == true (1 points)
2020-10-13T11:29:21.713Z DEBUG ruler/ruleset.go:340 positive score rule failed containers[] .securityContext .readOnlyRootFilesystem == true (1 points)
$ echo $?
1
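To narrow down which chunk trips the parser, one option (assuming GNU csplit for the `{*}` repeat pattern, and reusing example1.yaml from above) is to split the file at each separator and scan the pieces one at a time:

# Split before every "---" line; -z suppresses zero-length pieces (output lands in xx00, xx01, ...)
$ csplit -z example1.yaml '/^---$/' '{*}'
# Scan each piece on its own; pieces holding only a bare "---" would be the empty-document case
$ for f in xx*; do docker run --rm -i docker.io/kubesec/kubesec:v2 scan /dev/stdin < "$f" > /dev/null 2>&1 || echo "$f failed"; done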
Thanks! Playing with it I ended up with these 3 cases:

Works

---
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

Fails

---
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

Works again...

---
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

Having an example to test things against is great.
Resolves #120. The splitting leads to some documents being completely empty strings; this replaces them with the correct value. Just a side effect of the current required splitting solution.

Resolves #120. The splitting leads to some documents being completely empty strings, just a side effect of the currently required splitting solution. This will ignore all empty or header-only YAML.
Let me know if 2.8.0 fixed your issue; it was fine with the test cases I tried.
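A quick way to check (assuming the v2 tag on Docker Hub tracks the 2.8.0 release, and reusing example2.yaml from above):

$ docker pull docker.io/kubesec/kubesec:v2
$ cat example2.yaml | docker run --rm -i docker.io/kubesec/kubesec:v2 scan /dev/stdin
$ echo $?   # should now be 0, with two Deployment results and no "Invalid input"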
We use Helm to render K8s configuration, which appears to emit empty YAML documents in the final output (depending upon conditionals in our chart templates). For example, we might see something like this in the output:
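---
# (illustrative fragment: empty documents left behind by templates whose conditions evaluated false)
---
---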
kubectl is perfectly happy to consume this configuration, but kubesec gets in a fluster. We've had to resort to post-processing the YAML before piping it into kubesec, which seems a little heavy-handed.
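For reference, a minimal sketch of the kind of post-processing we mean (assuming GNU awk for the multi-character record separator; mychart is a placeholder chart name):

$ helm template mychart \
    | gawk 'BEGIN { RS = "---\n" } NF { printf "---\n%s", $0 }' \
    | docker run --rm -i docker.io/kubesec/kubesec:v2 scan /dev/stdin

NF is non-zero only when a record contains something other than whitespace, so whitespace-only documents are dropped before kubesec ever sees them.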