Initial commit

2025-12-09 19:34:54 +11:00
commit a4d98eea50
894 changed files with 131646 additions and 0 deletions

charts/alloy/.helmignore

@@ -0,0 +1,29 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Don't package templates.
README.md.gotmpl
# Don't package the tests used for CI.
/tests/

charts/alloy/CHANGELOG.md

@@ -0,0 +1,245 @@
# Changelog

> _Contributors should read our [contributors guide][] for instructions on how
> to update the changelog._

This document contains a historical list of changes between releases. Only
changes that impact end-user behavior are listed; changes to documentation or
internal API changes are not present.

Unreleased
----------

1.0.1 (2025-04-10)
----------

### Enhancements

- Update to Grafana Alloy v1.8.1. (@dehaansa)
- Update default configreloader resources to match what is set in the prometheus-operator project (@dehaansa)
- Add Vertical Pod Autoscaler support (@QuentinBisson)

1.0.0 (2025-04-09)
----------

### Enhancements

- Update version to `1.0.0`. This Helm chart is now covered by the [backward-compatibility](https://grafana.com/docs/alloy/latest/introduction/backward-compatibility/) policy.
- Update to Grafana Alloy v1.8.0. (@thampiotr)

0.12.6 (2025-04-03)
----------

### Breaking changes

- `configReloader.customArgs` is likely to break, as the Prometheus-maintained config reloader does not accept the same arguments as the previous image (@dehaansa)

### Enhancements

- Change configReloader from jimmidyson/configmap-reload to prometheus-operator/prometheus-config-reloader (@dehaansa)
- Update to Grafana Alloy v1.7.5. (@kimxogus)
- Add `checksum/config` pod annotation (@kimxogus)

### Other changes

- Fix typo in values.yaml documentation (@petewall)

0.12.5 (2025-03-13)
----------

### Enhancements

- Update to Grafana Alloy v1.7.4. (@dehaansa)

0.12.4 (2025-03-13)
----------

### Enhancements

- Update to Grafana Alloy v1.7.3. (@dehaansa)

0.12.3 (2025-03-10)
----------

### Enhancements

- Add support for adding livenessProbe to agent container (@slimes28)

0.12.2 (2025-03-10)
----------

### Bug Fixes

- Set resource namespace correctly (@shinebayar-g)

### Enhancements

- Add a new `automountServiceAccountToken` configuration value for `serviceAccount`. (@ptodev)
- Update to Grafana Alloy v1.7.2. (@thampiotr)

0.12.1 (2025-02-26)
----------

### Enhancements

- Update to Grafana Alloy v1.7.1. (@thampiotr)

0.12.0 (2025-02-24)
----------

### Enhancements

- Update to Grafana Alloy v1.7.0. (@thampiotr)

0.11.0 (2025-01-23)
----------

### Enhancements

- Update jimmidyson/configmap-reload to 0.14.0. (@petewall)
- Add the ability to deploy extra manifest files. (@dbluxo)

0.10.1 (2024-12-03)
----------

### Enhancements

- Update to Grafana Alloy v1.5.1. (@ptodev)

0.10.0 (2024-11-13)
----------

### Enhancements

- Add support for adding hostAliases to the Helm chart. (@duncan485)
- Update to Grafana Alloy v1.5.0. (@thampiotr)

0.9.2 (2024-10-18)
------------------

### Enhancements

- Update to Grafana Alloy v1.4.3. (@ptodev)

0.9.1 (2024-10-04)
------------------

### Enhancements

- Update to Grafana Alloy v1.4.2. (@ptodev)

0.9.0 (2024-10-02)
------------------

### Enhancements

- Add lifecycle hook to the Helm chart. (@etiennep)
- Add terminationGracePeriodSeconds setting to the Helm chart. (@etiennep)

0.8.1 (2024-09-26)
------------------

### Enhancements

- Update to Grafana Alloy v1.4.1. (@ptodev)

0.8.0 (2024-09-25)
------------------

### Enhancements

- Update to Grafana Alloy v1.4.0. (@ptodev)

0.7.0 (2024-08-26)
------------------

### Enhancements

- Add PodDisruptionBudget to the Helm chart. (@itspouya)

0.6.1 (2024-08-23)
----------

### Enhancements

- Add the ability to set --cluster.name in the Helm chart with alloy.clustering.name. (@petewall)
- Add the ability to set appProtocol in extraPorts to help OpenShift users to expose gRPC. (@clementduveau)

### Other changes

- Update helm chart to use v1.3.1.

0.6.0 (2024-08-05)
------------------

### Other changes

- Update helm chart to use v1.3.0.
- Set `publishNotReadyAddresses` to `true` in the service spec for clustering to fix a bug where peers could not join on startup. (@wildum)

0.5.1 (2024-07-11)
------------------

### Other changes

- Update helm chart to use v1.2.1.

0.5.0 (2024-07-08)
------------------

### Enhancements

- Only utilize spec.internalTrafficPolicy in the Service if deploying to Kubernetes 1.26 or later. (@petewall)

0.4.0 (2024-06-26)
------------------

### Enhancements

- Update to Grafana Alloy v1.2.0. (@ptodev)

0.3.2 (2024-05-30)
------------------

### Bugfixes

- Update to Grafana Alloy v1.1.1. (@rfratto)

0.3.1 (2024-05-22)
------------------

### Bugfixes

- Fix clustering on instances running within an Istio mesh by allowing the name of the clustering port to be changed

0.3.0 (2024-05-14)
------------------

### Enhancements

- Update to Grafana Alloy v1.1.0. (@rfratto)

0.2.0 (2024-05-08)
------------------

### Other changes

- Support all [Kubernetes recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/) (@nlamirault)

0.1.1 (2024-04-11)
------------------

### Other changes

- Add missing Alloy icon to Chart.yaml. (@rfratto)

0.1.0 (2024-04-09)
------------------

### Features

- Introduce a Grafana Alloy Helm chart. The Grafana Alloy Helm chart is
  backward compatible with the values.yaml from the `grafana-agent` Helm
  chart. Review the Helm chart README for a description of how to migrate.
  (@rfratto)

charts/alloy/Chart.lock

@@ -0,0 +1,6 @@
dependencies:
- name: crds
repository: ""
version: 0.0.0
digest: sha256:1980431a3d80822fca2e67e9cf16ff7a7f8d1dc87deb9e44d50e85e3e8e33a81
generated: "2025-04-11T09:30:48.378858526Z"

charts/alloy/Chart.yaml

@@ -0,0 +1,12 @@
apiVersion: v2
appVersion: v1.8.1
dependencies:
- condition: crds.create
name: crds
repository: ""
version: 0.0.0
description: Grafana Alloy
icon: https://raw.githubusercontent.com/grafana/alloy/main/docs/sources/assets/alloy_icon_orange.svg
name: alloy
type: application
version: 1.0.1

charts/alloy/README.md

@@ -0,0 +1,342 @@
# Grafana Alloy Helm chart
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 1.0.1](https://img.shields.io/badge/Version-1.0.1-informational?style=flat-square) ![AppVersion: v1.8.1](https://img.shields.io/badge/AppVersion-v1.8.1-informational?style=flat-square)
Helm chart for deploying [Grafana Alloy][] to Kubernetes.

[Grafana Alloy]: https://grafana.com/docs/alloy/latest/
## Usage
### Setup Grafana chart repository
```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
### Install chart
To install the chart with the release name my-release:
`helm install my-release grafana/alloy`
This chart installs one instance of Grafana Alloy into your Kubernetes cluster
using a specific Kubernetes controller. By default, DaemonSet is used. The
`controller.type` value can be used to change the controller to either a
StatefulSet or Deployment.
Creating multiple installations of the Helm chart with different controllers is
useful if just using the default DaemonSet isn't sufficient.
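For example, a second installation that runs a three-replica clustered StatefulSet alongside the default DaemonSet might use an override file like the following (a minimal sketch mirroring the chart's own test values; the file name is illustrative):
```yaml
# values-statefulset.yaml
controller:
  type: statefulset  # instead of the default daemonset
  replicas: 3
alloy:
  clustering:
    enabled: true    # let the replicas distribute work between themselves
```
Install it as a separate release, for example `helm install my-cluster grafana/alloy -f values-statefulset.yaml`.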
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| alloy.clustering.enabled | bool | `false` | Deploy Alloy in a cluster to allow for load distribution. |
| alloy.clustering.name | string | `""` | Name for the Alloy cluster. Used for differentiating between clusters. |
| alloy.clustering.portName | string | `"http"` | Name for the port used for clustering, useful if running inside an Istio Mesh |
| alloy.configMap.content | string | `""` | Content to assign to the new ConfigMap. This is passed into `tpl` allowing for templating from values. |
| alloy.configMap.create | bool | `true` | Create a new ConfigMap for the config file. |
| alloy.configMap.key | string | `nil` | Key in ConfigMap to get config from. |
| alloy.configMap.name | string | `nil` | Name of existing ConfigMap to use. Used when create is false. |
| alloy.enableReporting | bool | `true` | Enables sending Grafana Labs anonymous usage stats to help improve Grafana Alloy. |
| alloy.envFrom | list | `[]` | Maps all the keys on a ConfigMap or Secret as environment variables. https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core |
| alloy.extraArgs | list | `[]` | Extra args to pass to `alloy run`: https://grafana.com/docs/alloy/latest/reference/cli/run/ |
| alloy.extraEnv | list | `[]` | Extra environment variables to pass to the Alloy container. |
| alloy.extraPorts | list | `[]` | Extra ports to expose on the Alloy container. |
| alloy.hostAliases | list | `[]` | Host aliases to add to the Alloy container. |
| alloy.lifecycle | object | `{}` | Set lifecycle hooks for the Grafana Alloy container. |
| alloy.listenAddr | string | `"0.0.0.0"` | Address to listen for traffic on. 0.0.0.0 exposes the UI to other containers. |
| alloy.listenPort | int | `12345` | Port to listen for traffic on. |
| alloy.listenScheme | string | `"HTTP"` | Scheme used for readiness probes. If enabling TLS in your configs, set to "HTTPS". |
| alloy.livenessProbe | object | `{}` | Set livenessProbe for the Grafana Alloy container. |
| alloy.mounts.dockercontainers | bool | `false` | Mount /var/lib/docker/containers from the host into the container for log collection. |
| alloy.mounts.extra | list | `[]` | Extra volume mounts to add into the Grafana Alloy container. Does not affect the watch container. |
| alloy.mounts.varlog | bool | `false` | Mount /var/log from the host into the container for log collection. |
| alloy.resources | object | `{}` | Resource requests and limits to apply to the Grafana Alloy container. |
| alloy.securityContext | object | `{}` | Security context to apply to the Grafana Alloy container. |
| alloy.stabilityLevel | string | `"generally-available"` | Minimum stability level of components and behavior to enable. Must be one of "experimental", "public-preview", or "generally-available". |
| alloy.storagePath | string | `"/tmp/alloy"` | Path to where Grafana Alloy stores data (for example, the Write-Ahead Log). By default, data is lost between reboots. |
| alloy.uiPathPrefix | string | `"/"` | Base path where the UI is exposed. |
| configReloader.customArgs | list | `[]` | Override the args passed to the container. |
| configReloader.enabled | bool | `true` | Enables automatically reloading when the Alloy config changes. |
| configReloader.image.digest | string | `""` | SHA256 digest of image to use for config reloading (either in format "sha256:XYZ" or "XYZ"). When set, will override `configReloader.image.tag` |
| configReloader.image.registry | string | `"quay.io"` | Config reloader image registry (defaults to quay.io) |
| configReloader.image.repository | string | `"prometheus-operator/prometheus-config-reloader"` | Repository to get config reloader image from. |
| configReloader.image.tag | string | `"v0.81.0"` | Tag of image to use for config reloading. |
| configReloader.resources | object | `{"requests":{"cpu":"10m","memory":"50Mi"}}` | Resource requests and limits to apply to the config reloader container. |
| configReloader.securityContext | object | `{}` | Security context to apply to the Grafana configReloader container. |
| controller.affinity | object | `{}` | Affinity configuration for pods. |
| controller.autoscaling.enabled | bool | `false` | Creates a HorizontalPodAutoscaler for controller type deployment. Deprecated: Please use controller.autoscaling.horizontal instead |
| controller.autoscaling.horizontal | object | `{"enabled":false,"maxReplicas":5,"minReplicas":1,"scaleDown":{"policies":[],"selectPolicy":"Max","stabilizationWindowSeconds":300},"scaleUp":{"policies":[],"selectPolicy":"Max","stabilizationWindowSeconds":0},"targetCPUUtilizationPercentage":0,"targetMemoryUtilizationPercentage":80}` | Configures the Horizontal Pod Autoscaler for the controller. |
| controller.autoscaling.horizontal.enabled | bool | `false` | Enables the Horizontal Pod Autoscaler for the controller. |
| controller.autoscaling.horizontal.maxReplicas | int | `5` | The upper limit for the number of replicas to which the autoscaler can scale up. |
| controller.autoscaling.horizontal.minReplicas | int | `1` | The lower limit for the number of replicas to which the autoscaler can scale down. |
| controller.autoscaling.horizontal.scaleDown.policies | list | `[]` | List of policies to determine the scale-down behavior. |
| controller.autoscaling.horizontal.scaleDown.selectPolicy | string | `"Max"` | Determines which of the provided scaling-down policies to apply if multiple are specified. |
| controller.autoscaling.horizontal.scaleDown.stabilizationWindowSeconds | int | `300` | The duration that the autoscaling mechanism should look back on to make decisions about scaling down. |
| controller.autoscaling.horizontal.scaleUp.policies | list | `[]` | List of policies to determine the scale-up behavior. |
| controller.autoscaling.horizontal.scaleUp.selectPolicy | string | `"Max"` | Determines which of the provided scaling-up policies to apply if multiple are specified. |
| controller.autoscaling.horizontal.scaleUp.stabilizationWindowSeconds | int | `0` | The duration that the autoscaling mechanism should look back on to make decisions about scaling up. |
| controller.autoscaling.horizontal.targetCPUUtilizationPercentage | int | `0` | Average CPU utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetCPUUtilizationPercentage` to 0 will disable CPU scaling. |
| controller.autoscaling.horizontal.targetMemoryUtilizationPercentage | int | `80` | Average Memory utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetMemoryUtilizationPercentage` to 0 will disable Memory scaling. |
| controller.autoscaling.maxReplicas | int | `5` | The upper limit for the number of replicas to which the autoscaler can scale up. |
| controller.autoscaling.minReplicas | int | `1` | The lower limit for the number of replicas to which the autoscaler can scale down. |
| controller.autoscaling.scaleDown.policies | list | `[]` | List of policies to determine the scale-down behavior. |
| controller.autoscaling.scaleDown.selectPolicy | string | `"Max"` | Determines which of the provided scaling-down policies to apply if multiple are specified. |
| controller.autoscaling.scaleDown.stabilizationWindowSeconds | int | `300` | The duration that the autoscaling mechanism should look back on to make decisions about scaling down. |
| controller.autoscaling.scaleUp.policies | list | `[]` | List of policies to determine the scale-up behavior. |
| controller.autoscaling.scaleUp.selectPolicy | string | `"Max"` | Determines which of the provided scaling-up policies to apply if multiple are specified. |
| controller.autoscaling.scaleUp.stabilizationWindowSeconds | int | `0` | The duration that the autoscaling mechanism should look back on to make decisions about scaling up. |
| controller.autoscaling.targetCPUUtilizationPercentage | int | `0` | Average CPU utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetCPUUtilizationPercentage` to 0 will disable CPU scaling. |
| controller.autoscaling.targetMemoryUtilizationPercentage | int | `80` | Average Memory utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetMemoryUtilizationPercentage` to 0 will disable Memory scaling. |
| controller.autoscaling.vertical | object | `{"enabled":false,"recommenders":[],"resourcePolicy":{"containerPolicies":[{"containerName":"alloy","controlledResources":["cpu","memory"],"controlledValues":"RequestsAndLimits","maxAllowed":{},"minAllowed":{}}]},"updatePolicy":null}` | Configures the Vertical Pod Autoscaler for the controller. |
| controller.autoscaling.vertical.enabled | bool | `false` | Enables the Vertical Pod Autoscaler for the controller. |
| controller.autoscaling.vertical.recommenders | list | `[]` | List of recommenders to use for the Vertical Pod Autoscaler. Recommenders are responsible for generating recommendations for the object. The list should be empty (then the default recommender will generate the recommendation) or contain exactly one recommender. |
| controller.autoscaling.vertical.resourcePolicy | object | `{"containerPolicies":[{"containerName":"alloy","controlledResources":["cpu","memory"],"controlledValues":"RequestsAndLimits","maxAllowed":{},"minAllowed":{}}]}` | Configures the resource policy for the Vertical Pod Autoscaler. |
| controller.autoscaling.vertical.resourcePolicy.containerPolicies | list | `[{"containerName":"alloy","controlledResources":["cpu","memory"],"controlledValues":"RequestsAndLimits","maxAllowed":{},"minAllowed":{}}]` | Configures the container policies for the Vertical Pod Autoscaler. |
| controller.autoscaling.vertical.resourcePolicy.containerPolicies[0].controlledResources | list | `["cpu","memory"]` | The controlled resources for the Vertical Pod Autoscaler. |
| controller.autoscaling.vertical.resourcePolicy.containerPolicies[0].controlledValues | string | `"RequestsAndLimits"` | The controlled values for the Vertical Pod Autoscaler. Needs to be either RequestsOnly or RequestsAndLimits. |
| controller.autoscaling.vertical.resourcePolicy.containerPolicies[0].maxAllowed | object | `{}` | The maximum allowed values for the pods. |
| controller.autoscaling.vertical.resourcePolicy.containerPolicies[0].minAllowed | object | `{}` | The minimum allowed values for the pods. |
| controller.autoscaling.vertical.updatePolicy | string | `nil` | Configures the update policy for the Vertical Pod Autoscaler. |
| controller.dnsPolicy | string | `"ClusterFirst"` | Configures the DNS policy for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
| controller.enableStatefulSetAutoDeletePVC | bool | `false` | Whether to enable automatic deletion of stale PVCs due to a scale down operation, when controller.type is 'statefulset'. |
| controller.extraAnnotations | object | `{}` | Annotations to add to controller. |
| controller.extraContainers | list | `[]` | Additional containers to run alongside the Alloy container and initContainers. |
| controller.hostNetwork | bool | `false` | Configures Pods to use the host network. When set to true, the ports that will be used must be specified. |
| controller.hostPID | bool | `false` | Configures Pods to use the host PID namespace. |
| controller.initContainers | list | `[]` | |
| controller.nodeSelector | object | `{}` | nodeSelector to apply to Grafana Alloy pods. |
| controller.parallelRollout | bool | `true` | Whether to deploy pods in parallel. Only used when controller.type is 'statefulset'. |
| controller.podAnnotations | object | `{}` | Extra pod annotations to add. |
| controller.podDisruptionBudget | object | `{"enabled":false,"maxUnavailable":null,"minAvailable":null}` | PodDisruptionBudget configuration. |
| controller.podDisruptionBudget.enabled | bool | `false` | Whether to create a PodDisruptionBudget for the controller. |
| controller.podDisruptionBudget.maxUnavailable | string | `nil` | Maximum number of pods that can be unavailable during a disruption. Note: Only one of minAvailable or maxUnavailable should be set. |
| controller.podDisruptionBudget.minAvailable | string | `nil` | Minimum number of pods that must be available during a disruption. Note: Only one of minAvailable or maxUnavailable should be set. |
| controller.podLabels | object | `{}` | Extra pod labels to add. |
| controller.priorityClassName | string | `""` | priorityClassName to apply to Grafana Alloy pods. |
| controller.replicas | int | `1` | Number of pods to deploy. Ignored when controller.type is 'daemonset'. |
| controller.terminationGracePeriodSeconds | string | `nil` | Termination grace period in seconds for the Grafana Alloy pods. The default value used by Kubernetes if unspecified is 30 seconds. |
| controller.tolerations | list | `[]` | Tolerations to apply to Grafana Alloy pods. |
| controller.topologySpreadConstraints | list | `[]` | Topology Spread Constraints to apply to Grafana Alloy pods. |
| controller.type | string | `"daemonset"` | Type of controller to use for deploying Grafana Alloy in the cluster. Must be one of 'daemonset', 'deployment', or 'statefulset'. |
| controller.updateStrategy | object | `{}` | Update strategy for updating deployed Pods. |
| controller.volumeClaimTemplates | list | `[]` | volumeClaimTemplates to add when controller.type is 'statefulset'. |
| controller.volumes.extra | list | `[]` | Extra volumes to add to the Grafana Alloy pod. |
| crds.create | bool | `true` | Whether to install CRDs for monitoring. |
| extraObjects | list | `[]` | Extra k8s manifests to deploy |
| fullnameOverride | string | `nil` | Overrides the chart's computed fullname. Used to change the full prefix of resource names. |
| global.image.pullSecrets | list | `[]` | Optional set of global image pull secrets. |
| global.image.registry | string | `""` | Global image registry to use if it needs to be overridden for some specific use cases (e.g. local registries, custom images, ...) |
| global.podSecurityContext | object | `{}` | Security context to apply to the Grafana Alloy pod. |
| image.digest | string | `nil` | Grafana Alloy image's SHA256 digest (either in format "sha256:XYZ" or "XYZ"). When set, will override `image.tag`. |
| image.pullPolicy | string | `"IfNotPresent"` | Grafana Alloy image pull policy. |
| image.pullSecrets | list | `[]` | Optional set of image pull secrets. |
| image.registry | string | `"docker.io"` | Grafana Alloy image registry (defaults to docker.io) |
| image.repository | string | `"grafana/alloy"` | Grafana Alloy image repository. |
| image.tag | string | `nil` | Grafana Alloy image tag. When empty, the Chart's appVersion is used. |
| ingress.annotations | object | `{}` | |
| ingress.enabled | bool | `false` | Enables ingress for Alloy (Faro port) |
| ingress.extraPaths | list | `[]` | |
| ingress.faroPort | int | `12347` | |
| ingress.hosts[0] | string | `"chart-example.local"` | |
| ingress.labels | object | `{}` | |
| ingress.path | string | `"/"` | |
| ingress.pathType | string | `"Prefix"` | |
| ingress.tls | list | `[]` | |
| nameOverride | string | `nil` | Overrides the chart's name. Used to change the infix in the resource names. |
| namespaceOverride | string | `nil` | Overrides the chart's namespace. |
| rbac.create | bool | `true` | Whether to create RBAC resources for Alloy. |
| service.annotations | object | `{}` | |
| service.clusterIP | string | `""` | Cluster IP, can be set to None, empty "" or an IP address |
| service.enabled | bool | `true` | Creates a Service for the controller's pods. |
| service.internalTrafficPolicy | string | `"Cluster"` | Value for internal traffic policy. 'Cluster' or 'Local' |
| service.nodePort | int | `31128` | NodePort port. Only takes effect when `service.type: NodePort` |
| service.type | string | `"ClusterIP"` | Service type |
| serviceAccount.additionalLabels | object | `{}` | Additional labels to add to the created service account. |
| serviceAccount.annotations | object | `{}` | Annotations to add to the created service account. |
| serviceAccount.automountServiceAccountToken | bool | `true` | |
| serviceAccount.create | bool | `true` | Whether to create a service account for the Grafana Alloy deployment. |
| serviceAccount.name | string | `nil` | The name of the existing service account to use when serviceAccount.create is false. |
| serviceMonitor.additionalLabels | object | `{}` | Additional labels for the service monitor. |
| serviceMonitor.enabled | bool | `false` | |
| serviceMonitor.interval | string | `""` | Scrape interval. If not set, the Prometheus default scrape interval is used. |
| serviceMonitor.metricRelabelings | list | `[]` | MetricRelabelConfigs to apply to samples after scraping, but before ingestion. ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig |
| serviceMonitor.relabelings | list | `[]` | RelabelConfigs to apply to samples before scraping ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig |
| serviceMonitor.tlsConfig | object | `{}` | Customize tls parameters for the service monitor |
#### Migrate from `grafana/grafana-agent` chart to `grafana/alloy`
The `values.yaml` file for the `grafana/grafana-agent` chart is compatible with
the chart for `grafana/alloy`, with two exceptions:
* The `agent` field in `values.yaml` is deprecated in favor of `alloy`. Support
for the `agent` field will be removed in a future release.
* The default value for `alloy.listenPort` is `12345` to align with the default
listen port in other installations. To retain the previous default, set
`alloy.listenPort` to `80` when installing, as shown below.
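A values file for a migrated installation might therefore pin the old port (a minimal sketch; only this key is needed for the override):
```yaml
alloy:
  listenPort: 80  # retain the previous grafana-agent default
```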
### alloy.stabilityLevel
`alloy.stabilityLevel` controls the minimum level of stability for what
components can be created (directly or through imported modules). Note that
setting this field to a lower stability may also enable internal behaviour of a
lower stability, such as experimental memory optimizations.
Valid settings are `experimental`, `public-preview`, and `generally-available`.
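For example, to opt in to public-preview components (a minimal values sketch):
```yaml
alloy:
  # Allow components at public-preview stability and above to be used.
  stabilityLevel: public-preview
```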
### alloy.extraArgs
`alloy.extraArgs` allows for passing extra arguments to the Grafana Alloy
container. The list of available arguments is documented on [alloy run][].
> **WARNING**: Using `alloy.extraArgs` does not have a stable API. Things may
> break between chart upgrades if an argument gets added to the template.

[alloy run]: https://grafana.com/docs/alloy/latest/reference/cli/run/
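As a sketch, each list entry is passed verbatim to `alloy run` (here `--disable-reporting`, a flag the chart also exposes through the dedicated `alloy.enableReporting` value):
```yaml
alloy:
  extraArgs:
    - --disable-reporting
```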
### alloy.extraPorts
`alloy.extraPorts` allows for configuring specific open ports; see the sketch after the table below.
The detailed specification of ports can be found in the [Kubernetes Pod documentation](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#ports).
Port numbers specified must be 0 < x < 65535.
| ChartPort | KubePort | Description |
|-----------|----------|-------------|
| targetPort | containerPort | Number of port to expose on the pod's IP address. |
| hostPort | hostPort | (Optional) Number of port to expose on the host. Daemonsets taking traffic might find this useful. |
| name | name | If specified, this must be an `IANA_SVC_NAME` and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. |
| protocol | protocol | Must be UDP, TCP, or SCTP. Defaults to "TCP". |
| appProtocol | appProtocol | Hint on application protocol. This is used to expose Alloy externally on OpenShift clusters using "h2c". Optional. No default value. |
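A sketch of one extra port using the fields above (the Jaeger Thrift port mirrors one of the chart's own test values; `port` is the Service-side port):
```yaml
alloy:
  extraPorts:
    - name: jaeger-thrift  # must be a valid IANA_SVC_NAME
      port: 14268
      targetPort: 14268    # becomes the containerPort on the pod
      protocol: TCP        # optional; defaults to TCP
```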
### alloy.listenAddr
`alloy.listenAddr` allows for restricting which address Alloy listens on
for network traffic on its HTTP server. By default, this is `0.0.0.0` to allow
its UI to be exposed when port-forwarding and to expose its metrics to other
Alloy instances in the cluster.
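To restrict the server to the pod's loopback interface instead (a minimal sketch; this also makes the UI and metrics unreachable from other pods):
```yaml
alloy:
  listenAddr: 127.0.0.1
```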
### alloy.configMap.content
`alloy.configMap.content` holds the Grafana Alloy configuration to use.
If `alloy.configMap.content` is not provided, a [default configuration file][default-config] is
used. When provided, `alloy.configMap.content` must hold a valid Alloy configuration file.

[default-config]: ./config/example.alloy
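A minimal inline configuration might look like this (a sketch based on the chart's test values; the content is passed through `tpl`, so Helm templating can be used inside it):
```yaml
alloy:
  configMap:
    create: true
    content: |-
      logging {
        level  = "info"
        format = "logfmt"
      }
```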
### alloy.securityContext
`alloy.securityContext` sets the securityContext passed to the Grafana
Alloy container.
By default, Grafana Alloy containers are not able to collect telemetry from the
host node or other specific types of privileged telemetry data. See [Collecting
logs from other containers](#collecting-logs-from-other-containers) and
[Collecting host node telemetry](#collecting-host-node-telemetry) below for
more information on how to enable these capabilities.
### rbac.create
`rbac.create` enables the creation of ClusterRole and ClusterRoleBindings for
the Grafana Alloy containers to use. The default permission set allows
components like [discovery.kubernetes][] to work properly.

[discovery.kubernetes]: https://grafana.com/docs/alloy/latest/reference/components/discovery.kubernetes/
### controller.autoscaling
`controller.autoscaling.enabled` enables the creation of a HorizontalPodAutoscaler. It is only used when `controller.type` is set to `deployment` or `statefulset`.
`controller.autoscaling` is intended to be used with [clustered][] mode.
> **WARNING**: Using `controller.autoscaling` for any other Grafana Alloy
> configuration could lead to redundant or double telemetry collection.

[clustered]: https://grafana.com/docs/alloy/latest/reference/cli/run/#clustered-mode
When using autoscaling with a StatefulSet controller that has
volumeClaimTemplates configured alongside the StatefulSet, it is possible to
leak up to `maxReplicas` PVCs when the HPA is scaling down. If you're on
Kubernetes version `>=1.23-0` and your cluster has the
`StatefulSetAutoDeletePVC` feature gate enabled, you can set
`enableStatefulSetAutoDeletePVC` to true to automatically delete stale PVCs.
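A sketch of that combination, assuming the feature gate is available on your cluster:
```yaml
controller:
  type: statefulset
  autoscaling:
    horizontal:
      enabled: true
  # Requires Kubernetes >=1.23-0 with the StatefulSetAutoDeletePVC feature gate.
  enableStatefulSetAutoDeletePVC: true
```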
Using `controller.autoscaling` requires the target metric (cpu/memory) to have
its resource requests set up for both the Alloy and config-reloader containers
so that the HPA can use them to calculate the replica count from the actual
resource utilization.
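Putting the requirements together, a hedged sketch of an autoscaled clustered Deployment (the request and utilization numbers are illustrative; the config-reloader container already ships with default requests):
```yaml
controller:
  type: deployment
  autoscaling:
    horizontal:
      enabled: true
      minReplicas: 2
      maxReplicas: 5
      targetMemoryUtilizationPercentage: 80
alloy:
  clustering:
    enabled: true  # autoscaling is intended for clustered mode
  resources:
    requests:
      memory: 100Mi  # the HPA needs requests to compute utilization
```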
## Collecting logs from other containers
There are two ways to collect logs from other containers within the cluster
Alloy is deployed in.
### loki.source.kubernetes
The [loki.source.kubernetes][] component may be used to collect logs from
containers using the Kubernetes API. This component does not require mounting
the host's filesystem into Alloy, nor does it require additional security contexts to
work correctly.

[loki.source.kubernetes]: https://grafana.com/docs/alloy/latest/reference/components/loki.source.kubernetes/
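A hedged sketch of wiring this component up through `alloy.configMap.content` (the Loki endpoint URL is a placeholder assumption):
```yaml
alloy:
  configMap:
    content: |-
      discovery.kubernetes "pods" {
        role = "pod"
      }
      loki.source.kubernetes "pods" {
        targets    = discovery.kubernetes.pods.targets
        forward_to = [loki.write.default.receiver]
      }
      loki.write "default" {
        endpoint {
          url = "http://loki:3100/loki/api/v1/push"
        }
      }
```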
### File-based collection
Logs may also be collected by mounting the host's filesystem into the Alloy
container, bypassing the need to communicate with the Kubernetes API.
To mount logs from other containers to Grafana Alloy directly:
* Set `alloy.mounts.dockercontainers` to `true`.
* Set `alloy.securityContext` to:
```yaml
privileged: true
runAsUser: 0
```
## Collecting host node telemetry
Telemetry from the host, such as host-specific log files (from `/var/log`) or
metrics from `/proc` and `/sys`, is not accessible to Grafana Alloy containers.
To expose this information to Grafana Alloy for telemetry collection:
* Set `alloy.mounts.varlog` to `true`.
* Mount `/proc` and `/sys` from the host into the container.
* Set `alloy.securityContext` to:
```yaml
privileged: true
runAsUser: 0
```
## Expose Alloy externally on OpenShift clusters
If you want to send telemetry from an Alloy instance outside of the OpenShift cluster over gRPC to the Alloy instance inside the OpenShift cluster, you need to:
* Set the optional `appProtocol` on `alloy.extraPorts` to `h2c` (see the values snippet at the end of this section)
* Expose the service via Ingress or Route within the OpenShift cluster. Example of a Route in OpenShift:
```yaml
kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: route-otlp-alloy-h2c
spec:
to:
kind: Service
name: test-grpc-h2c
weight: 100
port:
targetPort: otlp-grpc
tls:
termination: edge
insecureEdgeTerminationPolicy: Redirect
wildcardPolicy: None
```
Once this Ingress/Route is exposed, it allows gRPC communication (for example, for traces). An Alloy instance on a VM or in another Kubernetes/OpenShift cluster can then communicate over gRPC via the exposed Ingress or Route.
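For the first bullet, the matching `alloy.extraPorts` entry could look like the following sketch (the `otlp-grpc` name lines up with the `targetPort` referenced by the Route above; 4317 is the conventional OTLP/gRPC port):
```yaml
alloy:
  extraPorts:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
      appProtocol: h2c  # lets the OpenShift router carry cleartext HTTP/2 (gRPC)
```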


@@ -0,0 +1,3 @@
apiVersion: v2
name: crds
version: 0.0.0


@@ -0,0 +1,205 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
name: podlogs.monitoring.grafana.com
spec:
group: monitoring.grafana.com
names:
categories:
- grafana-alloy
- alloy
kind: PodLogs
listKind: PodLogsList
plural: podlogs
singular: podlogs
scope: Namespaced
versions:
- name: v1alpha2
schema:
openAPIV3Schema:
description: PodLogs defines how to collect logs for a Pod.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: PodLogsSpec defines how to collect logs for a Pod.
properties:
namespaceSelector:
description: Selector to select which namespaces the Pod objects are
discovered from.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
relabelings:
description: RelabelConfigs to apply to logs before delivering.
items:
description: 'RelabelConfig allows dynamic rewriting of the label
set, being applied to samples before ingestion. It defines `<metric_relabel_configs>`-section
of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs'
properties:
action:
default: replace
description: Action to perform based on regex matching. Default
is 'replace'. uppercase and lowercase actions require Prometheus
>= 2.36.
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
type: string
modulus:
description: Modulus to take of the hash of the source label
values.
format: int64
type: integer
regex:
description: Regular expression against which the extracted
value is matched. Default is '(.*)'
type: string
replacement:
description: Replacement value against which a regex replace
is performed if the regular expression matches. Regex capture
groups are available. Default is '$1'
type: string
separator:
description: Separator placed between concatenated source label
values. default is ';'.
type: string
sourceLabels:
description: The source labels select values from existing labels.
Their content is concatenated using the configured separator
and matched against the configured regular expression for
the replace, keep, and drop actions.
items:
description: LabelName is a valid Prometheus label name which
may only contain ASCII letters, numbers, as well as underscores.
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
description: Label to which the resulting value is written in
a replace action. It is mandatory for replace actions. Regex
capture groups are available.
type: string
type: object
type: array
selector:
description: Selector to select Pod objects. Required.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
type: string
values:
description: values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
required:
- selector
type: object
type: object
served: true
storage: true


@@ -0,0 +1,3 @@
serviceAccount:
additionalLabels:
test: "true"


@@ -0,0 +1,7 @@
alloy:
clustering:
enabled: true
controller:
type: 'statefulset'
replicas: 3


@@ -0,0 +1,5 @@
controller:
type: deployment
podDisruptionBudget:
enabled: true
maxUnavailable: 1


@@ -0,0 +1,5 @@
controller:
type: deployment
podDisruptionBudget:
enabled: true
minAvailable: 1


@@ -0,0 +1,5 @@
controller:
type: statefulset
podDisruptionBudget:
enabled: true
maxUnavailable: 1


@@ -0,0 +1,5 @@
controller:
type: statefulset
podDisruptionBudget:
enabled: true
minAvailable: 1


@@ -0,0 +1,12 @@
controller:
volumes:
extra:
- name: cache-volume
emptyDir:
sizeLimit: 500Mi
alloy:
mounts:
extra:
- mountPath: /cache
name: cache-volume


@@ -0,0 +1,5 @@
# Test rendering of the chart with the controller explicitly set to DaemonSet.
controller:
type: daemonset
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet


@@ -0,0 +1,3 @@
# Test rendering of the chart with the controller explicitly set to DaemonSet.
controller:
type: daemonset


@@ -0,0 +1,26 @@
# Test rendering of the chart with the controller explicitly set to Deployment and autoscaling enabled.
controller:
type: deployment
autoscaling:
horizontal:
enabled: true
scaleDown:
policies:
- type: Pods
value: 4
periodSeconds: 60
selectPolicy: Min
stabilizationWindowSeconds: 100
scaleUp:
policies:
- type: Pods
value: 4
periodSeconds: 60
- type: Percent
value: 100
periodSeconds: 15
stabilizationWindowSeconds: 80
alloy:
resources:
requests:
memory: 100Mi


@@ -0,0 +1,3 @@
# Test rendering of the chart with the controller explicitly set to Deployment.
controller:
type: deployment


@@ -0,0 +1,26 @@
# Test rendering of the chart with the controller explicitly set to StatefulSet and autoscaling the old way enabled.
controller:
type: statefulset
autoscaling:
enabled: true
scaleDown:
policies:
- type: Pods
value: 4
periodSeconds: 60
selectPolicy: Min
stabilizationWindowSeconds: 100
scaleUp:
policies:
- type: Pods
value: 4
periodSeconds: 60
- type: Percent
value: 100
periodSeconds: 15
stabilizationWindowSeconds: 80
enableStatefulSetAutoDeletePVC: true
alloy:
resources:
requests:
memory: 100Mi


@@ -0,0 +1,3 @@
# Test rendering of the chart with the controller explicitly set to StatefulSet.
controller:
type: statefulset


@@ -0,0 +1,10 @@
alloy:
configMap:
content: |-
logging {
level = "warn"
format = "logfmt"
}
discovery.kubernetes "custom_pods" {
role = "pod"
}


@@ -0,0 +1 @@
# Test rendering of the chart with everything set to the default values.


@@ -0,0 +1,9 @@
# Test rendering of the chart with the service monitor enabled
alloy:
listenScheme: HTTPS
service:
enabled: true
serviceMonitor:
enabled: true
tlsConfig:
insecureSkipVerify: true


@@ -0,0 +1,5 @@
# Test rendering of the chart with the service monitor enabled
service:
enabled: true
serviceMonitor:
enabled: true


@@ -0,0 +1,5 @@
# Specify envFrom for verifying rendering the template works
alloy:
envFrom:
- configMapRef:
name: special-config


@@ -0,0 +1,5 @@
alloy:
configMap:
create: false
name: existing-config
key: my-config.alloy


@@ -0,0 +1,9 @@
# Specify extra environment variables for verifying rendering the template works
alloy:
extraEnv:
- name: GREETING
value: "Warm greetings to"
- name: HONORIFIC
value: "The Most Honorable"
- name: NAME
value: "Kubernetes"


@@ -0,0 +1,8 @@
extraObjects:
- apiVersion: v1
kind: Secret
metadata:
name: grafana-cloud
stringData:
PROMETHEUS_HOST: 'https://prometheus-us-central1.grafana.net/api/prom/push'
PROMETHEUS_USERNAME: '123456'


@@ -0,0 +1,7 @@
# Specify extra ports for verifying rendering the template works
alloy:
extraPorts:
- name: jaeger-thrift
port: 14268
targetPort: 14268
protocol: TCP


@@ -0,0 +1,9 @@
alloy:
extraPorts:
- name: "faro"
port: 12347
targetPort: 12347
protocol: "TCP"
ingress:
enabled: true


@@ -0,0 +1,13 @@
# Test rendering of the chart with the global image pull secret explicitly set.
global:
image:
pullSecrets:
- name: global-cred
podSecurityContext:
runAsUser: 1000
runAsGroup: 1000
image:
pullSecrets:
- name: local-cred


@@ -0,0 +1,11 @@
# Test rendering of the chart with the global image registry explicitly set to another value.
global:
image:
registry: quay.io
image:
registry: docker.com # Invalid value by default
configReloader:
image:
registry: docker.com


@@ -0,0 +1,5 @@
alloy:
hostAliases:
- ip: "20.21.22.23"
hostnames:
- "grafana.company.net"


@@ -0,0 +1,29 @@
controller:
initContainers:
- name: geo-ip
image: ghcr.io/maxmind/geoipupdate:v6.0
volumeMounts:
- name: geoip
mountPath: /etc/geoip
volumes:
- name: geoip
emptyDir: {}
env:
- name: GEOIPUPDATE_ACCOUNT_ID
value: "geoipupdate_account_id"
- name: GEOIPUPDATE_LICENSE_KEY
value: "geoipupdate_license_key"
- name: GEOIPUPDATE_EDITION_IDS
value: "GeoLite2-ASN GeoLite2-City GeoLite2-Country"
- name: GEOIPUPDATE_DB_DIR
value: "/etc/geoip"
volumes:
extra:
- name: geoip
mountPath: /etc/geoip
alloy:
mounts:
extra:
- name: geoip
mountPath: /etc/geoip


@@ -0,0 +1,8 @@
controller:
type: deployment
alloy:
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 1"]


@@ -0,0 +1,11 @@
alloy:
livenessProbe:
httpGet:
path: /metrics
port: 12345
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 2
periodSeconds: 30
successThreshold: 1
failureThreshold: 3


@@ -0,0 +1,4 @@
# Test rendering of the chart with the image pull secret explicitly set.
image:
pullSecrets:
- name: local-cred


@@ -0,0 +1,7 @@
# Test rendering of the chart with the individual image registries explicitly set to another value.
image:
registry: quay.io
configReloader:
image:
registry: quay.io


@@ -0,0 +1,11 @@
controller:
nodeSelector:
key1: "value1"
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoSchedule"
- key: "key2"
operator: "Exists"
effect: "NoSchedule"


@@ -0,0 +1,7 @@
global:
podSecurityContext:
fsGroup: 473
alloy:
securityContext:
runAsUser: 473
runAsGroup: 473


@@ -0,0 +1,4 @@
# Test correct rendering of the pod annotations
controller:
podAnnotations:
testAnnotationKey: testAnnotationValue


@@ -0,0 +1,29 @@
controller:
extraContainers:
- name: geo-ip
image: ghcr.io/maxmind/geoipupdate:v6.0
volumeMounts:
- name: geoip
mountPath: /etc/geoip
volumes:
- name: geoip
emptyDir: {}
env:
- name: GEOIPUPDATE_ACCOUNT_ID
value: "geoipupdate_account_id"
- name: GEOIPUPDATE_LICENSE_KEY
value: "geoipupdate_license_key"
- name: GEOIPUPDATE_EDITION_IDS
value: "GeoLite2-ASN GeoLite2-City GeoLite2-Country"
- name: GEOIPUPDATE_DB_DIR
value: "/etc/geoip"
volumes:
extra:
- name: geoip
mountPath: /etc/geoip
alloy:
mounts:
extra:
- name: geoip
mountPath: /etc/geoip


@@ -0,0 +1,3 @@
controller:
type: deployment
terminationGracePeriodSeconds: 20


@@ -0,0 +1,10 @@
controller:
type: deployment
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: alloy
app.kubernetes.io/instance: alloy


@@ -0,0 +1,10 @@
image:
registry: "docker.io"
repository: "grafana/agent"
digest: "sha256:82575a7be3e4770e53f620298e58bcc4cdb0fd0338e01c4b206cae9e3ca46ebf"
configReloader:
image:
registry: "docker.io"
repository: "jimmidyson/configmap-reload"
digest: "sha256:5af9d3041d12a3e63f115125f89b66d2ba981fe82e64302ac370c5496055059c"


@@ -0,0 +1,28 @@
logging {
level = "info"
format = "logfmt"
}
discovery.kubernetes "pods" {
role = "pod"
}
discovery.kubernetes "nodes" {
role = "node"
}
discovery.kubernetes "services" {
role = "service"
}
discovery.kubernetes "endpoints" {
role = "endpoints"
}
discovery.kubernetes "endpointslices" {
role = "endpointslice"
}
discovery.kubernetes "ingresses" {
role = "ingress"
}


@@ -0,0 +1 @@
Welcome to Grafana Alloy!


@@ -0,0 +1,25 @@
{{/*
Retrieve configMap name from the name of the chart or the ConfigMap the user
specified.
*/}}
{{- define "alloy.config-map.name" -}}
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if $values.configMap.name -}}
{{- $values.configMap.name }}
{{- else -}}
{{- include "alloy.fullname" . }}
{{- end }}
{{- end }}
{{/*
The name of the config file is the default or the key the user specified in the
ConfigMap.
*/}}
{{- define "alloy.config-map.key" -}}
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if $values.configMap.key -}}
{{- $values.configMap.key }}
{{- else -}}
config.alloy
{{- end }}
{{- end }}


@@ -0,0 +1,162 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "alloy.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "alloy.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "alloy.chart" -}}
{{- if index .Values "$chart_tests" }}
{{- printf "%s" .Chart.Name | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "alloy.namespace" -}}
{{- if .Values.namespaceOverride }}
{{- .Values.namespaceOverride }}
{{- else }}
{{- .Release.Namespace }}
{{- end }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "alloy.labels" -}}
helm.sh/chart: {{ include "alloy.chart" . }}
{{ include "alloy.selectorLabels" . }}
{{- if index .Values "$chart_tests" }}
app.kubernetes.io/version: "vX.Y.Z"
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- else -}}
{{/* substr trims the delimiter prefix char from alloy.imageId output
e.g. ':' for tags and '@' for digests.
For digests, we crop the string to a 7-char (short) sha. */}}
app.kubernetes.io/version: {{ (include "alloy.imageId" .) | trunc 15 | trimPrefix "@sha256" | trimPrefix ":" | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/part-of: alloy
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "alloy.selectorLabels" -}}
app.kubernetes.io/name: {{ include "alloy.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "alloy.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "alloy.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Calculate name of image ID to use for "alloy.
*/}}
{{- define "alloy.imageId" -}}
{{- if .Values.image.digest }}
{{- $digest := .Values.image.digest }}
{{- if not (hasPrefix "sha256:" $digest) }}
{{- $digest = printf "sha256:%s" $digest }}
{{- end }}
{{- printf "@%s" $digest }}
{{- else if .Values.image.tag }}
{{- printf ":%s" .Values.image.tag }}
{{- else }}
{{- printf ":%s" .Chart.AppVersion }}
{{- end }}
{{- end }}
{{/*
Calculate name of image ID to use for "config-reloader".
*/}}
{{- define "config-reloader.imageId" -}}
{{- if .Values.configReloader.image.digest }}
{{- $digest := .Values.configReloader.image.digest }}
{{- if not (hasPrefix "sha256:" $digest) }}
{{- $digest = printf "sha256:%s" $digest }}
{{- end }}
{{- printf "@%s" $digest }}
{{- else if .Values.configReloader.image.tag }}
{{- printf ":%s" .Values.configReloader.image.tag }}
{{- else }}
{{- printf ":%s" "v0.8.0" }}
{{- end }}
{{- end }}
{{/*
Return the appropriate apiVersion for ingress.
*/}}
{{- define "alloy.ingress.apiVersion" -}}
{{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) }}
{{- print "networking.k8s.io/v1" }}
{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
{{- print "networking.k8s.io/v1beta1" }}
{{- else }}
{{- print "extensions/v1beta1" }}
{{- end }}
{{- end }}
{{/*
Return if ingress is stable.
*/}}
{{- define "alloy.ingress.isStable" -}}
{{- eq (include "alloy.ingress.apiVersion" .) "networking.k8s.io/v1" }}
{{- end }}
{{/*
Return if ingress supports ingressClassName.
*/}}
{{- define "alloy.ingress.supportsIngressClassName" -}}
{{- or (eq (include "alloy.ingress.isStable" .) "true") (and (eq (include "alloy.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) }}
{{- end }}
{{/*
Return if ingress supports pathType.
*/}}
{{- define "alloy.ingress.supportsPathType" -}}
{{- or (eq (include "alloy.ingress.isStable" .) "true") (and (eq (include "alloy.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) }}
{{- end }}
{{/*
Return the appropriate apiVersion for PodDisruptionBudget.
*/}}
{{- define "alloy.controller.pdb.apiVersion" -}}
{{- if and (.Capabilities.APIVersions.Has "policy/v1") (semverCompare ">=1.21-0" .Capabilities.KubeVersion.Version) -}}
{{- print "policy/v1" -}}
{{- else -}}
{{- print "policy/v1beta1" -}}
{{- end -}}
{{- end -}}


@@ -0,0 +1,38 @@
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if $values.clustering.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ include "alloy.fullname" . }}-cluster
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: networking
spec:
type: ClusterIP
clusterIP: 'None'
publishNotReadyAddresses: true
selector:
{{- include "alloy.selectorLabels" . | nindent 4 }}
ports:
# Do not include the -metrics suffix in the port name, otherwise metrics
# can be double-collected with the non-headless Service if it's also
# enabled.
#
# This service should only be used for clustering, and not metric
# collection.
- name: {{ $values.clustering.portName }}
port: {{ $values.listenPort }}
targetPort: {{ $values.listenPort }}
protocol: "TCP"
{{- range $portMap := $values.extraPorts }}
- name: {{ $portMap.name }}
port: {{ $portMap.port }}
targetPort: {{ $portMap.targetPort }}
protocol: {{ coalesce $portMap.protocol "TCP" }}
{{- if not (empty $portMap.appProtocol) }}
# Useful for OpenShift clusters that want to expose Alloy ports externally
appProtocol: {{ $portMap.appProtocol }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,17 @@
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if $values.configMap.create }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "alloy.config-map.name" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: config
data:
{{- if $values.configMap.content }}
config.alloy: |- {{- (tpl $values.configMap.content .) | nindent 4 }}
{{- else }}
config.alloy: |- {{- .Files.Get "config/example.alloy" | trim | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,92 @@
{{- define "alloy.container" -}}
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
- name: alloy
image: {{ .Values.global.image.registry | default .Values.image.registry }}/{{ .Values.image.repository }}{{ include "alloy.imageId" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- run
- /etc/alloy/{{ include "alloy.config-map.key" . }}
- --storage.path={{ $values.storagePath }}
- --server.http.listen-addr={{ $values.listenAddr }}:{{ $values.listenPort }}
- --server.http.ui-path-prefix={{ $values.uiPathPrefix }}
{{- if not $values.enableReporting }}
- --disable-reporting
{{- end}}
{{- if $values.clustering.enabled }}
- --cluster.enabled=true
- --cluster.join-addresses={{ include "alloy.fullname" . }}-cluster
{{- if $values.clustering.name }}
- --cluster.name={{ $values.clustering.name }}
{{- end}}
{{- end}}
{{- if $values.stabilityLevel }}
- --stability.level={{ $values.stabilityLevel }}
{{- end }}
{{- range $values.extraArgs }}
- {{ . }}
{{- end}}
env:
- name: ALLOY_DEPLOY_MODE
value: "helm"
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- range $values.extraEnv }}
- {{- toYaml . | nindent 6 }}
{{- end }}
{{- if $values.envFrom }}
envFrom:
{{- toYaml $values.envFrom | nindent 4 }}
{{- end }}
ports:
- containerPort: {{ $values.listenPort }}
name: http-metrics
{{- range $portMap := $values.extraPorts }}
- containerPort: {{ $portMap.targetPort }}
{{- if $portMap.hostPort }}
hostPort: {{ $portMap.hostPort }}
{{- end}}
name: {{ $portMap.name }}
protocol: {{ coalesce $portMap.protocol "TCP" }}
{{- end }}
readinessProbe:
httpGet:
path: /-/ready
port: {{ $values.listenPort }}
scheme: {{ $values.listenScheme }}
initialDelaySeconds: 10
timeoutSeconds: 1
{{- with $values.livenessProbe }}
livenessProbe:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with $values.resources }}
resources:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with $values.lifecycle }}
lifecycle:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with $values.securityContext }}
securityContext:
{{- toYaml . | nindent 4 }}
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/alloy
{{- if $values.mounts.varlog }}
- name: varlog
mountPath: /var/log
readOnly: true
{{- end }}
{{- if $values.mounts.dockercontainers }}
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
{{- end }}
{{- range $values.mounts.extra }}
- {{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}


@@ -0,0 +1,26 @@
{{- define "alloy.watch-container" -}}
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if .Values.configReloader.enabled -}}
- name: config-reloader
image: {{ .Values.global.image.registry | default .Values.configReloader.image.registry }}/{{ .Values.configReloader.image.repository }}{{ include "config-reloader.imageId" . }}
{{- if .Values.configReloader.customArgs }}
args:
{{- toYaml .Values.configReloader.customArgs | nindent 4 }}
{{- else }}
args:
- --watched-dir=/etc/alloy
- --reload-url=http://localhost:{{ $values.listenPort }}/-/reload
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/alloy
{{- with .Values.configReloader.resources }}
resources:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.configReloader.securityContext }}
securityContext:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end -}}


@@ -0,0 +1,93 @@
{{- define "alloy.pod-template" -}}
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
metadata:
annotations:
kubectl.kubernetes.io/default-container: alloy
{{- if and $values.configMap.create $values.configMap.content }}
checksum/config: {{ (tpl $values.configMap.content .) | sha256sum | trunc 63 }}
{{- end }}
{{- with .Values.controller.podAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
{{- include "alloy.selectorLabels" . | nindent 4 }}
{{- with .Values.controller.podLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with .Values.global.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 4 }}
{{- end }}
serviceAccountName: {{ include "alloy.serviceAccountName" . }}
{{- if or .Values.global.image.pullSecrets .Values.image.pullSecrets }}
imagePullSecrets:
{{- if .Values.global.image.pullSecrets }}
{{- toYaml .Values.global.image.pullSecrets | nindent 4 }}
{{- else }}
{{- toYaml .Values.image.pullSecrets | nindent 4 }}
{{- end }}
{{- end }}
{{- if .Values.controller.initContainers }}
initContainers:
{{- with .Values.controller.initContainers }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
containers:
{{- include "alloy.container" . | nindent 4 }}
{{- include "alloy.watch-container" . | nindent 4 }}
{{- with .Values.controller.extraContainers }}
{{- toYaml . | nindent 4 }}
{{- end}}
{{- if .Values.controller.priorityClassName }}
priorityClassName: {{ .Values.controller.priorityClassName }}
{{- end }}
{{- if .Values.controller.hostNetwork }}
hostNetwork: {{ .Values.controller.hostNetwork }}
{{- end }}
{{- if .Values.controller.hostPID }}
hostPID: {{ .Values.controller.hostPID }}
{{- end }}
dnsPolicy: {{ .Values.controller.dnsPolicy }}
{{- with .Values.controller.affinity }}
affinity:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.controller.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.controller.terminationGracePeriodSeconds | int }}
{{- end }}
{{- with .Values.controller.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.controller.tolerations }}
tolerations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.controller.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 4 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ include "alloy.config-map.name" . }}
{{- if $values.mounts.varlog }}
- name: varlog
hostPath:
path: /var/log
{{- end }}
{{- if $values.mounts.dockercontainers }}
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
{{- end }}
{{- if .Values.controller.volumes.extra }}
{{- toYaml .Values.controller.volumes.extra | nindent 4 }}
{{- end }}
{{- if $values.hostAliases }}
hostAliases:
{{- toYaml $values.hostAliases | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,26 @@
{{- if eq .Values.controller.type "daemonset" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
{{- with .Values.controller.extraAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if ge (int .Capabilities.KubeVersion.Minor) 22 }}
minReadySeconds: 10
{{- end }}
selector:
matchLabels:
{{- include "alloy.selectorLabels" . | nindent 6 }}
template:
{{- include "alloy.pod-template" . | nindent 4 }}
{{- with .Values.controller.updateStrategy }}
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,29 @@
{{- if eq .Values.controller.type "deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
{{- with .Values.controller.extraAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if not .Values.controller.autoscaling.enabled }}
replicas: {{ .Values.controller.replicas }}
{{- end }}
{{- if ge (int .Capabilities.KubeVersion.Minor) 22 }}
minReadySeconds: 10
{{- end }}
selector:
matchLabels:
{{- include "alloy.selectorLabels" . | nindent 6 }}
template:
{{- include "alloy.pod-template" . | nindent 4 }}
{{- with .Values.controller.updateStrategy }}
strategy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,51 @@
{{- if eq .Values.controller.type "statefulset" }}
{{- if .Values.enableStatefulSetAutoDeletePVC }}
{{- fail "Value 'enableStatefulSetAutoDeletePVC' should be nested inside 'controller' options." }}
{{- end }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
{{- with .Values.controller.extraAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if not .Values.controller.autoscaling.enabled }}
replicas: {{ .Values.controller.replicas }}
{{- end }}
{{- if .Values.controller.parallelRollout }}
podManagementPolicy: Parallel
{{- end }}
{{- if ge (int .Capabilities.KubeVersion.Minor) 22 }}
minReadySeconds: 10
{{- end }}
serviceName: {{ include "alloy.fullname" . }}
selector:
matchLabels:
{{- include "alloy.selectorLabels" . | nindent 6 }}
template:
{{- include "alloy.pod-template" . | nindent 4 }}
{{- with .Values.controller.updateStrategy }}
updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.controller.volumeClaimTemplates }}
volumeClaimTemplates:
{{- range . }}
- {{ toYaml . | nindent 6 }}
{{- end }}
{{- end }}
{{- if and (semverCompare ">= 1.23-0" .Capabilities.KubeVersion.Version) (.Values.controller.enableStatefulSetAutoDeletePVC) }}
  {{- /*
    Data in these PVCs is easy to replace, so we always delete them to make
    scale-down operations easier, and rely on re-fetching data when needed.
  */}}
persistentVolumeClaimRetentionPolicy:
whenDeleted: Delete
whenScaled: Delete
{{- end }}
{{- end }}


@@ -0,0 +1,4 @@
{{ range .Values.extraObjects }}
---
{{ tpl (toYaml .) $ }}
{{ end }}


@@ -0,0 +1,83 @@
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if and (or (eq .Values.controller.type "deployment") (eq .Values.controller.type "statefulset" )) (or .Values.controller.autoscaling.horizontal.enabled .Values.controller.autoscaling.enabled) }}
{{ $autoscaling := .Values.controller.autoscaling }}
{{- if .Values.controller.autoscaling.horizontal.enabled }}
{{- $autoscaling = .Values.controller.autoscaling.horizontal }}
{{- end }}
{{- if (not (empty $autoscaling.targetMemoryUtilizationPercentage)) }}
{{- $_ := $values.resources.requests | required ".Values.alloy.resources.requests is required when using autoscaling." -}}
{{- $_ := $values.resources.requests.memory | required ".Values.alloy.resources.requests.memory is required when using autoscaling based on memory utilization." -}}
{{- $_ := .Values.configReloader.resources.requests | required ".Values.configReloader.resources.requests is required when using autoscaling." -}}
{{- $_ := .Values.configReloader.resources.requests.memory | required ".Values.configReloader.resources.requests.memory is required when using autoscaling based on memory utilization." -}}
{{- end}}
{{- if (not (empty $autoscaling.targetCPUUtilizationPercentage)) }}
{{- $_ := $values.resources.requests | required ".Values.alloy.resources.requests is required when using autoscaling." -}}
{{- $_ := $values.resources.requests.cpu | required ".Values.alloy.resources.requests.cpu is required when using autoscaling based on cpu utilization." -}}
{{- $_ := .Values.configReloader.resources.requests | required ".Values.configReloader.resources.requests is required when using autoscaling." -}}
{{- $_ := .Values.configReloader.resources.requests.cpu | required ".Values.configReloader.resources.requests.cpu is required when using autoscaling based on cpu utilization." -}}
{{- end}}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: availability
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: {{ .Values.controller.type }}
name: {{ include "alloy.fullname" . }}
{{- with $autoscaling }}
minReplicas: {{ .minReplicas }}
maxReplicas: {{ .maxReplicas }}
behavior:
{{- with .scaleDown }}
scaleDown:
{{- if .policies }}
policies:
{{- range .policies }}
- type: {{ .type }}
value: {{ .value }}
periodSeconds: {{ .periodSeconds }}
{{- end }}
selectPolicy: {{ .selectPolicy }}
{{- end }}
stabilizationWindowSeconds: {{ .stabilizationWindowSeconds }}
{{- end }}
{{- with .scaleUp }}
scaleUp:
{{- if .policies }}
policies:
{{- range .policies }}
- type: {{ .type }}
value: {{ .value }}
periodSeconds: {{ .periodSeconds }}
{{- end }}
selectPolicy: {{ .selectPolicy }}
{{- end }}
stabilizationWindowSeconds: {{ .stabilizationWindowSeconds }}
{{- end }}
metrics:
# Changing the order of the metrics will cause ArgoCD to go into a sync loop
# memory needs to be first.
# More info in: https://github.com/argoproj/argo-cd/issues/1079
{{- with .targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ . }}
{{- end }}
{{- with .targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ . }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,79 @@
{{- if .Values.ingress.enabled -}}
{{- $ingressApiIsStable := eq (include "alloy.ingress.isStable" .) "true" -}}
{{- $ingressSupportsIngressClassName := eq (include "alloy.ingress.supportsIngressClassName" .) "true" -}}
{{- $ingressSupportsPathType := eq (include "alloy.ingress.supportsPathType" .) "true" -}}
{{- $fullName := include "alloy.fullname" . -}}
{{- $servicePort := .Values.ingress.faroPort -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $ingressPathType := .Values.ingress.pathType -}}
{{- $extraPaths := .Values.ingress.extraPaths -}}
apiVersion: {{ include "alloy.ingress.apiVersion" . }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: networking
{{- with .Values.ingress.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.ingress.annotations }}
annotations:
{{- range $key, $value := . }}
{{ $key }}: {{ tpl $value $ | quote }}
{{- end }}
{{- end }}
spec:
{{- if and $ingressSupportsIngressClassName .Values.ingress.ingressClassName }}
ingressClassName: {{ .Values.ingress.ingressClassName }}
{{- end -}}
{{- with .Values.ingress.tls }}
tls:
{{- tpl (toYaml .) $ | nindent 4 }}
{{- end }}
rules:
{{- if .Values.ingress.hosts }}
{{- range .Values.ingress.hosts }}
- host: {{ tpl . $ }}
http:
paths:
{{- with $extraPaths }}
{{- toYaml . | nindent 10 }}
{{- end }}
- path: {{ $ingressPath }}
{{- if $ingressSupportsPathType }}
pathType: {{ $ingressPathType }}
{{- end }}
backend:
{{- if $ingressApiIsStable }}
service:
name: {{ $fullName }}
port:
number: {{ $servicePort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
{{- else }}
- http:
paths:
- backend:
{{- if $ingressApiIsStable }}
service:
name: {{ $fullName }}
port:
number: {{ $servicePort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- with $ingressPath }}
path: {{ . }}
{{- end }}
{{- if $ingressSupportsPathType }}
pathType: {{ $ingressPathType }}
{{- end }}
{{- end -}}
{{- end }}


@@ -0,0 +1,31 @@
{{- if .Values.controller.podDisruptionBudget.enabled }}
{{- if eq .Values.controller.type "daemonset" }}
{{- fail "PDBs (Pod Disruption Budgets) are not intended for DaemonSets. Please use a different controller type." }}
{{- end }}
{{- if and .Values.controller.podDisruptionBudget.minAvailable .Values.controller.podDisruptionBudget.maxUnavailable }}
{{- fail "Only one of minAvailable or maxUnavailable should be defined for PodDisruptionBudget" }}
{{- end }}
{{- if not (or .Values.controller.podDisruptionBudget.minAvailable .Values.controller.podDisruptionBudget.maxUnavailable) }}
{{- fail "Either minAvailable or maxUnavailable must be defined for PodDisruptionBudget" }}
{{- end }}
apiVersion: {{ include "alloy.controller.pdb.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "alloy.selectorLabels" . | nindent 6 }}
{{- if .Values.controller.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.controller.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.controller.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.controller.podDisruptionBudget.maxUnavailable }}
{{- end }}
{{- end }}


@@ -0,0 +1,112 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "alloy.fullname" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: rbac
rules:
# Rules which allow discovery.kubernetes to function.
- apiGroups:
- ""
- "discovery.k8s.io"
- "networking.k8s.io"
resources:
- endpoints
- endpointslices
- ingresses
- nodes
- nodes/proxy
- nodes/metrics
- pods
- services
verbs:
- get
- list
- watch
# Rules which allow loki.source.kubernetes and loki.source.podlogs to work.
- apiGroups:
- ""
resources:
- pods
- pods/log
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- "monitoring.grafana.com"
resources:
- podlogs
verbs:
- get
- list
- watch
# Rules which allow mimir.rules.kubernetes to work.
- apiGroups: ["monitoring.coreos.com"]
resources:
- prometheusrules
verbs:
- get
- list
- watch
- nonResourceURLs:
- /metrics
verbs:
- get
  # Rules which allow prometheus.operator.* components to work.
- apiGroups: ["monitoring.coreos.com"]
resources:
- podmonitors
- servicemonitors
- probes
- scrapeconfigs
verbs:
- get
- list
- watch
# Rules which allow eventhandler to work.
- apiGroups:
- ""
resources:
- events
verbs:
- get
- list
- watch
# needed for remote.kubernetes.*
- apiGroups: [""]
resources:
- "configmaps"
- "secrets"
verbs:
- get
- list
- watch
# needed for otelcol.processor.k8sattributes
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "alloy.fullname" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: rbac
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "alloy.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ include "alloy.serviceAccountName" . }}
namespace: {{ include "alloy.namespace" . }}
{{- end }}


@@ -0,0 +1,43 @@
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if .Values.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: networking
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
selector:
{{- include "alloy.selectorLabels" . | nindent 4 }}
{{- if semverCompare ">=1.26-0" .Capabilities.KubeVersion.Version }}
  internalTrafficPolicy: {{ .Values.service.internalTrafficPolicy }}
{{- end }}
ports:
- name: http-metrics
{{- if eq .Values.service.type "NodePort" }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
port: {{ $values.listenPort }}
targetPort: {{ $values.listenPort }}
protocol: "TCP"
{{- range $portMap := $values.extraPorts }}
- name: {{ $portMap.name }}
port: {{ $portMap.port }}
targetPort: {{ $portMap.targetPort }}
protocol: {{ coalesce $portMap.protocol "TCP" }}
{{- if not (empty $portMap.appProtocol) }}
# Useful for OpenShift clusters that want to expose Alloy ports externally
appProtocol: {{ $portMap.appProtocol }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,18 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
metadata:
name: {{ include "alloy.serviceAccountName" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: rbac
{{- with .Values.serviceAccount.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,37 @@
{{- $values := (mustMergeOverwrite .Values.alloy (or .Values.agent dict)) -}}
{{- if and .Values.service.enabled .Values.serviceMonitor.enabled -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "alloy.fullname" . }}
namespace: {{ include "alloy.namespace" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: metrics
{{- with .Values.serviceMonitor.additionalLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
endpoints:
- port: http-metrics
scheme: {{ $values.listenScheme | lower }}
honorLabels: true
{{- if .Values.serviceMonitor.interval }}
interval: {{ .Values.serviceMonitor.interval }}
{{- end }}
{{- if .Values.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{ tpl (toYaml .Values.serviceMonitor.metricRelabelings | nindent 6) . }}
{{- end }}
{{- if .Values.serviceMonitor.relabelings }}
relabelings:
{{ tpl (toYaml .Values.serviceMonitor.relabelings | nindent 6) . }}
{{- end }}
{{- with .Values.serviceMonitor.tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 6 }}
{{- end }}
selector:
matchLabels:
{{- include "alloy.selectorLabels" . | nindent 6 }}
{{- end }}


@@ -0,0 +1,41 @@
{{- if .Capabilities.APIVersions.Has "autoscaling.k8s.io/v1" -}}
{{- if .Values.controller.autoscaling.vertical.enabled -}}
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: {{ include "alloy.fullname" . }}
labels:
{{- include "alloy.labels" . | nindent 4 }}
app.kubernetes.io/component: availability
spec:
{{- with .Values.controller.autoscaling.vertical }}
{{- with .recommenders }}
recommenders:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .resourcePolicy }}
resourcePolicy:
{{- with .containerPolicies }}
containerPolicies:
{{- range . }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}
{{- with .updatePolicy }}
updatePolicy:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
targetRef:
apiVersion: apps/v1
{{- if eq .Values.controller.type "deployment" }}
kind: Deployment
{{- else if eq .Values.controller.type "statefulset" }}
kind: StatefulSet
{{- else }}
kind: DaemonSet
{{- end }}
name: {{ include "alloy.fullname" . }}
{{- end }}
{{- end }}

463
charts/alloy/values.yaml Normal file

@@ -0,0 +1,463 @@
# -- Overrides the chart's name. Used to change the infix in the resource names.
nameOverride: null
# -- Overrides the chart's namespace.
namespaceOverride: null
# -- Overrides the chart's computed fullname. Used to change the full prefix of
# resource names.
fullnameOverride: null
## Global image properties override the values defined under `image.registry` and `configReloader.image.registry`.
## To override a single image registry, use the specific field; to override all of them, use `global.image.registry`.
global:
image:
# -- Global image registry to use if it needs to be overridden for some specific use cases (e.g local registries, custom images, ...)
registry: ""
# -- Optional set of global image pull secrets.
pullSecrets: []
# -- Security context to apply to the Grafana Alloy pod.
podSecurityContext: {}
crds:
# -- Whether to install CRDs for monitoring.
create: true
## Various Alloy settings. For backwards compatibility with the grafana-agent
## chart, this field may also be called "agent". Naming this field "agent" is
## deprecated and will be removed in a future release.
alloy:
configMap:
# -- Create a new ConfigMap for the config file.
create: true
# -- Content to assign to the new ConfigMap. This is passed into `tpl` allowing for templating from values.
content: ''
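    # Illustrative only: any valid Alloy configuration can be supplied here,
    # for example a minimal config that just sets the log level:
    # content: |-
    #   logging {
    #     level = "info"
    #   }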
# -- Name of existing ConfigMap to use. Used when create is false.
name: null
# -- Key in ConfigMap to get config from.
key: null
clustering:
# -- Deploy Alloy in a cluster to allow for load distribution.
enabled: false
# -- Name for the Alloy cluster. Used for differentiating between clusters.
name: ""
# -- Name for the port used for clustering, useful if running inside an Istio Mesh
portName: http
# -- Minimum stability level of components and behavior to enable. Must be
# one of "experimental", "public-preview", or "generally-available".
stabilityLevel: "generally-available"
# -- Path to where Grafana Alloy stores data (for example, the Write-Ahead Log).
# By default, data is lost between reboots.
storagePath: /tmp/alloy
# -- Address to listen for traffic on. 0.0.0.0 exposes the UI to other
# containers.
listenAddr: 0.0.0.0
# -- Port to listen for traffic on.
listenPort: 12345
  # -- Scheme used by readiness probes. If enabling TLS in your config, set to "HTTPS".
listenScheme: HTTP
# -- Base path where the UI is exposed.
uiPathPrefix: /
# -- Enables sending Grafana Labs anonymous usage stats to help improve Grafana
# Alloy.
enableReporting: true
# -- Extra environment variables to pass to the Alloy container.
extraEnv: []
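  # Illustrative example (name and value are placeholders):
  # - name: HTTPS_PROXY
  #   value: "http://proxy.example.com:3128"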
# -- Maps all the keys on a ConfigMap or Secret as environment variables. https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core
envFrom: []
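  # Illustrative example ("alloy-extra-env" is a hypothetical ConfigMap):
  # - configMapRef:
  #     name: alloy-extra-env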
# -- Extra args to pass to `alloy run`: https://grafana.com/docs/alloy/latest/reference/cli/run/
extraArgs: []
# -- Extra ports to expose on the Alloy container.
extraPorts: []
# - name: "faro"
# port: 12347
# targetPort: 12347
# protocol: "TCP"
# appProtocol: "h2c"
# -- Host aliases to add to the Alloy container.
hostAliases: []
# - ip: "20.21.22.23"
# hostnames:
# - "company.grafana.net"
mounts:
# -- Mount /var/log from the host into the container for log collection.
varlog: false
# -- Mount /var/lib/docker/containers from the host into the container for log
# collection.
dockercontainers: false
# -- Extra volume mounts to add into the Grafana Alloy container. Does not
# affect the watch container.
extra: []
# -- Security context to apply to the Grafana Alloy container.
securityContext: {}
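  # Illustrative hardening example; adjust to your cluster's policies:
  # allowPrivilegeEscalation: false
  # readOnlyRootFilesystem: true
  # capabilities:
  #   drop: ["ALL"]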
# -- Resource requests and limits to apply to the Grafana Alloy container.
resources: {}
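  # Illustrative example; size to your workload:
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  # limits:
  #   memory: 256Mi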
# -- Set lifecycle hooks for the Grafana Alloy container.
lifecycle: {}
# preStop:
# exec:
# command:
# - /bin/sleep
# - "10"
# -- Set livenessProbe for the Grafana Alloy container.
livenessProbe: {}
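  # Illustrative example reusing the readiness endpoint; the port must match listenPort:
  # httpGet:
  #   path: /-/ready
  #   port: 12345
  # initialDelaySeconds: 10
  # periodSeconds: 30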
image:
# -- Grafana Alloy image registry (defaults to docker.io)
registry: "docker.io"
# -- Grafana Alloy image repository.
repository: grafana/alloy
# -- (string) Grafana Alloy image tag. When empty, the Chart's appVersion is
# used.
tag: null
# -- Grafana Alloy image's SHA256 digest (either in format "sha256:XYZ" or "XYZ"). When set, will override `image.tag`.
digest: null
# -- Grafana Alloy image pull policy.
pullPolicy: IfNotPresent
# -- Optional set of image pull secrets.
pullSecrets: []
rbac:
# -- Whether to create RBAC resources for Alloy.
create: true
serviceAccount:
# -- Whether to create a service account for the Grafana Alloy deployment.
create: true
# -- Additional labels to add to the created service account.
additionalLabels: {}
# -- Annotations to add to the created service account.
annotations: {}
# -- The name of the existing service account to use when
# serviceAccount.create is false.
name: null
  # -- Whether the Alloy pod should automatically mount the service account token.
automountServiceAccountToken: true
# Options for the config-reloader sidecar container used for config reloading.
configReloader:
# -- Enables automatically reloading when the Alloy config changes.
enabled: true
image:
# -- Config reloader image registry (defaults to docker.io)
registry: "quay.io"
# -- Repository to get config reloader image from.
repository: prometheus-operator/prometheus-config-reloader
# -- Tag of image to use for config reloading.
tag: v0.81.0
# -- SHA256 digest of image to use for config reloading (either in format "sha256:XYZ" or "XYZ"). When set, will override `configReloader.image.tag`
digest: ""
# -- Override the args passed to the container.
customArgs: []
# -- Resource requests and limits to apply to the config reloader container.
resources:
requests:
cpu: "10m"
memory: "50Mi"
# -- Security context to apply to the Grafana configReloader container.
securityContext: {}
controller:
# -- Type of controller to use for deploying Grafana Alloy in the cluster.
# Must be one of 'daemonset', 'deployment', or 'statefulset'.
type: 'daemonset'
# -- Number of pods to deploy. Ignored when controller.type is 'daemonset'.
replicas: 1
# -- Annotations to add to controller.
extraAnnotations: {}
# -- Whether to deploy pods in parallel. Only used when controller.type is
# 'statefulset'.
parallelRollout: true
# -- Configures Pods to use the host network. When set to true, the ports that will be used must be specified.
hostNetwork: false
# -- Configures Pods to use the host PID namespace.
hostPID: false
# -- Configures the DNS policy for the pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
dnsPolicy: ClusterFirst
# -- Termination grace period in seconds for the Grafana Alloy pods.
  # The default value used by Kubernetes if unspecified is 30 seconds.
terminationGracePeriodSeconds: null
# -- Update strategy for updating deployed Pods.
updateStrategy: {}
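  # Illustrative example for controller.type 'daemonset':
  # type: RollingUpdate
  # rollingUpdate:
  #   maxUnavailable: 1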
# -- nodeSelector to apply to Grafana Alloy pods.
nodeSelector: {}
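  # Illustrative example:
  # kubernetes.io/os: linux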
# -- Tolerations to apply to Grafana Alloy pods.
tolerations: []
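  # Illustrative example allowing scheduling onto control-plane nodes:
  # - key: node-role.kubernetes.io/control-plane
  #   operator: Exists
  #   effect: NoSchedule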
# -- Topology Spread Constraints to apply to Grafana Alloy pods.
topologySpreadConstraints: []
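  # Illustrative example spreading pods across nodes; the label matches the
  # chart's default selector labels:
  # - maxSkew: 1
  #   topologyKey: kubernetes.io/hostname
  #   whenUnsatisfiable: ScheduleAnyway
  #   labelSelector:
  #     matchLabels:
  #       app.kubernetes.io/name: alloy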
# -- priorityClassName to apply to Grafana Alloy pods.
priorityClassName: ''
# -- Extra pod annotations to add.
podAnnotations: {}
# -- Extra pod labels to add.
podLabels: {}
# -- PodDisruptionBudget configuration.
podDisruptionBudget:
# -- Whether to create a PodDisruptionBudget for the controller.
enabled: false
# -- Minimum number of pods that must be available during a disruption.
# Note: Only one of minAvailable or maxUnavailable should be set.
minAvailable: null
# -- Maximum number of pods that can be unavailable during a disruption.
# Note: Only one of minAvailable or maxUnavailable should be set.
maxUnavailable: null
# -- Whether to enable automatic deletion of stale PVCs due to a scale down operation, when controller.type is 'statefulset'.
enableStatefulSetAutoDeletePVC: false
autoscaling:
# -- Creates a HorizontalPodAutoscaler for controller type deployment.
# Deprecated: Please use controller.autoscaling.horizontal instead
enabled: false
# -- The lower limit for the number of replicas to which the autoscaler can scale down.
minReplicas: 1
# -- The upper limit for the number of replicas to which the autoscaler can scale up.
maxReplicas: 5
# -- Average CPU utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetCPUUtilizationPercentage` to 0 will disable CPU scaling.
targetCPUUtilizationPercentage: 0
# -- Average Memory utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetMemoryUtilizationPercentage` to 0 will disable Memory scaling.
targetMemoryUtilizationPercentage: 80
scaleDown:
# -- List of policies to determine the scale-down behavior.
policies: []
# - type: Pods
# value: 4
# periodSeconds: 60
# -- Determines which of the provided scaling-down policies to apply if multiple are specified.
selectPolicy: Max
# -- The duration that the autoscaling mechanism should look back on to make decisions about scaling down.
stabilizationWindowSeconds: 300
scaleUp:
# -- List of policies to determine the scale-up behavior.
policies: []
# - type: Pods
# value: 4
# periodSeconds: 60
# -- Determines which of the provided scaling-up policies to apply if multiple are specified.
selectPolicy: Max
# -- The duration that the autoscaling mechanism should look back on to make decisions about scaling up.
stabilizationWindowSeconds: 0
# -- Configures the Horizontal Pod Autoscaler for the controller.
horizontal:
# -- Enables the Horizontal Pod Autoscaler for the controller.
enabled: false
# -- The lower limit for the number of replicas to which the autoscaler can scale down.
minReplicas: 1
# -- The upper limit for the number of replicas to which the autoscaler can scale up.
maxReplicas: 5
# -- Average CPU utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetCPUUtilizationPercentage` to 0 will disable CPU scaling.
targetCPUUtilizationPercentage: 0
# -- Average Memory utilization across all relevant pods, a percentage of the requested value of the resource for the pods. Setting `targetMemoryUtilizationPercentage` to 0 will disable Memory scaling.
targetMemoryUtilizationPercentage: 80
scaleDown:
# -- List of policies to determine the scale-down behavior.
policies: []
# - type: Pods
# value: 4
# periodSeconds: 60
# -- Determines which of the provided scaling-down policies to apply if multiple are specified.
selectPolicy: Max
# -- The duration that the autoscaling mechanism should look back on to make decisions about scaling down.
stabilizationWindowSeconds: 300
scaleUp:
# -- List of policies to determine the scale-up behavior.
policies: []
# - type: Pods
# value: 4
# periodSeconds: 60
# -- Determines which of the provided scaling-up policies to apply if multiple are specified.
selectPolicy: Max
# -- The duration that the autoscaling mechanism should look back on to make decisions about scaling up.
stabilizationWindowSeconds: 0
# -- Configures the Vertical Pod Autoscaler for the controller.
vertical:
# -- Enables the Vertical Pod Autoscaler for the controller.
enabled: false
      # -- List of recommenders to use for the Vertical Pod Autoscaler.
      # Recommenders are responsible for generating recommendations for the object.
      # Leave the list empty to use the default recommender, or specify exactly one
      # custom recommender.
recommenders: []
# recommenders:
# - name: custom-recommender-performance
# -- Configures the resource policy for the Vertical Pod Autoscaler.
resourcePolicy:
# -- Configures the container policies for the Vertical Pod Autoscaler.
containerPolicies:
- containerName: alloy
# -- The controlled resources for the Vertical Pod Autoscaler.
controlledResources:
- cpu
- memory
# -- The controlled values for the Vertical Pod Autoscaler. Needs to be either RequestsOnly or RequestsAndLimits.
controlledValues: "RequestsAndLimits"
# -- The maximum allowed values for the pods.
maxAllowed: {}
# cpu: 200m
# memory: 100Mi
            # -- The minimum allowed values for the pods.
minAllowed: {}
# cpu: 200m
# memory: 100Mi
# -- Configures the update policy for the Vertical Pod Autoscaler.
updatePolicy:
        # -- Specifies the minimal number of replicas which need to be alive for the VPA Updater to attempt pod eviction.
# minReplicas: 1
# -- Specifies whether recommended updates are applied when a Pod is started and whether recommended updates
# are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto".
# updateMode: Auto
# -- Affinity configuration for pods.
affinity: {}
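  # Illustrative example preferring to spread pods across nodes; the label
  # matches the chart's default selector labels:
  # podAntiAffinity:
  #   preferredDuringSchedulingIgnoredDuringExecution:
  #     - weight: 100
  #       podAffinityTerm:
  #         topologyKey: kubernetes.io/hostname
  #         labelSelector:
  #           matchLabels:
  #             app.kubernetes.io/name: alloy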
volumes:
# -- Extra volumes to add to the Grafana Alloy pod.
extra: []
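    # Illustrative example ("alloy-extra" is a hypothetical Secret):
    # - name: extra-config
    #   secret:
    #     secretName: alloy-extra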
# -- volumeClaimTemplates to add when controller.type is 'statefulset'.
volumeClaimTemplates: []
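  # Illustrative example; pair it with a matching volume mount under alloy.mounts.extra:
  # - metadata:
  #     name: data
  #   spec:
  #     accessModes: ["ReadWriteOnce"]
  #     resources:
  #       requests:
  #         storage: 1Gi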
## -- Additional init containers to run.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
##
initContainers: []
# -- Additional containers to run alongside the Alloy container and initContainers.
extraContainers: []
service:
# -- Creates a Service for the controller's pods.
enabled: true
# -- Service type
type: ClusterIP
# -- NodePort port. Only takes effect when `service.type: NodePort`
nodePort: 31128
# -- Cluster IP, can be set to None, empty "" or an IP address
clusterIP: ''
# -- Value for internal traffic policy. 'Cluster' or 'Local'
internalTrafficPolicy: Cluster
annotations: {}
# cloud.google.com/load-balancer-type: Internal
serviceMonitor:
enabled: false
# -- Additional labels for the service monitor.
additionalLabels: {}
# -- Scrape interval. If not set, the Prometheus default scrape interval is used.
interval: ""
# -- MetricRelabelConfigs to apply to samples after scraping, but before ingestion.
# ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig
metricRelabelings: []
# - action: keep
# regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
# sourceLabels: [__name__]
# -- Customize tls parameters for the service monitor
tlsConfig: {}
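  # Illustrative example ("alloy.example.local" is a placeholder):
  # serverName: alloy.example.local
  # insecureSkipVerify: true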
# -- RelabelConfigs to apply to samples before scraping
# ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig
relabelings: []
# - sourceLabels: [__meta_kubernetes_pod_node_name]
# separator: ;
# regex: ^(.*)$
# targetLabel: nodename
# replacement: $1
# action: replace
ingress:
# -- Enables ingress for Alloy (Faro port)
enabled: false
# For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
# See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
# ingressClassName: nginx
# Values can be templated
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
faroPort: 12347
  # pathType is only for k8s >= 1.18
pathType: Prefix
hosts:
- chart-example.local
## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
extraPaths: []
# - path: /*
# backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
## Or for k8s > 1.19
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: ssl-redirect
# port:
# name: use-annotation
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# -- Extra k8s manifests to deploy
extraObjects: []
# - apiVersion: v1
# kind: Secret
# metadata:
# name: grafana-cloud
# stringData:
# PROMETHEUS_HOST: 'https://prometheus-us-central1.grafana.net/api/prom/push'
# PROMETHEUS_USERNAME: '123456'