2023-02-21 14:35:39 +07:00
parent 9f310557f3
commit 6b5474a48d
30 changed files with 276 additions and 26 deletions


@@ -4,16 +4,106 @@
* [helm](https://helm.sh)
## Action plan
1. What is the difference between RKE and RKE2?
2. Snapshot our nodes so we can roll back to the starting point
3. Launch RKE2 from the command line
4. Test the cluster in Lens
5. Restore from the snapshots
6. Spin up one more VM for Rancher and install docker on it
7. Bring up Rancher
8. Deploy an RKE cluster through Rancher and check it in Lens
9. Launch a container for NFS
10. Deploy SNI for the NFS server in the cluster
11. Hook up metrics from Lens
## Execution
On the first node:
```bash
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server.service
```
Connect the "cluster" to Lens by taking the kubeconfig from `/etc/rancher/rke2/rke2.yaml` and fixing the server IP address in it.
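A minimal sketch of that edit, assuming the first node answers on 192.168.20.101 (adjust to your addressing):
```bash
# copy the kubeconfig off the first node
scp root@192.168.20.101:/etc/rancher/rke2/rke2.yaml ~/.kube/rke2.yaml
# it points at 127.0.0.1 by default; swap in the node's reachable address
sed -i 's|https://127.0.0.1:6443|https://192.168.20.101:6443|' ~/.kube/rke2.yaml
```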
On the second and third nodes:
```bash
mkdir -p /etc/rancher/rke2/ && nano /etc/rancher/rke2/config.yaml
```
and fill it with the following config
(take the token from the first node, in the file `/var/lib/rancher/rke2/server/node-token`):
```yaml
server: https://192.168.20.2:9345
token: K10a251c5c4fc92ca9d08d3aa534e232bd9cf1f6c192d42551c98020ffec17464cc::server:1b306239dd7e878eda35084137aa9626
tls-san:
- rke.bildme.ru
node-taint:
- "CriticalAddonsOnly=true:NoExecute"
```
## Traefik
Start Traefik, load-balancing only to the first CP node for now.
In my setup Traefik will listen on 192.168.20.2, which is why that address also appears in the `server` section of the config above.
## Connecting nodes 2 and 3
Enable the service and wait for the nodes to join the cluster:
`systemctl enable --now rke2-server.service`
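To confirm the join, the kubectl binary RKE2 ships with works from the first node (paths are the RKE2 defaults):
```bash
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes -o wide
```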
## Fixing the connection
1. Go to the first node, create the same config there, save it, and reboot the machine
2. Repoint the connection in Lens to the domain name
3. Open up balancing in Traefik to all three nodes (the file provider has `watch: true`, so uncommenting the extra servers is picked up without a restart)
## Adding a worker node
1. Download the binaries - `curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -`
2. Create a simpler config:
```bash
mkdir -p /etc/rancher/rke2/ && nano /etc/rancher/rke2/config.yaml
```
Fill it with:
```yaml
server: https://192.168.20.2:9345
token: K10a251c5c4fc92ca9d08d3aa534e232bd9cf1f6c192d42551c98020ffec17464cc::server:1b306239dd7e878eda35084137aa9626
```
Start the service:
`systemctl enable --now rke2-agent.service`
Wait for the node to be adopted, then add a role label to it (purely cosmetic):
`node-role.kubernetes.io/worker: 'true'`
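A way to set that label from any machine with cluster access; `worker1` is a placeholder node name:
```bash
# cosmetic role label so kubectl and Lens show the node as a worker
kubectl label node worker1 node-role.kubernetes.io/worker=true
```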
## On the cluster side
1. Fix the taints on the first node so that production workload does not land on it (see the sketch after this section)
2. Fix the nginx ingress controller so that it runs only on the CP nodes
Change the node selector:
```yaml
nodeSelector:
  node-role.kubernetes.io/control-plane: 'true'
```
Add a toleration so it is allowed to run on the CP nodes:
```yaml
tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  - operator: Exists
    effect: NoExecute
```
Check that 3 replicas of rke2-ingress-nginx-controller are running.
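A hedged sketch of the taint fix plus the final check (`node1` is a placeholder name; the taint matches the one in config.yaml above):
```bash
# keep production workload off the first CP node
kubectl taint nodes node1 CriticalAddonsOnly=true:NoExecute --overwrite
# the ingress controller should now sit on all three CP nodes
kubectl -n kube-system get pods -o wide | grep ingress-nginx
```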


@@ -1,16 +0,0 @@
#!/bin/bash
# install the NFS client bits and rpcbind on every cluster node
for node in node1 node2 node3; do
  ssh "$node" apt install -y nfs-common
  ssh "$node" systemctl enable rpcbind
  ssh "$node" systemctl start rpcbind
done
#helm upgrade --install --create-namespace nfs-provisioner -n nfs-provisioner nfs-provisioner


@@ -1,13 +0,0 @@
apiVersion: v1
appVersion: 4.0.2
description: nfs-subdir-external-provisioner is an automatic provisioner that uses your *already configured* NFS server, automatically creating Persistent Volumes.
name: nfs-subdir-external-provisioner
home: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
version: 4.0.7
kubeVersion: ">=1.9.0-0"
sources:
- https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
keywords:
- nfs
- storage
- provisioner


@@ -1,79 +0,0 @@
# NFS Subdirectory External Provisioner Helm Chart
The [NFS subdir external provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) is an automatic provisioner for Kubernetes that uses your *already configured* NFS server, automatically creating Persistent Volumes.
## TL;DR;
```console
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=x.x.x.x \
--set nfs.path=/exported/path
```
## Introduction
This chart installs a custom [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) into a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. It also installs an [NFS client provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) into the cluster, which dynamically creates persistent volumes from a single NFS share.
## Prerequisites
- Kubernetes >=1.9
- Existing NFS Share
## Installing the Chart
To install the chart with the release name `my-release`:
```console
$ helm install my-release nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=x.x.x.x \
--set nfs.path=/exported/path
```
The command deploys the given storage class in the default configuration. It can be used afterwards to provision persistent volumes. The [configuration](#configuration) section lists the parameters that can be configured during installation.
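For instance, a PVC against the chart's default `nfs-client` class shows the provisioning in action (names here are illustrative):
```console
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
EOF
```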
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of this chart and their default values.
| Parameter | Description | Default |
| ----------------------------------- | ----------------------------------------------------------- | ------------------------------------------------- |
| `replicaCount` | Number of provisioner instances to deploy | `1` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `image.repository` | Provisioner image | `gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner` |
| `image.tag` | Version of provisioner image | `v4.0.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `storageClass.name` | Name of the storageClass | `nfs-client` |
| `storageClass.defaultClass` | Set as the default StorageClass | `false` |
| `storageClass.allowVolumeExpansion` | Allow expanding the volume | `true` |
| `storageClass.reclaimPolicy` | Method used to reclaim an obsoleted volume | `Delete` |
| `storageClass.provisionerName` | Name of the provisioner | null |
| `storageClass.archiveOnDelete` | Archive PVC when deleting | `true` |
| `storageClass.onDelete` | Strategy on PVC deletion. Overrides `archiveOnDelete` when set to lowercase values `delete` or `retain` | null |
| `storageClass.pathPattern` | Specifies a template for the directory name | null |
| `storageClass.accessModes` | Set access mode for PV | `ReadWriteOnce` |
| `leaderElection.enabled` | Enables or disables leader election | `true` |
| `nfs.server` | Hostname of the NFS server (required) | null (ip or hostname) |
| `nfs.path` | Basepath of the mount point to be used | `/nfs-storage` |
| `nfs.mountOptions` | Mount options (e.g. 'nfsvers=3') | null |
| `resources` | Resources required (e.g. CPU, memory) | `{}` |
| `rbac.create` | Use Role-based Access Control | `true` |
| `podSecurityPolicy.enabled` | Create & use Pod Security Policy resources | `false` |
| `priorityClassName` | Set pod priorityClassName | null |
| `serviceAccount.create` | Should we create a ServiceAccount | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | null |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
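Each parameter maps directly onto a `--set` flag (or a values file). For example, a hedged override of the class name and reclaim policy (values are illustrative, not defaults):
```console
$ helm install my-release nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path \
    --set storageClass.name=nfs-fast \
    --set storageClass.reclaimPolicy=Retain
```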


@@ -1,5 +0,0 @@
nfs:
  server: 127.0.0.1
podSecurityPolicy:
  enabled: true
buildMode: true


@@ -1,62 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nfs-subdir-external-provisioner.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nfs-subdir-external-provisioner.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nfs-subdir-external-provisioner.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "nfs-subdir-external-provisioner.provisionerName" -}}
{{- if .Values.storageClass.provisionerName -}}
{{- printf .Values.storageClass.provisionerName -}}
{{- else -}}
cluster.local/{{ template "nfs-subdir-external-provisioner.fullname" . -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "nfs-subdir-external-provisioner.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "nfs-subdir-external-provisioner.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for podSecurityPolicy.
*/}}
{{- define "podSecurityPolicy.apiVersion" -}}
{{- if semverCompare ">=1.10-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "policy/v1beta1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
{{- end -}}


@@ -1,30 +0,0 @@
{{- if .Values.rbac.create }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "nfs-subdir-external-provisioner.fullname" . }}-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
{{- if .Values.podSecurityPolicy.enabled }}
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: [{{ template "nfs-subdir-external-provisioner.fullname" . }}]
{{- end }}
{{- end }}


@@ -1,19 +0,0 @@
{{- if .Values.rbac.create }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: run-{{ template "nfs-subdir-external-provisioner.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "nfs-subdir-external-provisioner.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ template "nfs-subdir-external-provisioner.fullname" . }}-runner
  apiGroup: rbac.authorization.k8s.io
{{- end }}


@@ -1,81 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "nfs-subdir-external-provisioner.fullname" . }}
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: {{ .Values.strategyType }}
  selector:
    matchLabels:
      app: {{ template "nfs-subdir-external-provisioner.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
      {{- if and (.Values.tolerations) (semverCompare "<1.6-0" .Capabilities.KubeVersion.GitVersion) }}
        scheduler.alpha.kubernetes.io/tolerations: '{{ toJson .Values.tolerations }}'
      {{- end }}
      labels:
        app: {{ template "nfs-subdir-external-provisioner.name" . }}
        release: {{ .Release.Name }}
    spec:
      serviceAccountName: {{ template "nfs-subdir-external-provisioner.serviceAccountName" . }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.affinity }}
      affinity:
{{ toYaml .Values.affinity | indent 8 }}
      {{- end }}
      {{- if .Values.priorityClassName }}
      priorityClassName: {{ .Values.priorityClassName | quote }}
      {{- end }}
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: nfs-subdir-external-provisioner-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: {{ template "nfs-subdir-external-provisioner.provisionerName" . }}
            - name: NFS_SERVER
              value: {{ .Values.nfs.server }}
            - name: NFS_PATH
              value: {{ .Values.nfs.path }}
            {{- if eq .Values.leaderElection.enabled false }}
            - name: ENABLE_LEADER_ELECTION
              value: "false"
            {{- end }}
          {{- with .Values.resources }}
          resources:
{{ toYaml . | indent 12 }}
          {{- end }}
      volumes:
        - name: nfs-subdir-external-provisioner-root
          {{- if .Values.buildMode }}
          emptyDir: {}
          {{- else if .Values.nfs.mountOptions }}
          persistentVolumeClaim:
            claimName: pvc-{{ template "nfs-subdir-external-provisioner.fullname" . }}
          {{- else }}
          nfs:
            server: {{ .Values.nfs.server }}
            path: {{ .Values.nfs.path }}
          {{- end }}
      {{- if and (.Values.tolerations) (semverCompare "^1.6-0" .Capabilities.KubeVersion.GitVersion) }}
      tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
      {{- end }}


@@ -1,25 +0,0 @@
{{ if .Values.nfs.mountOptions -}}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-{{ template "nfs-subdir-external-provisioner.fullname" . }}
  labels:
    nfs-subdir-external-provisioner: {{ template "nfs-subdir-external-provisioner.fullname" . }}
spec:
  capacity:
    storage: 10Mi
  volumeMode: Filesystem
  accessModes:
    - {{ .Values.storageClass.accessModes }}
  persistentVolumeReclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
  storageClassName: ""
  {{- if .Values.nfs.mountOptions }}
  mountOptions:
    {{- range .Values.nfs.mountOptions }}
    - {{ . }}
    {{- end }}
  {{- end }}
  nfs:
    server: {{ .Values.nfs.server }}
    path: {{ .Values.nfs.path }}
{{ end -}}


@@ -1,17 +0,0 @@
{{ if .Values.nfs.mountOptions -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-{{ template "nfs-subdir-external-provisioner.fullname" . }}
spec:
  accessModes:
    - {{ .Values.storageClass.accessModes }}
  volumeMode: Filesystem
  storageClassName: ""
  selector:
    matchLabels:
      nfs-subdir-external-provisioner: {{ template "nfs-subdir-external-provisioner.fullname" . }}
  resources:
    requests:
      storage: 10Mi
{{ end -}}


@@ -1,31 +0,0 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: {{ template "podSecurityPolicy.apiVersion" . }}
kind: PodSecurityPolicy
metadata:
  name: {{ template "nfs-subdir-external-provisioner.fullname" . }}
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'secret'
    - 'nfs'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: false
{{- end }}


@@ -1,21 +0,0 @@
{{- if .Values.rbac.create }}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: leader-locking-{{ template "nfs-subdir-external-provisioner.fullname" . }}
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
{{- if .Values.podSecurityPolicy.enabled }}
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: [{{ template "nfs-subdir-external-provisioner.fullname" . }}]
{{- end }}
{{- end }}


@@ -1,19 +0,0 @@
{{- if .Values.rbac.create }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: leader-locking-{{ template "nfs-subdir-external-provisioner.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "nfs-subdir-external-provisioner.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: Role
  name: leader-locking-{{ template "nfs-subdir-external-provisioner.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
{{- end }}


@@ -1,11 +0,0 @@
{{ if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "nfs-subdir-external-provisioner.serviceAccountName" . }}
{{- end -}}


@@ -1,32 +0,0 @@
{{ if .Values.storageClass.create -}}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: {{ template "nfs-subdir-external-provisioner.name" . }}
    chart: {{ template "nfs-subdir-external-provisioner.chart" . }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ .Values.storageClass.name }}
  {{- if .Values.storageClass.defaultClass }}
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  {{- end }}
provisioner: {{ template "nfs-subdir-external-provisioner.provisionerName" . }}
allowVolumeExpansion: {{ .Values.storageClass.allowVolumeExpansion }}
reclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
parameters:
  archiveOnDelete: "{{ .Values.storageClass.archiveOnDelete }}"
  {{- if .Values.storageClass.pathPattern }}
  pathPattern: "{{ .Values.storageClass.pathPattern }}"
  {{- end }}
  {{- if .Values.storageClass.onDelete }}
  onDelete: "{{ .Values.storageClass.onDelete }}"
  {{- end }}
{{- if .Values.nfs.mountOptions }}
mountOptions:
  {{- range .Values.nfs.mountOptions }}
  - {{ . }}
  {{- end }}
{{- end }}
{{ end -}}


@@ -1,87 +0,0 @@
replicaCount: 1
strategyType: Recreate
image:
  repository: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner
  tag: v4.0.2
  pullPolicy: IfNotPresent

nfs:
  server: 192.168.9.99
  path: /mnt/data
  mountOptions:

# For creating the StorageClass automatically:
storageClass:
  create: true
  # Set a provisioner name. If unset, a name will be generated.
  # provisionerName:
  # Set StorageClass as the default StorageClass
  # Ignored if storageClass.create is false
  defaultClass: true
  # Set a StorageClass name
  # Ignored if storageClass.create is false
  name: nfs-client
  # Allow volume to be expanded dynamically
  allowVolumeExpansion: true
  # Method used to reclaim an obsoleted volume
  reclaimPolicy: Delete
  # When set to false your PVs will not be archived by the provisioner upon deletion of the PVC.
  archiveOnDelete: false
  # If it exists and has 'delete' value, delete the directory. If it exists and has 'retain' value, save the directory.
  # Overrides archiveOnDelete.
  # Ignored if value not set.
  onDelete:
  # Specifies a template for creating a directory path via PVC metadata such as labels, annotations, name or namespace.
  # Ignored if value not set.
  pathPattern:
  # Set access mode - ReadWriteOnce, ReadOnlyMany or ReadWriteMany
  accessModes: ReadWriteOnce

leaderElection:
  # When set to false leader election will be disabled
  enabled: true

## For RBAC support:
rbac:
  # Specifies whether RBAC resources should be created
  create: true

# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false

## Set pod priorityClassName
# priorityClassName: ""

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}


@@ -1,12 +0,0 @@
#!/bin/bash
# ubuntu 20.04
FOLDER=/mnt/data
apt update && apt install -y nfs-kernel-server
mkdir -p $FOLDER   # make sure the export directory exists before chown
chown nobody:nogroup $FOLDER
echo "$FOLDER 192.168.9.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
systemctl restart nfs-kernel-server
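# hedged sanity check: the export should now list $FOLDER for 192.168.9.0/24
showmount -e localhost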


@@ -0,0 +1 @@
HOSTNAME=traefik.domain.ru

3.RKE2/traefik/.gitignore

@@ -0,0 +1 @@
.env

3.RKE2/traefik/.gitkeep


@@ -0,0 +1,17 @@
http:
  routers:
    app-domain-ru-route:
      entryPoints:
        - https
      service: app-domain-ru-service
      rule: Host(`$APP_HOSTNAME`)
      tls:
        certResolver: letsEncrypt
  services:
    app-domain-ru-service:
      loadBalancer:
        passHostHeader: true
        servers:
          - url: http://192.168.20.101
          - url: http://192.168.20.102
          - url: http://192.168.20.103


@@ -0,0 +1,29 @@
tcp:
  routers:
    rke2-api:
      entryPoints:
        - k8s-api
      rule: "HostSNI(`*`)"
      service: rke2-api-service
      tls:
        passthrough: true
    rke2-connect:
      entryPoints:
        - rke2-connect
      rule: "HostSNI(`*`)"
      service: rke2-connect-service
      tls:
        passthrough: true
  services:
    rke2-api-service:
      loadBalancer:
        servers:
          - address: 192.168.20.101:6443
          # - address: 192.168.20.102:6443
          # - address: 192.168.20.103:6443
    rke2-connect-service:
      loadBalancer:
        servers:
          - address: 192.168.20.101:9345
          # - address: 192.168.20.102:9345
          # - address: 192.168.20.103:9345
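# Hedged check through the balancer once Traefik is up (lab address;
# /version is anonymously readable on a default RKE2 API server):
#   curl -k https://192.168.20.2:6443/version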


@@ -0,0 +1,39 @@
global:
  checkNewVersion: true
log:
  level: error
  filePath: /data/stdout.log
  format: common
serversTransport:
  insecureSkipVerify: true
api:
  dashboard: true
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
  k8s-api:
    address: ":6443"
  rke2-connect:
    address: ":9345"
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: /custom
    watch: true
certificatesResolvers:
  letsEncrypt:
    acme:
      email: mail@gmail.com
      storage: acme.json
      httpChallenge:
        entryPoint: http


@@ -0,0 +1,44 @@
version: '3.9'
services:
  traefik:
    image: traefik
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    ports:
      - 80:80
      - 443:443
      - 6443:6443
      - 9345:9345
    extra_hosts:
      kubernetes.default: 127.0.0.1
    cap_add:
      - NET_BIND_SERVICE
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/custom/:/custom/:ro
      - ./data/acme.json:/acme.json
      - ./logs/stdout.log:/data/stdout.log:rw
      - ./logs/access.log:/data/access.log:rw
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=https"
      - "traefik.http.routers.traefik.rule=Host(`$HOSTNAME`)"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.routers.traefik.tls.certresolver=letsEncrypt"
      - "traefik.http.routers.traefik.service=api@internal"
      - "traefik.http.services.traefik-traefik.loadbalancer.server.port=888"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      # global redirect to https
      - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=http"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
    networks:
      - webproxy
networks:
  webproxy:
    name: webproxy

3.RKE2/traefik/init.sh

@@ -0,0 +1,7 @@
#!/bin/bash
# the compose file mounts these paths; make sure they exist first
mkdir -p data logs
touch data/acme.json
chmod 600 data/acme.json
touch logs/stdout.log
touch logs/access.log
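# typical bring-up after this script, from the same directory (a sketch):
#   docker compose up -d
#   docker compose logs -f traefik   # watch the ACME challenge complete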