Jeff’s note · Kubernetes
How to Repair File System Provided From Kubernetes Ceph Persistent Volume
One day, a hardware fault on the underlying VMs affected the entire self-hosted Kubernetes cluster. The MySQL application deployed in that environment could no longer mount its PV; the pod reported the following error:

MountVolume.SetUp failed for volume "pvc-78cf22fa-d776-43a4-98d7-d594f02ea018" : mount command failed, status: Failure, reason: failed to mount volume /dev/rbd13 [xfs] to /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph/mounts/pvc-78cf22fa-d776-43a4-98d7-d594f02ea018, error mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph/mounts/pvc-78cf22fa-d776-43a4-98d7-d594f02ea018 --scope -- mount -t xfs -o rw,defaults /dev/rbd13 /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph/mounts/pvc-78cf22fa-d776-43a4-98d7-d594f02ea018
Output: Running scope as unit run-r2594d6c82152421c8891bfa8761e8c05.scope.
mount: mount /dev/rbd13 on /var/lib/kubelet/plugins/ceph.rook.io/rook-ceph/mounts/pvc-78cf22fa-d776-43a4-98d7-d594f02ea018 failed: Structure needs cleaning

Resolution: use the Ceph toolbox to re-map the image behind the PV, then attempt a repair with fsck.
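The repair flow can be sketched as below. The device and PV names come from the error above; everything else (workload name, how the image gets mapped) is an assumption about a typical rook-ceph setup. Since the volume is XFS, the actual repair tool is xfs_repair rather than generic fsck:

```shell
#!/bin/sh
# Hedged sketch of repairing the XFS filesystem behind a Ceph-backed PV.
# "mysql" as the workload name is an assumption; PV/device come from the error.
set -eu
PV="pvc-78cf22fa-d776-43a4-98d7-d594f02ea018"
DEV="/dev/rbd13"

# 1. Scale the consumer down so nothing holds the volume, e.g.:
#      kubectl scale statefulset mysql --replicas=0
# 2. With the image mapped on a node (or re-mapped via the ceph toolbox)
#    but NOT mounted, let xfs_repair clean the filesystem:
#      xfs_repair "$DEV"
# 3. If xfs_repair refuses to run because of a dirty log, mount and
#    unmount once to replay it, or zero the log with -L as a last
#    resort (recent writes may be lost):
#      xfs_repair -L "$DEV"
echo "repair target: $DEV ($PV)"
```

Once xfs_repair exits cleanly, scale the workload back up and the kubelet should mount the volume normally.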
CSI · Ingress · Kubernetes
937 words
5 minutes
Kubernetes Guide for Developers
Environment
Before you use any client tool to connect to Kubernetes, check the endpoint in the "server" field of the "cluster" entry in your credential (kubeconfig) file, and make sure you can reach the kube-apiserver. In the example below the endpoint is "k8s-cluster":

apiVersion: v1
kind: Config
clusters:
  - name: team-a-admin@kubernetes
    cluster:
      server: 'https://k8s-cluster:8443'
      certificate-authority-data: >-
        xxxxx
users:
  - name: team-a-admin
    user:
      token: >-
        xxxxx
contexts:
  - name: team-a-admin@kubernetes
    context:
      user: team-a-admin
      cluster: team-a-admin@kubernetes
      namespace: team-a
current-context: team-a-admin@kubernetes

This example is based on a self-hosted Kubernetes v1.18 environment; your machine might not be able to resolve the target hostname,
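A quick reachability check against that endpoint can be sketched as follows. The hostname "k8s-cluster" is taken from the example kubeconfig; the curl probe is an assumption that your cluster keeps the default RBAC binding (system:public-info-viewer) that exposes /version without credentials:

```shell
#!/bin/sh
# Sketch: verify the "server" endpoint from the kubeconfig is reachable.
set -eu
SERVER="https://k8s-cluster:8443"

# Strip the scheme and port to get the hostname you may need to add
# to /etc/hosts if DNS cannot resolve it:
HOST=${SERVER#https://}
HOST=${HOST%%:*}
echo "hostname to resolve: $HOST"

# Reachability probe; -k only skips CA verification for this check:
#   curl -ks "$SERVER/version"
```

If the probe times out, fix name resolution (or routing/firewalling) before debugging kubectl itself.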
497 words
3 minutes
Nginx Ingress Controller K8S Guide
Nginx introduction
Nginx is an asynchronous, event-driven web server that can also serve as a reverse proxy, load balancer, and HTTP cache.

Example configuration

server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    listen 443 ssl default_server;
    ssl_certificate /tls/server.crt;
    ssl_certificate_key /tls/server.key;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        #root /usr/share/nginx/html;
        #index index.html index.htm;
        proxy_pass http://192.168.24.100:9090/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_connect_timeout 60s;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }

    location /socket.io {
        proxy_pass http://192.168.24.100:9091/;
        # Version 1.1 is recommended for use with keepalive connections
        proxy_http_version 1.1;
        # WebSocket
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Cookie $http_cookie;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Run nginx -s reload to make Nginx pick up the new configuration.
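Note that the /socket.io block references $connection_upgrade, which nginx does not define on its own; the usual way to provide it is a map in the http {} context, sketched below, so the Connection header degrades to "close" when the client requests no upgrade:

```nginx
# Sketch: define $connection_upgrade at http {} level (outside server {}).
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

Running nginx -t before nginx -s reload will catch a missing definition like this before it takes down the running server.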
Ingress · Kubernetes · Reverse Proxy
546 words
3 minutes
Load balancer & VIP
HAProxy & Keepalived

Init HA control plane & worker nodes
$ sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
...
You can now join any number of control-plane nodes by running the following command on each as root:

  kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

Please note that the certificate-key gives access to cluster sensitive data, so keep it secret! As a safeguard, uploaded certs will be deleted in two hours; if necessary, you can use kubeadm init phase upload-certs to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866

Networking with Weave Net
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
https://www.weave.works/docs/net/latest/overview/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Ingress controller with Nginx
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
https://kubernetes.github.io/ingress-nginx/

Storage Ceph
$ git clone --single-branch --branch v1.10.1 https://github.com/rook/rook.git
$ cd rook/deploy/examples
$ kubectl create -f crds.yaml -f common.yaml -f operator.yaml
$ kubectl create -f cluster.yaml
https://rook.io/docs/rook/v1.10/Getting-Started/intro/

Monitoring Prometheus, Grafana
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install -n [NAMESPACE] [RELEASE_NAME] prometheus-community/kube-prometheus-stack
https://prometheus.io
https://grafana.com/oss/grafana/

Log Loki
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm upgrade --install --namespace=[NAMESPACE] [RELEASE_NAME] grafana/loki-stack
https://grafana.com/oss/loki/
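For the HAProxy & Keepalived layer in front of the control plane, a minimal TCP frontend/backend might look like the sketch below. The 192.168.0.200:6443 endpoint matches the join commands above; the three 192.168.0.20x control-plane addresses are placeholders (assumptions), and Keepalived would float 192.168.0.200 as the VIP between the HAProxy instances:

```
# Sketch: HAProxy TCP pass-through for kube-apiserver (addresses are placeholders)
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend control-plane

backend control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server cp1 192.168.0.201:6443 check
    server cp2 192.168.0.202:6443 check
    server cp3 192.168.0.203:6443 check
```

TCP mode keeps TLS end-to-end between clients and the apiservers, so HAProxy needs no certificates of its own.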
CNI · CSI · Kubernetes
209 words
1 minute
Thingsboard Microservice Architecture, version v3.5.1
Cloud Infrastructure Environment:
- Azure Kubernetes v1.26.3, Tier: Standard; Load Balancer sku: Standard
- Azure Application Gateway, Tier: WAF V2 (auto scale, 1-3 instances)
- Azure PostgreSQL Flexible Server, version 14.8, sku: 1 * D2ds_v4 (2 vCores, 8 GiB RAM, 128 GiB 500 IOPS disk)
- Azure Cache for Redis, sku: 1 * Premium P1 (6 GB cache size)
- Azure Managed Cassandra, version 4, sku: 9 * D8s_V5 (8 vCPUs, 32 GB RAM, 2 * P30 1024 GiB, 5000 IOPS, 200 MB/sec disk)
Distributed System · High Availability · IoT · Kubernetes · Performance Test · Thingsboard
8413 words
40 minutes