For this tutorial I'm assuming Kubernetes with Helm + Ingress is already deployed. If not, I've included the commands I used near the end of this article.
My NAS at home runs Rockstor, with btrfs in RAID10. Rockstor supports (Docker) apps through a feature called Rock-ons, which include OwnCloud, but after an update and some other issues with Rockstor my deployment broke at some point. That frustrated me, so I decided to switch to Kubernetes instead.
I use my own cloud (no pun intended) as an alternative to services owned by Google/Amazon/Apple. If you plan to do the same, just make sure to also make proper backups.
Following the instructions, copy their default values.yaml (from here) and tweak all the values. It seems important to define a hostname! (If you try accessing the service later via its IP address, the web interface will not accept it.)
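If you have no DNS record for that hostname yet, pointing it at the ingress controller's LAN IP from a client machine is enough for testing; a sketch, where owncloud.example.lan is a hypothetical stand-in for your real hostname and the IP is the one the controller gets further down in this post:
# owncloud.example.lan is a made-up example; use the hostname from your values.yaml
echo "192.168.2.122 owncloud.example.lan" | sudo tee -a /etc/hosts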
helm install --name my-owncloud -f owncloud.yaml stable/owncloud --set rbac.create=true
Notes: owncloud.yaml is my values.yaml, and I don't expect rbac.create=true to be needed, but I used it anyway; it was left over from copy & pasting another command. For convenience you can download my owncloud.yaml.
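If you change values in owncloud.yaml later, the release can be updated in place instead of reinstalled; a minimal sketch, using the same Helm 2 style as the install command above:
helm upgrade -f owncloud.yaml my-owncloud stable/owncloud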
In my case I made a btrfs share named /mnt2/NAS/kubeowncloudstorage. Then I created three folders inside it:
mkdir -p /mnt2/NAS/kubeowncloudstorage/data
mkdir -p /mnt2/NAS/kubeowncloudstorage/mariadb
mkdir -p /mnt2/NAS/kubeowncloudstorage/apache
Set the right permissions on these folders; OwnCloud will write as user id 1.
chown 1:1 /mnt2/NAS/kubeowncloudstorage -R
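To double-check the ownership afterwards (my addition, not part of the original notes), each folder should report 1:1:
stat -c '%u:%g %n' /mnt2/NAS/kubeowncloudstorage/*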
Then apply the following yaml (kubectl apply -f kube_owncloud_storage.yaml):
nas:/root # cat kube_owncloud_storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-owncloud-storage-data
  labels:
    type: local
spec:
  capacity:
    storage: 3072Gi
  storageClassName: owncloud-storage-data
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt2/NAS/kubeowncloudstorage/data
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-owncloud-storage-mariadb
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  storageClassName: owncloud-storage-mariadb
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt2/NAS/kubeowncloudstorage/mariadb
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-owncloud-storage-apache
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: owncloud-storage-apache
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt2/NAS/kubeowncloudstorage/apache
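After applying, the three volumes should appear here; once the chart's PersistentVolumeClaims bind to them, their STATUS changes from Available to Bound. A quick check:
kubectl get pv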
If you redeploy Kubernetes and/or the system in general (I forget exactly when this happens), a PersistentVolume may end up in a state that prevents PersistentVolumeClaims from binding to it.
There was a trick to force it to bind: IIRC kubectl edit pv kube-owncloud-storage-data and remove the reference it has to the existing PVC. It was a few weeks ago that I experimented with this, so sorry, I don't remember the details.
I only now stumbled upon my notes and decided to wrap them up in a blog post.
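If you run into this, clearing the stale claim reference with a patch should have the same effect as editing the PV by hand; a sketch, assuming the old PVC really is gone and the data can safely be re-claimed:
kubectl patch pv kube-owncloud-storage-data -p '{"spec":{"claimRef":null}}'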
nas:/root # cat owncloud_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/proxy-body-size: 500m
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
  name: owncloud
  namespace: default
spec:
  rules:
  - host: ******DOMAIN NAME*******
    http:
      paths:
      - backend:
          serviceName: my-owncloud-owncloud
          servicePort: 80
        path: /
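Apply it the same way as the storage yaml and verify the ingress shows up:
kubectl apply -f owncloud_ingress.yaml
kubectl get ingress owncloud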
Take a careful look at these two options in the annotations:
ingress.kubernetes.io/proxy-body-size: 500m
nginx.ingress.kubernetes.io/proxy-body-size: 500m
They took me two hours of debugging: OwnCloud was throwing 413 Request Entity Too Large errors when syncing some larger video files from my phone. Thinking this must be an issue inside OwnCloud, I experimented with lots of parameters and fixes for PHP, Apache, etc. Then I realized it could be the Ingress in Kubernetes. The example above makes sure uploads up to half a gigabyte are not blocked.
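If you want to verify that the annotation actually reached the controller, the rendered nginx configuration should contain a matching client_max_body_size; a hedged check against the controller pod from the listing below (substitute your own pod name):
kubectl exec my-nginx-nginx-ingress-controller-664f4547d8-vjgkt -- grep client_max_body_size /etc/nginx/nginx.conf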
The end result should look something like this in Kubernetes:
nas:/root # kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-nginx-ingress-controller-664f4547d8-vjgkt 1/1 Running 0 16d
pod/my-nginx-nginx-ingress-default-backend-5bcb65f5f4-qrwcd 1/1 Running 0 16d
pod/my-owncloud-mariadb-0 1/1 Running 0 16d
pod/my-owncloud-owncloud-6cddfdc8f4-hmrh5 1/1 Running 2 16d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
service/my-nginx-nginx-ingress-controller LoadBalancer 10.103.57.37 192.168.2.122 80:32030/TCP,443:30453/TCP 16d
service/my-nginx-nginx-ingress-default-backend ClusterIP 10.101.16.224 <none> 80/TCP 16d
service/my-owncloud-mariadb ClusterIP 10.104.48.71 <none> 3306/TCP 16d
service/my-owncloud-owncloud LoadBalancer 10.102.95.4 <pending> 80:32287/TCP 16d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx-nginx-ingress-controller 1 1 1 1 16d
deployment.apps/my-nginx-nginx-ingress-default-backend 1 1 1 1 16d
deployment.apps/my-owncloud-owncloud 1 1 1 1 16d
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-nginx-ingress-controller-664f4547d8 1 1 1 16d
replicaset.apps/my-nginx-nginx-ingress-default-backend-5bcb65f5f4 1 1 1 16d
replicaset.apps/my-owncloud-owncloud-6cddfdc8f4 1 1 1 16d
NAME DESIRED CURRENT AGE
statefulset.apps/my-owncloud-mariadb 1 1 16d
nas:/root # kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
owncloud ***************** 80 16d
nas:/root # kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
kube-owncloud-storage-apache 1Gi RWO Retain Bound default/my-owncloud-owncloud-apache owncloud-storage-apache 16d
kube-owncloud-storage-data 3Ti RWO Retain Bound default/my-owncloud-owncloud-owncloud owncloud-storage-data 16d
kube-owncloud-storage-mariadb 8Gi RWO Retain Bound default/data-my-owncloud-mariadb-0 owncloud-storage-mariadb 16d
nas:/root # kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-owncloud-mariadb-0 Bound kube-owncloud-storage-mariadb 8Gi RWO owncloud-storage-mariadb 16d
my-owncloud-owncloud-apache Bound kube-owncloud-storage-apache 1Gi RWO owncloud-storage-apache 16d
my-owncloud-owncloud-owncloud Bound kube-owncloud-storage-data 3Ti RWO owncloud-storage-data 16d
Just in case you are also attempting to install Kubernetes for the first time, here is a reference of the commands used in my setup. First I followed the official docs to deploy kubeadm, kubelet, etc. See here.
My init looked like this:
kubeadm init --pod-network-cidr=192.168.0.0/16
At this point you may get some errors that you have to fix, maybe even with kubeadm reset followed by a retry.
Once I was okay with the remaining errors, I proceeded with:
kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all
# these steps are recommended by the above command:
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# I chose calico for networking
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# Then after a while (maybe check if kubelet etc. come up correctly, try "kubectl get no")
# Make sure the master node is not excluded for running pods.
kubectl taint nodes --all node-role.kubernetes.io/master-
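# (my addition, to verify) the Taints field on the node should no longer list NoSchedule:
kubectl describe nodes | grep -i taints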
# I also executed this patch, but I think it's not needed anymore; it was still in my helper script
kubectl -n kube-system get deployment coredns -o yaml | sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | kubectl apply -f -
# Then I looked up the kubelet service file with `systemctl cat kubelet` and edited:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# added the --resolv-conf flag to the ExecStart line in the above file:
#
#ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --resolv-conf=/etc/resolv.conf
#
# ALSO: I edited /etc/resolv.conf: removed the ipv6 nameserver entry and added 8.8.8.8, as per https://hk.saowen.com/a/e6cffc1e02c2b4643bdd525ff9e8e4cfb49a4790062508dca478c0c8a0361b5a
systemctl daemon-reload
systemctl restart kubelet
kubectl get pod -n kube-system
kubectl delete pod coredns-68fb79bcf6-9zdtz -n kube-system
kubectl delete pod coredns-68fb79bcf6-t7vsm -n kube-system
kubectl get pod -n kube-system -o wide
The solution for the last bit I got from here. However, this may have been a random issue that I just ran into, because on different servers I don't recall having to do the steps regarding coredns.
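To confirm cluster DNS is healthy once the coredns pods are recreated, a quick test I would add (not from my original notes; note that nslookup is broken in newer busybox images, hence the 1.28 tag):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default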
Possibly needed commands to (re)initialize Helm/Tiller:
helm reset --force
helm init --upgrade --service-account tiller
# don't remember if these two commands were still necessary
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
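Whether Tiller ended up healthy can be checked afterwards; a sketch:
helm version
kubectl -n kube-system get deploy tiller-deploy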
Links to solutions for problems that I ran into at some point in time:
Links that eventually pointed me in the right direction for the 413 Request Entity Too Large error.
Ray Burgemeestre
2018-11-09 15:35:11
`kubectl edit service/my-nginx-nginx-ingress-controller`
and add externalIPs...
spec:
  clusterIP: 10.99.151.43
  externalIPs:
  - *** EXTERNAL IP TO BIND HERE ***
  externalTrafficPolicy: Cluster
  ports:
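The same change can also be made non-interactively; a sketch using the LAN IP from the service listing above (replace it with your own):
kubectl patch service my-nginx-nginx-ingress-controller -p '{"spec":{"externalIPs":["192.168.2.122"]}}'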