Installing the Helm Service and Setting Up a Helm Test Environment

Requirements

  • You must have Kubernetes installed. We recommend version 1.4.1 or later.
  • You should also have a local configured copy of kubectl.

Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.

Download

HELM_VERSION=${HELM_VERSION:-"2.5.0"}
HELM="helm-v${HELM_VERSION}-linux-amd64"

curl -L https://storage.googleapis.com/kubernetes-helm/$HELM.tar.gz -o $HELM.tar.gz

tar -xvzf  $HELM.tar.gz -C /tmp

mv /tmp/linux-amd64/helm /usr/local/bin/helm
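
A quick sanity check after the download (only the client half is queried, since tiller is not installed yet):

helm version --client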

All release versions:

https://github.com/kubernetes/helm/releases

tiller (the Helm server side)

Installation

  • In-cluster installation (tiller installed on the k8s cluster)

    helm init
    

    If all goes well, this installs a tiller pod in the kube-system namespace of the k8s cluster.
    By default, the CurrentContext in ~/.kube/config determines which k8s cluster it is deployed to.

    You can target a different cluster by setting the $KUBECONFIG environment variable to another kubectl config file and/or passing --kube-context to select a context.

  • Local installation

     /bin/tiller
    

    In this case tiller connects by default to the k8s cluster associated with the CurrentContext of kubectl's default config file ($HOME/.kube/config), which it uses to store release data and so on.

    You can also set $KUBECONFIG to point at a different cluster's config file.

    You must tell helm to connect to this locally running tiller instead of the in-cluster one. There are two ways (a combined sketch follows this list):

    • helm --host=<ip>
    • export HELM_HOST=localhost:44134
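
A minimal sketch of the whole local flow, assuming the tiller binary sits at /bin/tiller as above:

/bin/tiller &
export HELM_HOST=localhost:44134
helm version   # prints both Client and Server versions when the connection works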

Installing to a specific cluster

As with the rest of the Helm commands, 'helm init' discovers Kubernetes clusters
by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

To have helm deploy tiller to the cluster described by the context dev of a specific kubectl config file:

export KUBECONFIG="/path/to/kubeconfig"
helm init --kube-context="dev"
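
Before picking a context, you can list what a given kubeconfig contains (the current context is marked with an asterisk):

export KUBECONFIG="/path/to/kubeconfig"
kubectl config get-contexts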

Storage

tiller supports two storage drivers:

  • memory
  • configmap

Both drivers work with either deployment method. With the memory driver, releases and other data are lost when tiller restarts; the configmap driver (the default, as the startup log later in this document shows) persists them as ConfigMaps in the cluster.
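
The driver is selected when tiller starts, using the same -storage flag this document exercises later in the Warning section:

./tiller -storage=memory      # volatile: releases vanish when tiller restarts
./tiller -storage=configmap   # default: releases persisted as ConfigMaps in kube-system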

What you should see

After running helm init, you will see:

root@node01:~# helm init
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /root/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.

This installs the deployment tiller-deploy and the service tiller-deploy in the kube-system namespace of the k8s cluster.
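
Both objects can be verified with kubectl:

kubectl -n kube-system get deployment,service tiller-deploy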

Notes:

  • If you run helm init --client-only, tiller is not installed; only the files under the helm home directory are created and $HELM_HOME is configured.
  • Files that already exist under $HELM_HOME are neither recreated nor modified; missing files/directories are created.

Troubleshooting

Context deadline exceeded

root@node01:~# helm version --debug
[debug] SERVER: "localhost:44134"
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
[debug] context deadline exceeded
Error: cannot connect to Tiller

https://github.com/kubernetes/helm/issues/2409
The upstream issue is unresolved.

After a few more attempts it worked again. What was done:

  1. unset HELM_HOST (HELM_HOST had previously been set to 127.0.0.1:44134, and the svc tiller-deploy had been changed to type NodePort)
  2. Uninstalled (removed the tiller-related svc and deploy plus the /root/.helm directory) and reinstalled; the cleanup is sketched below
  3. Everything worked again.
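
A sketch of the manual cleanup from step 2, assuming tiller's default object names in kube-system:

kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service tiller-deploy
rm -rf /root/.helm
helm init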

socat not found

root@node01:~# helm version
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
E0711 10:09:50.160064   10916 portforward.go:332] an error occurred forwarding 33491 -> 44134: error forwarding port 44134 to pod tiller-deploy-542252878-15h67_kube-system, uid : unable to do port forwarding: socat not found.
Error: cannot connect to Tiller

Resolved:
Install socat on the kubelet nodes. https://github.com/kubernetes/helm/issues/966
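
For example, on a Debian/Ubuntu node (use your distro's own package manager elsewhere):

apt-get install -y socat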

Uninstall

  • helm reset removes the tiller pod that was installed on the k8s cluster.

  • When the context deadline exceeded error above occurs, helm reset fails with the same error. Run helm reset -f to force-delete the pod on the k8s cluster.

  • To also remove the directories and data created by helm init, run helm reset --remove-helm-home (the flags can be combined, as sketched below).
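
Putting both flags together, the same combination that appears in the note below:

helm reset --force --remove-helm-home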

Note

With tiller installed by helm 2.5, when context deadline exceeded occurs, running helm reset --remove-helm-home --force with a helm 2.4 client does not remove the pods and configuration that tiller created. This is a bug in version 2.4.

Test environment

Local tiller

tiller

wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./tiller 
[main] 2017/07/26 14:59:54 Starting Tiller v2.5+unreleased (tls=false)
[main] 2017/07/26 14:59:54 GRPC listening on :44134
[main] 2017/07/26 14:59:54 Probes listening on :44135
[main] 2017/07/26 14:59:54 Storage driver is ConfigMap
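
The log shows the probe server on :44135; hitting it is a quick health check (this assumes the /readiness and /liveness paths that the in-cluster tiller deployment probes use):

curl http://localhost:44135/readiness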

Reference

https://docs.helm.sh/using_helm/#running-tiller-locally

When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)

  • kubectl config view reads exactly the ~/.kube/config file

helm

wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ export HELM_HOST=localhost:44134
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm init --client-only
Creating /home/wwh/.helm 
Creating /home/wwh/.helm/repository 
Creating /home/wwh/.helm/repository/cache 
Creating /home/wwh/.helm/repository/local 
Creating /home/wwh/.helm/plugins 
Creating /home/wwh/.helm/starters 
Creating /home/wwh/.helm/cache/archive 
Creating /home/wwh/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /home/wwh/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!

You must run helm init --client-only to initialize the directory structure under the helm home; otherwise helm repo list fails with the following error:

wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm repo list
Error: open /home/wwh/.helm/repository/repositories.yaml: no such file or directory

Warning

Without a k8s cluster behind it, this approach cannot exercise commands like helm install ./testChart --dry-run,
even when storage is set to memory via ./tiller -storage=memory.

Local tiller pointing at a backend k8s cluster

tiller

Run tiller locally, but point it at a k8s cluster running as the backend:

# Point $KUBECONFIG at the backend cluster's config file; tiller uses it
# as its kube client configuration when initializing the kube client.
export KUBECONFIG=/tmp/k8sconfig-688597196
./tiller

helm

helm is set up the same way as before.

Tiller storage test

Experiment: configmap

# Install tiller locally, pointing at a backend k8s cluster; storage is configmap (the default)
export KUBECONFIG=/tmp/k8sconfig-688597196
./tiller
 
# in another shell
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ export HELM_HOST=localhost:44134
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm init --client-only
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm install stable/wordpress --debug
# the release was installed successfully
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm list
NAME                REVISION    UPDATED                     STATUS      CHART           NAMESPACE
tinseled-warthog    1           Fri Aug 25 17:13:53 2017    DEPLOYED    wordpress-0.6.8 default
# inspect the cluster's configmaps
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE     NAME                                 DATA      AGE
kube-public   cluster-info                         2         6d
kube-system   calico-config                        3         6d
kube-system   extension-apiserver-authentication   6         6d
kube-system   kube-proxy                           1         6d
kube-system   tinseled-warthog.v1                  1         1m
# delete the release; the configmap still exists
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm delete tinseled-warthog
release "tinseled-warthog" deleted
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE     NAME                                 DATA      AGE
kube-public   cluster-info                         2         6d
kube-system   calico-config                        3         6d
kube-system   extension-apiserver-authentication   6         6d
kube-system   kube-proxy                           1         6d
kube-system   tinseled-warthog.v1       
# now run helm delete <release> --purge
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm delete tinseled-warthog --purge
release "tinseled-warthog" deleted
# the release data in the configmap has been cleaned up.
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE     NAME                                 DATA      AGE
kube-public   cluster-info                         2         6d
kube-system   calico-config                        3         6d
kube-system   extension-apiserver-authentication   6         6d
kube-system   kube-proxy                           1         6d
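
To list only tiller's release records rather than every configmap, you can filter by label; this assumes the OWNER=TILLER label that tiller's configmap driver attaches to its records:

kubectl --kubeconfig=/tmp/k8sconfig-688597196 -n kube-system get configmap -l OWNER=TILLER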



Original author: 我私人的kernel
Original article: https://www.jianshu.com/p/0ba2ee3ce248