Kubernetes Application Package Manager (Helm)

I. Helm Basics Overview

1. Why use Helm?

As seen in earlier application deployments, deploying containerized applications on Kubernetes requires manually writing resource manifest files to define the resource objects, and each such definition is essentially hard-coded, which makes reuse nearly impossible. At a larger scale, configuring, distributing, version-controlling, finding, rolling back, and even just inspecting applications becomes a nightmare for users. Helm greatly reduces the difficulty of application management.


2. What is Helm?

In short, Helm is the package manager for Kubernetes applications, analogous to yum or apt-get on Linux. It helps users find, share, and use Kubernetes applications, and the current version is maintained by the CNCF (Microsoft, Google, Bitnami, and the Helm community). Its core packaging unit is called a chart, which helps users create, install, and upgrade even complex applications.

Helm packages Kubernetes resources (Deployment, Service, ConfigMap, and so on) into a chart; charts that have been built and tested are stored in and distributed from chart repositories. Helm also implements configurable releases: it supports versioning of application configuration and simplifies version control, packaging, releasing, deleting, and upgrading applications deployed on Kubernetes. The Helm architecture components are shown in the figure below:
(figure: Helm architecture components)

3. Advantages of Helm

  • Manage complex applications: a chart can describe even the most complex application structure and provides a repeatable, reusable definition of how the application is installed.
  • Easy upgrades: in-place upgrades and custom hooks take the pain out of updates.
  • Simple sharing: charts are easy to version, share, and host on public or private servers.
  • Rollback: the "helm rollback" command makes fast rollbacks easy.

4. Core Helm Terminology

Helm is built around the following key concepts:

  • Chart: a Helm package. It contains the image references, dependencies, and resource definitions needed to run a Kubernetes application, including Service definitions where necessary; it is analogous to a deb package for APT or an rpm package for yum.
  • Repository: a chart repository, used to store and distribute charts centrally, similar to Perl's CPAN or Python's PyPI.
  • Config: the configuration information used when an application is instantiated and installed.
  • Release: an instance of a chart running in a Kubernetes cluster after being configured; on the same cluster, one chart can be installed multiple times with different Configs, and each installation creates a new Release (see the sketch below).
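
For example, here is a hedged sketch (the chart, password values, and release names are illustrative) of how one chart plus different Configs produces multiple Releases on the same cluster:

# Installing the same chart twice with different configuration values
# creates two independent releases, each tracked separately by Helm.
helm install stable/mysql --set mysqlRootPassword=pass1 -n mysql-dev
helm install stable/mysql --set mysqlRootPassword=pass2 -n mysql-prod

# Both releases, created from the same chart, show up here.
helm list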

5. Helm Architecture

Helm consists of the Helm client, the Tiller server, and chart repositories (Repository). Communication between the Helm components is shown in the figure below:
(figure: communication between Helm components)
Helm client: the Helm client is a command-line tool written in Go that talks to the Tiller server over gRPC. Its main tasks are:

  • Developing charts locally.
  • Managing chart repositories.
  • Interacting with the Tiller server (sending charts to be installed, querying information about releases, and upgrading or uninstalling existing releases).

Tiller server: the Tiller server is a containerized service running inside the Kubernetes cluster. It receives requests from the Helm client and interacts with the Kubernetes API server when necessary. Its main tasks are:

  • Listening for requests from the Helm client.
  • Combining a chart and its configuration to build a release.
  • Installing charts into Kubernetes and tracking the corresponding releases.
  • Upgrading and uninstalling charts.

Chart repository: only when a chart needs to be distributed should it be packaged into an archive and pushed to a chart repository. A repository can be a public hosting platform or a self-hosted server used only by a particular organization or individual.

II. Deploying Helm

1. Installing the Helm Client

The Helm client can be installed in two ways: from a pre-built binary or by compiling from source. This article uses the pre-built binary.
1) Download and install the binary package:
Binary releases can be downloaded from https://github.com/helm/helm/releases ; different versions are available, for example v2.14.3:

[root@master helm]# wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
[root@master helm]# tar zxf helm-v2.14.3-linux-amd64.tar.gz 
[root@master helm]# ls linux-amd64/
helm  LICENSE  README.md  tiller
#Copy or move the helm binary into a directory on the system PATH
[root@master helm]# cp linux-amd64/helm  /usr/local/bin/
#Check the helm version
[root@master helm]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
//helm version reports the client version v2.14.3 and complains that the server-side tiller is not installed yet.

2) Command completion
Helm has many subcommands and flags. To work more efficiently on the command line, it is usually recommended to install helm's bash completion script:

[root@master helm]# echo "source <(helm completion bash)" >> /root/.bashrc 
[root@master helm]# source /root/.bashrc 
#Helm subcommands and flags can now be completed with the Tab key:
[root@master helm]# helm 
completion  dependency  history     inspect     list        repo        search      template    verify
create      fetch       home        install     package     reset       serve       test        version
delete      get         init        lint        plugin      rollback    status      upgrade   
[root@master helm]# helm  install --
--atomic                      --name=                       --timeout=
--ca-file=                    --namespace=                  --tls
--cert-file=                  --name-template=              --tls-ca-cert=
--debug                       --no-crd-hook                 --tls-cert=
--dep-up                      --no-hooks                    --tls-hostname=
--description=                --password=                   --tls-key=
--devel                       --render-subchart-notes       --tls-verify
--dry-run                     --replace                     --username=
--home=                       --repo=                       --values=
--host=                       --set=                        --verify
--key-file=                   --set-file=                   --version=
--keyring=                    --set-string=                 --wait
--kubeconfig=                 --tiller-connection-timeout=  
--kube-context=               --tiller-namespace=        

2. Installing the Tiller Server

Tiller is the server side of Helm and normally runs inside the Kubernetes cluster. If RBAC authorization is enabled on the cluster, a suitable ServiceAccount must be created before installing it.
1) Create a service account bound to the cluster-admin role

[root@master helm]# vim tiller-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
[root@master helm]# kubectl apply -f  tiller-rbac.yaml 
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@master helm]# kubectl get serviceaccounts -n kube-system | grep tiller
tiller                               1         78s

2) Initialize the Tiller server environment (install the Tiller server)
[root@master helm]# helm init --service-account=tiller #service-account points to the service account just created
(helm init output omitted)
#Check whether the Tiller server is running:

[root@master helm]# kubectl get pod -n kube-system | grep tiller
tiller-deploy-8557598fbc-hwzdv   0/1     ErrImagePull   0          2m53s
[root@master helm]# kubectl describe pod -n kube-system tiller-deploy-8557598fbc-hwzdv 

(kubectl describe output omitted)

#The detailed output shows that the image pull failed, because the image is hosted on Google's registry; we can pull it from the Aliyun mirror instead. The events above also show that the Tiller pod is scheduled on node01, so the image only needs to be pulled on node01:

[root@node01 ~]# docker pull registry.aliyuncs.com/google_containers/tiller:v2.14.3
[root@node01 ~]# docker tag registry.aliyuncs.com/google_containers/tiller:v2.14.3 gcr.io/kubernetes-helm/tiller:v2.14.3  #retag it with the original image name
[root@node01 ~]# docker rmi -f registry.aliyuncs.com/google_containers/tiller:v2.14.3 
[root@node01 ~]# docker images | grep tiller
gcr.io/kubernetes-helm/tiller   v2.14.3             2d0a693df3ba        6 months ago        94.2MB

#Once the image is available, the tiller server pod runs normally:

[root@master helm]# kubectl get pod -n kube-system | grep tiller
tiller-deploy-8557598fbc-hwzdv   1/1     Running   0          17m

#Now helm version also reports the tiller server version:

[root@master helm]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

III. Using Helm

1. Basic Helm Operations

#After helm is installed, helm repo list shows the configured repositories:

[root@master helm]# helm repo list
NAME    URL                                             
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts 
//Two repositories are configured by default: stable and local. stable is the official repository; local is a local repository for charts you develop yourself.
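
As a hedged aside, the local repository is served over HTTP by the helm serve command; a minimal sketch (the address is simply the default, matching the URL listed above):

# Serve the charts in the local repository on 127.0.0.1:8879.
helm serve --address 127.0.0.1:8879 &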

#Because the default official repository is hosted abroad, we point stable at a domestic mirror for convenience:

[root@master helm]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"stable" has been added to your repositories
//Listing the repositories again shows that the original source has been replaced:
[root@master helm]# helm  repo list 
NAME    URL                                                   
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local   http://127.0.0.1:8879/charts  
#After the change, run repo update to refresh the repositories:
[root@master helm]# helm  repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.

#helm search lists the charts that are currently installable, and can also show a particular chart's version information (the version shown is that of the Helm chart package):

[root@master helm]# helm search MySQL
NAME                            CHART VERSION   APP VERSION DESCRIPTION                                                 
stable/mysql                    0.3.5                       Fast, reliable, scalable, and easy to use open-source rel...
stable/percona                  0.3.0                       free, fully compatible, enhanced, open source drop-in rep...
stable/percona-xtradb-cluster   0.0.2           5.7.19      free, fully compatible, enhanced, open source drop-in rep...
stable/gcloud-sqlproxy          0.2.3                       Google Cloud SQL Proxy                                      
stable/mariadb                  2.1.6           10.1.31     Fast, reliable, scalable, and easy to use open-source rel...

#For example, install the mysql chart with the following command:

[root@master helm]# helm install stable/mysql
#During installation, output like the following is printed:
NAME:   mean-spaniel
LAST DEPLOYED: Sat Feb 15 14:43:39 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mean-spaniel-mysql  Pending  0s

==> v1/Pod(related)
NAME                                 READY  STATUS   RESTARTS  AGE
mean-spaniel-mysql-5868455f75-n8lb6  0/1    Pending  0         0s

==> v1/Secret
NAME                TYPE    DATA  AGE
mean-spaniel-mysql  Opaque  2     0s

==> v1/Service
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
mean-spaniel-mysql  ClusterIP  10.102.92.19  <none>       3306/TCP  0s

==> v1beta1/Deployment
NAME                READY  UP-TO-DATE  AVAILABLE  AGE
mean-spaniel-mysql  0/1    1           0          0s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mean-spaniel-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mean-spaniel-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h mean-spaniel-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following commands to route the connection:
    export POD_NAME=$(kubectl get pods --namespace default -l "app=mean-spaniel-mysql" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 3306:3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

The output has three parts:
(1) A description of this chart deployment:
NAME is the name of the release. Because we did not specify one with the -n flag, Helm generated a random one, here mean-spaniel.
NAMESPACE is the namespace the release is deployed into; the default is default, and it can be set with --namespace.
STATUS is DEPLOYED, meaning the chart has been deployed to the cluster.

(2) The resources contained in this release (RESOURCES):
a Service, a Deployment, a Secret, and a PersistentVolumeClaim, all named
mean-spaniel-mysql, following the naming pattern "ReleaseName-ChartName".

(3) The NOTES section explains how to use the release: how to access the Service, how to retrieve the database password, how to connect to the database, and so on.
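
For example, a hedged sketch of setting both values explicitly (the release name and namespace here are illustrative):

# -n sets the release name, --namespace sets the target namespace.
helm install stable/mysql -n my-mysql --namespace test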

#List the deployed releases with the following command:

[root@master helm]# helm list 
NAME            REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
mean-spaniel    1           Sat Feb 15 14:43:39 2020    DEPLOYED    mysql-0.3.5             default  

#Check the status of the release with the following command:

[root@master helm]# helm status mean-spaniel
Partial output:
LAST DEPLOYED: Sat Feb 15 14:43:39 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mean-spaniel-mysql  Pending  26m

==> v1/Pod(related)
NAME                                 READY  STATUS   RESTARTS  AGE
mean-spaniel-mysql-5868455f75-n8lb6  0/1    Pending  0         26m

==> v1/Secret
NAME                TYPE    DATA  AGE
mean-spaniel-mysql  Opaque  2     26m

==> v1/Service
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
mean-spaniel-mysql  ClusterIP  10.102.92.19  <none>       3306/TCP  26m

==> v1beta1/Deployment
NAME                READY  UP-TO-DATE  AVAILABLE  AGE
mean-spaniel-mysql  0/1    1           0          26m

#In production we can also use kubectl get and kubectl describe to inspect each object of the instance for quick troubleshooting. For example, look at the current pod:

[root@master helm]# kubectl get pod mean-spaniel-mysql-5868455f75-n8lb6 
NAME                                  READY   STATUS    RESTARTS   AGE
mean-spaniel-mysql-5868455f75-n8lb6   0/1     Pending   0          31m
[root@master helm]# kubectl describe pod mean-spaniel-mysql-5868455f75-n8lb6 

(kubectl describe output omitted)
The pod's events show that the instance is not usable yet because we have not prepared a PV.

#To remove a deployed release, run helm delete (note: add --purge to also remove the cached release history, so that it is deleted completely):

[root@master helm]# helm delete mean-spaniel --purge
release "mean-spaniel" deleted

2. Chart Directory Structure

As mentioned, a chart is the packaging format Helm uses for Kubernetes applications: a chart is a collection of files that describe a group of Kubernetes resources.

A single chart can deploy something simple, such as a memcached service, or something complex, such as an application with HTTP servers, databases, message middleware, caches, and so on.

A chart lays these files out in a predefined directory structure. The whole chart is usually packaged as a tar archive and tagged with a version so that Helm can deploy it. Below we look at the chart directory structure and the files it contains in detail.
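
If you only want to inspect a chart's files without installing it, a hedged alternative is to download and unpack it with helm fetch:

# Download the chart archive from the repository and unpack it locally.
helm fetch stable/mysql --untar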

#Take the mysql chart installed earlier as an example: once a chart has been installed, its tar archive can be found in
~/.helm/cache/archive.

[root@master helm]# ls ~/.helm/cache/archive/
mysql-0.3.5.tgz

#After extraction, the mysql chart directory structure looks like this:

[root@master helm]# tree -C mysql/
mysql/
├── Chart.yaml
├── README.md
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── NOTES.txt
│   ├── pvc.yaml
│   ├── secrets.yaml
│   └── svc.yaml
└── values.yaml

1 directory, 10 files

It contains the following:
(1) Chart.yaml: a YAML file describing the chart's summary information.

description: Fast, reliable, scalable, and easy to use open-source relational database
  system.
engine: gotpl
home: https://www.mysql.com/
icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png
keywords:
- mysql
- database
- sql
maintainers:
- email: viglesias@google.com
  name: Vic Iglesias
name: mysql
sources:
- https://github.com/kubernetes/charts
- https://github.com/docker-library/mysql
version: 0.3.5

Here, name and version are required; everything else is optional.

(2) README.md: a README in Markdown format, i.e. the chart's documentation. This file is optional.

(3) values.yaml: a chart can be customized with parameters at install time, and values.yaml provides the default values for those parameters.

(4) templates directory: the configuration templates for the various Kubernetes resources live here. Helm injects the parameter values from values.yaml into these templates to generate standard YAML manifests.
Templates are the most important part of a chart and the most powerful feature of Helm. They make deployments flexible enough to adapt to different environments.
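
As a minimal, hedged illustration of the idea (simplified; the real template in the mysql chart may differ), a value defined in values.yaml is referenced from a template and rendered into the final manifest:

# values.yaml (excerpt) -- default value, can be overridden with --set imageTag=...
imageTag: "5.7.14"

# templates/deployment.yaml (excerpt) -- Helm substitutes the value here
spec:
  containers:
    - name: mysql
      image: "mysql:{{ .Values.imageTag }}"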

IV. Helm in Practice

1. Deploying MySQL with Helm

Before installing, run helm inspect values to see how the mysql chart is used:

[root@master ~]# helm inspect values stable/mysql

What is printed is actually the content of values.yaml. The comments explain which parameters the mysql chart supports and what needs to be prepared before installing; part of it concerns storage:

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi

The chart defines a PVC that requests an 8Gi PV. Since this is a test environment, we have to create a matching PV in advance.

1) Create the PV:
//First set up NFS (the master acts as the NFS server):

[root@master helm]# yum -y install nfs-utils
[root@master helm]# vim /etc/exports
/nfsdata/mysql *(rw,sync,no_root_squash)
[root@master helm]# mkdir -p /nfsdata/mysql
[root@master helm]# systemctl start rpcbind
[root@master helm]# systemctl start nfs-server
[root@master helm]# systemctl enable nfs-server
[root@master mysql]# showmount -e
Export list for master:
/nfsdata/mysql *

//Create mysql-pv with the following manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 8Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfsdata/mysql
    server: 172.16.1.30
[root@master ~]# kubectl apply -f  mysql-pv.yaml 
persistentvolume/mysql-pv created
#Make sure the PV is available:
[root@master helm]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mysql-pv   8Gi        RWO            Retain           Available  

2) Install the mysql chart
//Install mysql (set the mysql root password and specify the release name)

#Parameter values can be passed directly with --set:
[root@master helm]# helm install stable/mysql --set mysqlRootPassword=123.com -n test-mysql
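
Equivalently, the same parameter can be supplied from a values file with -f instead of --set (a hedged sketch; the file name is illustrative):

# my-values.yaml contains a single line:
#   mysqlRootPassword: 123.com
helm install stable/mysql -f my-values.yaml -n test-mysql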

//List the installed releases:

[root@master helm]# helm list
NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
test-mysql  1           Sun Feb 16 12:39:57 2020    DEPLOYED    mysql-0.3.5             default  
#Check the status of the release:
[root@master helm]# helm status test-mysql
LAST DEPLOYED: Mon Feb 17 11:51:38 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME              STATUS  VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
test-mysql-mysql  Bound   mysql-pv  8Gi       RWO           23m

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
test-mysql-mysql-dfb9b6944-f6pgs  1/1    Running  0         23m

==> v1/Secret
NAME              TYPE    DATA  AGE
test-mysql-mysql  Opaque  2     23m

==> v1/Service
NAME              TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
test-mysql-mysql  ClusterIP  10.103.220.95  <none>       3306/TCP  23m

==> v1beta1/Deployment
NAME              READY  UP-TO-DATE  AVAILABLE  AGE
test-mysql-mysql  1/1    1           1          23m

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
test-mysql-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default test-mysql-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h test-mysql-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following commands to route the connection:
    export POD_NAME=$(kubectl get pods --namespace default -l "app=test-mysql-mysql" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 3306:3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

The PV status is now Bound and the pod is running normally.

Note: if the pod is not running, check whether the PV bound successfully (it must have been in the Available state); if the PV is fine, the image probably has not finished pulling yet (the mysql image is fairly large, so it takes a while).

3) Test logging in to mysql
#Note: if we do not know the mysql root password, it can be retrieved as follows (in fact, the helm status output already tells us everything we need to know about this mysql release):

[root@master helm]# helm status test-mysql
#The relevant part is the NOTES section:
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
test-mysql-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default test-mysql-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h test-mysql-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following commands to route the connection:
    export POD_NAME=$(kubectl get pods --namespace default -l "app=test-mysql-mysql" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 3306:3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
#Run the command given under "To get your root password run:":
[root@master helm]# kubectl get secret --namespace default test-mysql-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
123.com    #the mysql root password is 123.com
//Now that we have the password, test logging in to the mysql database:
[root@master helm]# kubectl exec -it test-mysql-mysql-dfb9b6944-f6pgs -- mysql -uroot -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 222
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \s   
--------------
mysql  Ver 14.14 Distrib 5.7.14, for Linux (x86_64) using  EditLine wrapper

Connection id:      222
Current database:   
Current user:       root@localhost
SSL:            Not in use
Current pager:      stdout
Using outfile:      ''
Using delimiter:    ;
Server version:     5.7.14 MySQL Community Server (GPL)
Protocol version:   10
Connection:     Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    latin1
Conn.  characterset:    latin1
UNIX socket:        /var/run/mysqld/mysqld.sock
Uptime:         20 min 4 sec

Threads: 1  Questions: 486  Slow queries: 0  Opens: 109  Flush tables: 1  Open tables: 102  Queries per second avg: 0.403
--------------

2. Upgrading and Rolling Back a Release

1) Upgrade:
#Using the mysql deployed above as an example, upgrade its version:

//Check the current mysql version:
[root@master helm]# kubectl get deployments. -o wide test-mysql-mysql 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES         SELECTOR
test-mysql-mysql   1/1     1            1           63m   test-mysql-mysql   mysql:5.7.14   app=test-mysql-mysql
#For example, upgrade the current mysql to version 5.7.15:
[root@master helm]# helm upgrade --set imageTag=5.7.15 test-mysql stable/mysql   #pass the new value with --set, followed by the release name and the chart name
#After a short wait (the new image is pulled and a new pod is created), the upgrade succeeds:
[root@master helm]# kubectl get deployments. test-mysql-mysql -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES         SELECTOR
test-mysql-mysql   1/1     1            1           55m   test-mysql-mysql   mysql:5.7.15   app=test-mysql-mysql
//helm list shows the release's current revision:
[root@master helm]# helm list  #the release is now at revision 2
NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
test-mysql  2           Mon Feb 17 12:38:24 2020    DEPLOYED    mysql-0.3.5             default  
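
To confirm what was overridden by the upgrade, the values supplied for the release can be inspected (a hedged sketch):

# Show the user-supplied values recorded for this release (here: imageTag).
helm get values test-mysql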

2) Rollback:
helm history lists all revisions of a release:

[root@master helm]# helm history test-mysql
REVISION    UPDATED                     STATUS      CHART       DESCRIPTION     
1           Mon Feb 17 11:51:38 2020    SUPERSEDED  mysql-0.3.5 Install complete
2           Mon Feb 17 12:38:24 2020    DEPLOYED    mysql-0.3.5 Upgrade complete

#For example, roll mysql back to revision 1 with helm rollback:

[root@master helm]# helm rollback test-mysql 1
Rollback was a success.

#Check that the rollback succeeded:

[root@master helm]# kubectl get deployments. -o wide test-mysql-mysql 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES         SELECTOR
test-mysql-mysql   1/1     1            1           63m   test-mysql-mysql   mysql:5.7.14   app=test-mysql-mysql
//The image version has rolled back to 5.7.14.

#Listing again shows the release revision is now 3 (i.e. the third revision of this release):

[root@master helm]# helm list 
NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
test-mysql  3           Mon Feb 17 12:54:00 2020    DEPLOYED    mysql-0.3.5             default  

3. Helm + StorageClass

In the mysql deployment above, creating the PV by hand was quite inconvenient. In production there are many applications to deploy, so we can let a StorageClass provision PVs for us. For details on StorageClass, see the post "k8s之StorageClass".

1) Deploy the NFS server:

[root@master ~]# yum -y install nfs-utils
[root@master ~]# vim /etc/exports
/nfsdata/SC *(rw,sync,no_root_squash)
[root@master ~]# mkdir -p /nfsdata/SC
[root@master ~]# systemctl restart rpcbind
[root@master ~]# systemctl restart nfs-server
[root@master ~]# showmount -e 172.16.1.30
Export list for 172.16.1.30:
/nfsdata/SC *

2) Create the RBAC permissions:

[root@master helm]# vim rbac-rolebind.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master helm]# kubectl apply -f  rbac-rolebind.yaml 
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created

3) Create the NFS provisioner Deployment:

[root@master helm]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-deploy    
            - name: NFS_SERVER
              value: 172.16.1.30     
            - name: NFS_PATH
              value: /nfsdata/SC
      volumes:  
        - name: nfs-client-root
          nfs:
            server: 172.16.1.30
            path: /nfsdata/SC
//Load the nfs-client-provisioner image (it must be loaded on every node in the cluster, including the master)
[root@master helm]# docker load --input nfs-client-provisioner.tar 
[root@master helm]# kubectl apply -f  nfs-deployment.yaml 
deployment.extensions/nfs-client-provisioner created
//Make sure the pod is running:
[root@master helm]# kubectl get pod nfs-client-provisioner-958547f7d-95jkg 
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-958547f7d-95jkg   1/1     Running   0          42s

4) Create the StorageClass:

[root@master sc]# vim test-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: statefu-nfs
  namespace: default
provisioner: nfs-deploy  
reclaimPolicy: Retain
[root@master helm]# kubectl apply -f  test-sc.yaml 
storageclass.storage.k8s.io/statefu-nfs created
[root@master helm]# kubectl get sc
NAME          PROVISIONER   AGE
statefu-nfs   nfs-deploy    3m1s
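
Optionally, the dynamic provisioner can be verified before installing the release by creating a small test PVC that references the new StorageClass (a hedged sketch; the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: statefu-nfs    # the StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

After applying it with kubectl apply -f, the PVC should quickly become Bound, which confirms the provisioner is working.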

5) Request a PV for the release
This is done by editing the values.yaml file in the chart directory of the release; the values file can be obtained by extracting the chart archive:

[root@master helm]# tar zxf  ~/.helm/cache/archive/mysql-0.3.5.tgz   #deploying mysql as the example
[root@master helm]# cd mysql/
[root@master mysql]# ls
Chart.yaml  README.md  templates  values.yaml
[root@master mysql]# vim values.yaml 
#The modification targets the persistence section of values.yaml:

(screenshot of the edited values.yaml omitted)
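
Based on the persistence defaults shown earlier and the StorageClass just created, the edit presumably looks like the following sketch (hedged; the 5Gi size is inferred from the capacity visible in the release status below):

persistence:
  enabled: true
  storageClass: "statefu-nfs"   # request volumes from the StorageClass created above
  accessMode: ReadWriteOnce
  size: 5Gi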

6) Install the mysql chart

#Note: this time the chart is installed from a local chart directory (install methods are covered later):
[root@master helm]# helm install mysql/ -n new-mysql   

#Check the release status:

[root@master helm]# helm status new-mysql
Partial output:
LAST DEPLOYED: Mon Feb 17 13:38:09 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME             STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
new-mysql-mysql  Bound   pvc-6a4686cc-fb67-4577-8c6d-848a0ae800b5  5Gi       RWO           statefu-nfs   41s

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
new-mysql-mysql-6cf95546fb-fqg54  1/1    Running  0         41s

==> v1/Secret
NAME             TYPE    DATA  AGE
new-mysql-mysql  Opaque  2     41s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
new-mysql-mysql  ClusterIP  10.108.202.123  <none>       3306/TCP  41s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
new-mysql-mysql  1/1    1           1          41s

The PVC, pod, service, and deployment resources are all running normally, and the PVC obtained its volume through the StorageClass (its status is Bound).

4. Custom Charts

The official repositories provide a large number of charts, but to deploy your own microservice applications you usually still need to develop your own chart. A chart created locally can only be used locally; of course, you can package it with helm package into a tar archive and share it with your team or the community.

Before creating a custom chart, let's look at the ways a chart can be installed. Helm supports four installation methods:

  • Install a chart from a repository, e.g.: helm install stable/nginx

  • Install from a tar archive, e.g.: helm install ./nginx-1.2.3.tgz

  • Install from a local chart directory, e.g.: helm install ./nginx

  • Install from a URL, e.g.: helm install https://example.com/charts/nginx-1.2.3.tgz

1) Create the custom chart

[root@master ~]# helm create mychart
Creating mychart
[root@master ~]# tree mychart/
mychart/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 8 files

Helm creates the directory (mychart) and generates the various chart files, so we can develop our own chart on top of this skeleton.

2) Use the chart we just created to deploy a simple nginx service
After creating the chart, look at the generated default values.yaml:

[root@master ~]# cat mychart/values.yaml 
# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []

  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

The default deployment image is nginx, but its tag (stable) is not a usable version here, so we cannot install the release directly.

#Edit the values file directly (change the tag to a usable version):
[root@master ~]# vim  mychart/values.yaml 

(screenshot of the edited values.yaml omitted)
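
Judging from the --dry-run output later in this article (which shows tag: latest), the change is presumably along these lines (a hedged sketch, not the exact diff):

image:
  repository: nginx
  tag: latest            # changed from "stable" to a locally usable tag
  pullPolicy: IfNotPresent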

#Install the release:
[root@master ~]# helm install mychart/ -n mynginx
#Check the release status:
[root@master ~]# helm status mynginx
LAST DEPLOYED: Mon Feb 17 15:34:10 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
mynginx-mychart  1/1    1           1          10m

==> v1/Pod(related)
NAME                             READY  STATUS   RESTARTS  AGE
mynginx-mychart-bf987cd5d-vp9qp  1/1    Running  0         10m

==> v1/Service
NAME             TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)  AGE
mynginx-mychart  ClusterIP  10.96.34.246  <none>       80/TCP   10m

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=mynginx" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
#Test access to nginx:
[root@master ~]# curl -I 10.96.34.246
HTTP/1.1 200 OK            #nginx responds successfully
Server: nginx/1.17.3
Date: Mon, 17 Feb 2020 07:45:39 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
Connection: keep-alive
ETag: "5d5279b8-264"
Accept-Ranges: bytes

#Above we accessed nginx via its ClusterIP. What if external applications need to reach the service? We can expose the service port with a NodePort instead.

Note: we cannot simply add this to the values file; first we have to add a variable to service.yaml in the chart's templates directory, as follows:

[root@master ~]# vim mychart/templates/service.yaml
(screenshot of the edited service.yaml omitted)
service.yaml is a Go template that renders YAML, so the change must follow the file's existing structure and indentation.
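
Based on the rendered Service in the --dry-run output later (type NodePort, nodePort 32134), the template change is presumably along these lines (a hedged sketch, not the exact diff):

spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      nodePort: {{ .Values.service.nodePort }}   # added: expose a fixed NodePort
      protocol: TCP
      name: http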

#With the nodePort setting added to the service template, next edit the values file:
[root@master ~]# vim mychart/values.yaml
(screenshot of the edited values.yaml omitted)
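
Correspondingly in values.yaml (matching the COMPUTED VALUES shown later in the --dry-run output), presumably:

service:
  type: NodePort         # changed from ClusterIP
  nodePort: 32134        # new key consumed by the template above
  port: 80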

#After the changes, redeploy nginx:

[root@master ~]# helm delete mynginx --purge  #delete the old release
release "mynginx" deleted
[root@master ~]# helm install mychart/ -n mynginx  #install it again
#Check the release status:
[root@master ~]# helm status mynginx
LAST DEPLOYED: Mon Feb 17 16:02:04 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
mynginx-mychart  1/1    1           1          16s

==> v1/Pod(related)
NAME                             READY  STATUS   RESTARTS  AGE
mynginx-mychart-bf987cd5d-xdm2d  1/1    Running  0         16s

==> v1/Service
NAME             TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
mynginx-mychart  NodePort  10.100.31.89  <none>       80:32134/TCP  16s

#Access nginx from outside the cluster through the NodePort:
(screenshot of browser access via the NodePort omitted)

5. Debugging Charts

Any program can have bugs, and charts are no exception. Helm provides debugging tools: helm lint and helm install --dry-run --debug.

1) The helm lint tool:
helm lint checks a chart's syntax, reports errors, and gives suggestions.

#For example, if a colon ":" is missing in values.yaml, helm lint will point out the syntax error:
[root@master ~]# helm lint mychart/
==> Linting mychart/
[INFO] Chart.yaml: icon is recommended
[ERROR] values.yaml: unable to parse YAML
    error converting YAML to JSON: yaml: line 8: could not find expected ':'

Error: 1 chart(s) linted, 1 chart(s) failed

In general, after editing the values file, run helm lint first to check for mistakes.

2) helm install --dry-run --debug:
helm install --dry-run --debug simulates installing the chart and prints the YAML rendered from each template.

[root@master ~]# helm install --dry-run mychart/ --debug 
[debug] Created tunnel using local port: '43350'

[debug] SERVER: "127.0.0.1:43350"

[debug] Original chart version: ""
[debug] CHART PATH: /root/mychart

NAME:   exacerbated-grizzly
REVISION: 1
RELEASED: Mon Feb 17 16:18:48 2020
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: latest
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  nodePort: 32134
  port: 80
  type: NodePort
tolerations: []

HOOKS:
---
# exacerbated-grizzly-mychart-test-connection
apiVersion: v1
kind: Pod
metadata:
  name: "exacerbated-grizzly-mychart-test-connection"
  labels:
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/instance: exacerbated-grizzly
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['exacerbated-grizzly-mychart:80']
  restartPolicy: Never
MANIFEST:

---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: exacerbated-grizzly-mychart
  labels:
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/instance: exacerbated-grizzly
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: http
      nodePort: 32134
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: exacerbated-grizzly
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exacerbated-grizzly-mychart
  labels:
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/instance: exacerbated-grizzly
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mychart
      app.kubernetes.io/instance: exacerbated-grizzly
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mychart
        app.kubernetes.io/instance: exacerbated-grizzly
    spec:
      containers:
        - name: mychart
          image: "nginx:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}

We can review this output and check whether it matches our expectations.

6. Adding a Chart to a Repository

Once a chart passes testing it can be added to a repository so that other team members can use it easily. Any HTTP server can act as a chart repository; below we set one up on the node01 node of the cluster.

1) Run an httpd container on node01 (to provide the web service):

[root@node01 ~]# docker run -d -p 8080:80 -v /var/www/:/usr/local/apache2/htdocs httpd
a2fb5f89dd3fd3f729139e41a105498a60d0bee02c73ad8706636007390eaa55

2) Back on the master, package mychart with helm package:

[root@master ~]# helm package mychart/
Successfully packaged chart and saved it to: /root/mychart-0.1.0.tgz

3) Run helm repo index to generate the repository's index file:

[root@master ~]# mkdir myrepo
[root@master ~]# mv mychart-0.1.0.tgz myrepo/
[root@master ~]# helm repo index myrepo/ --url http://172.16.1.31:8080/charts   #this URL is the chart repository address (node01)
[root@master ~]# ls myrepo/
index.yaml  mychart-0.1.0.tgz

helm scans all the tgz archives in the myrepo directory and generates an index.yaml file. --url specifies the access URL of the new chart repository. The generated index.yaml records information about every chart currently in the repository:

[root@master ~]# cat myrepo/index.yaml 
apiVersion: v1
entries:
  mychart:
  - apiVersion: v1
    appVersion: "1.0"
    created: "2020-02-17T16:34:25.239190623+08:00"
    description: A Helm chart for Kubernetes
    digest: 367436d83e973f89e4bac162837fb4e9579cf3176b2506a7ed6617a182f11031
    name: mychart
    urls:
    - http://172.16.1.31:8080/charts/mychart-0.1.0.tgz
    version: 0.1.0
generated: "2020-02-17T16:34:25.238618624+08:00"
#There is currently only one chart, mychart.

4) Upload mychart-0.1.0.tgz and index.yaml to the /var/www/charts directory on node01.

#Create the directory on node01:
[root@node01 ~]# mkdir /var/www/charts
#Copy the files to node01:
[root@master ~]# scp myrepo/index.yaml  myrepo/mychart-0.1.0.tgz  node01:/var/www/charts
index.yaml                                                                         100%  400     0.4KB/s   00:00    
mychart-0.1.0.tgz                                                                  100% 2842     2.8KB/s   00:00  

5) Add the new repository to Helm with helm repo add:

[root@master ~]# helm repo add myrepo http://172.16.1.31:8080/charts
"myrepo" has been added to your repositories
[root@master ~]# helm repo list 
NAME    URL                                                   
stable  https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local   http://127.0.0.1:8879/charts                          
myrepo  http://172.16.1.31:8080/charts   
The repository is named myrepo, and Helm downloads index.yaml from it.

#Users can now find mychart with helm search:

[root@master ~]# helm search mychart
NAME            CHART VERSION   APP VERSION DESCRIPTION                
local/mychart   0.1.0           1.0         A Helm chart for Kubernetes
myrepo/mychart  0.1.0           1.0         A Helm chart for Kubernetes

Besides the repository we uploaded to, there is also a local/mychart entry. That is because the packaging in step 2 also synced mychart into the local repository.

#Install mychart from the new repository:
[root@master ~]# helm install myrepo/mychart -n new-nginx
#Check the status of the release:
[root@master ~]# helm status  new-nginx   #the pod is running normally
LAST DEPLOYED: Mon Feb 17 16:56:54 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME               READY  UP-TO-DATE  AVAILABLE  AGE
new-nginx-mychart  1/1    1           1          55s

==> v1/Pod(related)
NAME                                READY  STATUS   RESTARTS  AGE
new-nginx-mychart-66d6bbb795-fsgml  1/1    Running  0         55s

==> v1/Service
NAME               TYPE      CLUSTER-IP   EXTERNAL-IP  PORT(S)       AGE
new-nginx-mychart  NodePort  10.106.51.8  <none>       80:32134/TCP  55s

NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services new-nginx-mychart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

If new charts are later added to the repository, run helm repo update to refresh the local index.

[root@master ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "myrepo" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
