Basic GlusterFS volume operations

1. How to find which PVC a Pod uses

Inspect the Pod's YAML, locate the volumes definition, and read the PVC name:

kubectl get pod/elasticsearch-0 -o yaml -n efk

The output looks like this:

...
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch
    - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
      name: efkelasticsearchv300
      readOnly: true
      subPath: elasticsearch.yml
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-8pqd4
      readOnly: true
...
  volumes:
  - name: elasticsearch
    persistentVolumeClaim:
      claimName: elasticsearch-elasticsearch-0
...

As shown, the PVC in use is elasticsearch-elasticsearch-0.
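
Alternatively, a jsonpath query can pull the claim names directly without scanning the full YAML; this is a minimal sketch, assuming the Pod's volumes are PVC-backed (volumes without a persistentVolumeClaim source are simply skipped):

# print the claimName of every PVC-backed volume in the Pod
kubectl get pod/elasticsearch-0 -n efk \
  -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'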

2. How to find the PV behind a PVC

Inspect the PVC elasticsearch-elasticsearch-0:

[root@kube-master-1 ~]$ kubectl get pvc/elasticsearch-elasticsearch-0 -n efk
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                      AGE
elasticsearch-elasticsearch-0   Bound    pvc-32ec397e-722b-11ea-983d-525400c29763   30Gi       RWO            glusterfs-20200316200453-a4cf6f   4h8m

So the PV name is pvc-32ec397e-722b-11ea-983d-525400c29763.
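
The VOLUME column above is the bound PV, which is also stored in the PVC's spec.volumeName, so a jsonpath one-liner works here too (a sketch against the same PVC):

# print only the name of the PV bound to this PVC
kubectl get pvc/elasticsearch-elasticsearch-0 -n efk -o jsonpath='{.spec.volumeName}'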

3. How to find the GlusterFS volume name behind a PV

Inspect the PV's YAML and look at the path field:

[root@kube-master-1 ~]$ kubectl get pv/pvc-32ec397e-722b-11ea-983d-525400c29763 -o yaml | grep path:
    path: vol_00e9a96e51d2b6a911313e5b471360ce

So the GlusterFS volume backing this PV is vol_00e9a96e51d2b6a911313e5b471360ce.
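
For a PV using the glusterfs volume source, that path lives at spec.glusterfs.path, so the grep can be replaced with a jsonpath query (a sketch, assuming the PV really is GlusterFS-backed):

# print the GlusterFS volume name directly
kubectl get pv/pvc-32ec397e-722b-11ea-983d-525400c29763 \
  -o jsonpath='{.spec.glusterfs.path}'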

4. Check the status of a single volume

Use the gluster volume status vol_name command:

[root@kube-master-1 ~]$ gluster volume status vol_00e9a96e51d2b6a911313e5b471360ce
Status of volume: vol_00e9a96e51d2b6a911313e5b471360ce
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.21.232:/gluster/volumes/glust
erfs/brick_25b598b829cff6930692731ec5c77c3a
/brick                                      49155     0          Y       1307556
Brick 192.168.21.233:/gluster/volumes/glust
erfs/brick_bae202c07de80c5ac66ebc42cc5651e7
/brick                                      49155     0          Y       2036501
Brick 192.168.21.231:/gluster/volumes/glust
erfs/brick_7fdfd7c4ba26d1b0cc7544b884b55ca6
/brick                                      49155     0          Y       3580151
Self-heal Daemon on localhost               N/A       N/A        Y       3580243
Quota Daemon on localhost                   N/A       N/A        Y       3580496
Self-heal Daemon on 192.168.21.233          N/A       N/A        Y       2036582
Quota Daemon on 192.168.21.233              N/A       N/A        Y       2036672
Self-heal Daemon on 192.168.21.232          N/A       N/A        Y       1307576
Quota Daemon on 192.168.21.232              N/A       N/A        Y       1307705
 
Task Status of Volume vol_00e9a96e51d2b6a911313e5b471360ce
------------------------------------------------------------------------------
There are no active volume tasks

Normally the Online column should be Y for every brick; any N indicates a problem.
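
A useful companion is gluster volume info, which shows the volume's configuration (type, brick layout, started/stopped state) rather than runtime process status. A minimal example against the same volume:

# configuration view: volume type, brick count, and whether it is Started
gluster volume info vol_00e9a96e51d2b6a911313e5b471360ce | grep -E '^(Type|Status|Number of Bricks)'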

5. Check Gluster peer status

Run gluster peer status to see the state of every node except the local one, for example:

[root@kube-master-1 ~]$ gluster peer status
Number of Peers: 2

Hostname: 192.168.21.232
Uuid: d689f3aa-ac14-45b6-b111-b5668abd4c75
State: Peer in Cluster (Connected)

Hostname: 192.168.21.233
Uuid: fa88a450-da1d-431c-a303-91223f469dae
State: Peer in Cluster (Connected)

A healthy peer should show State: Peer in Cluster (Connected).
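
gluster pool list is a related command that also includes the local node, and for a quick scripted health check the healthy peers can simply be counted (a rough sketch; remember the count excludes the local node):

# list all nodes in the trusted storage pool, including localhost
gluster pool list
# count remote peers reporting the healthy state
gluster peer status | grep -c 'Peer in Cluster (Connected)'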

6. List all volumes

Run gluster volume list to see every volume in the current cluster:

[root@kube-master-1 ~]$ gluster volume list
compass-stack-heketi-back
compass-stack-heketi-main
vol_00e9a96e51d2b6a911313e5b471360ce
vol_364ca8493826e5ca072b449185be28de
vol_8c978e63897ef085f0b3135dbed32857
vol_965fbd0bce77dc5a8b8b9afb9a8c19ca
vol_a79edbce22e8d8842f3a92056da04c2c
vol_c8b4042091369542838b2ee819d530ba
vol_e9d5fea6f8d7b52ab196c239ca8225b1
vol_f2b4b3c08a99d06bead3a8368482c320
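
Volumes provisioned through heketi (as the heketi volumes and the StorageClass in section 2 suggest for this cluster) follow the vol_<id> naming convention, so they can be counted separately; this sketch relies purely on that naming convention:

# count dynamically provisioned (heketi-named) volumes
gluster volume list | grep -c '^vol_'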

7. Check the status of all volumes

Running gluster volume status without a volume name reports the status of every volume:

[root@kube-master-1 ~]$ gluster volume status
Status of volume: compass-stack-heketi-back
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.21.231:/gluster/heketi/compas
s-stack-heketi-back                         49152     0          Y       2353 
Brick 192.168.21.232:/gluster/heketi/compas
s-stack-heketi-back                         49152     0          Y       1352 
Brick 192.168.21.233:/gluster/heketi/compas
s-stack-heketi-back                         49152     0          Y       1349 
Self-heal Daemon on localhost               N/A       N/A        Y       3580243
Self-heal Daemon on 192.168.21.232          N/A       N/A        Y       1307576
Self-heal Daemon on 192.168.21.233          N/A       N/A        Y       2036582
 
Task Status of Volume compass-stack-heketi-back
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: compass-stack-heketi-main
Gluster process                             TCP Port  RDMA Port  Online  Pid
......
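
Scanning this long output by eye is error-prone. One rough way to surface problems is to filter rows whose Online column (the second-to-last field) is N; this sketch assumes the default column layout shown above:

# print only rows whose Online column is N (NF guard skips blank lines)
gluster volume status | awk 'NF && $(NF-1) == "N"'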

8. Start a volume

Use gluster volume start vol_name to start a stopped volume. If the volume is in an abnormal state, append force (gluster volume start vol_name force) to force-start it, for example:

# gluster volume start vol_6d9e24328c19610bb652537411089be4 force
volume start: vol_6d9e24328c19610bb652537411089be4: success
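
After a (force) start it is worth confirming that the volume actually reports Started; a minimal check using the same volume name as above:

# should print "Status: Started"
gluster volume info vol_6d9e24328c19610bb652537411089be4 | grep '^Status'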

9. Stop a volume

Use gluster volume stop vol_name to stop a running volume. If the volume is in an abnormal state, append force (gluster volume stop vol_name force) to force-stop it, for example:

# gluster volume stop vol_6d9e24328c19610bb652537411089be4 force
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol_6d9e24328c19610bb652537411089be4: success
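
The y/n confirmation above gets in the way of automation. The gluster CLI's script mode answers prompts automatically; use it with care, since it skips the safety check:

# non-interactive stop, suitable for scripts
gluster --mode=script volume stop vol_6d9e24328c19610bb652537411089be4 force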