- Google Cloud Platform for Architects
- Vitthal Srinivasan, Janani Ravi, Judy Raj
Storage and persistent disks
Recall that while working with Compute Engine instances, we had to choose from several storage options: persistent disks (standard or SSD), local SSD disks, and Google Cloud Storage. Storage options on Kubernetes Engine instances are not all that different, but there is one important subtlety, and it has to do with how attached disks behave. When you use a Compute Engine instance with an attached disk, the link between the instance and the disk remains for as long as the instance exists, and the same disk volume stays attached to the same instance until the VM is deleted; the data on the disk persists even if you detach it and use it with a different instance.
When you are using containers, however, on-disk files are ephemeral. If a container restarts, for instance after a crash, whatever data you had in its disk files is lost. The way around this ephemeral storage is to use a persistent abstraction backed by GCE persistent disks. If you are going to use Kubernetes Engine and want your data to remain associated with your containers rather than being ephemeral, you have to make use of this abstraction; otherwise your disk data will not persist after a container restarts.
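To make the idea concrete, the sketch below mounts a pre-created GCE persistent disk directly into a pod through the in-tree gcePersistentDisk volume type. This example is illustrative only and assumes a disk named my-data-disk already exists in the cluster's zone (created beforehand, for example, with gcloud compute disks create); the rest of this section walks through the more common route of provisioning disks dynamically with a StorageClass and a PersistentVolumeClaim.
apiVersion: v1
kind: Pod
metadata:
  name: pd-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data          # files written here live on the persistent disk
      name: data-volume
  volumes:
  - name: data-volume
    gcePersistentDisk:
      pdName: my-data-disk      # hypothetical pre-created disk
      fsType: ext4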
Dynamically provisioned storage uses standard (HDD-backed) persistent disks by default, but we can define our own StorageClass that provisions SSDs instead (a sketch of the stock default class is shown after the manifest below for comparison). Notice that the kind of the following file is StorageClass; GCE persistent disk is the provisioner, and the type is SSD.
- You can save it with the name ssd.yaml or something convenient for you:
nano ssd.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
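- For comparison, the default class that GKE creates (typically named standard on clusters using the gce-pd provisioner) provisions HDD-backed disks. It looks roughly like this; you do not need to create it yourself:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard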
- Once it is saved, you can create a PVC (PersistentVolumeClaim). Let's name the file storage-change.yaml. Notice that it references our previously created storage class in its storageClassName field:
nano storage-change.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-change
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 1Gi
- Apply the storage change by running the following commands. Make sure to run them in the sequence given below, since the storage class itself needs to be created before the PVC that references it:
kubectl apply -f ssd.yaml
kubectl apply -f storage-change.yaml
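- You can verify that the class exists and that the claim binds (the claim shows a status of Bound once a disk has been provisioned):
kubectl get storageclass ssd
kubectl get pvc storage-change
- A PVC does nothing on its own until a pod mounts it. The following minimal pod is a sketch of how the storage-change claim could be consumed; the pod name, image, and mount path are illustrative assumptions rather than part of the original walkthrough:
nano pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data               # data written here lands on the SSD-backed volume
      name: ssd-storage
  volumes:
  - name: ssd-storage
    persistentVolumeClaim:
      claimName: storage-change      # the claim created above
kubectl apply -f pvc-pod.yaml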