Storage sizing
In this recipe, we will be discussing how to estimate disk growth using the pgbench tool.
Getting ready
One of the best practices for predicting the database's disk storage needs is to load a set of sample data into the application's database and simulate production-like activity with pgbench over a long period. At a fixed interval (every hour, say), collect the database size using pg_database_size() or any native command that returns disk usage information. Once we have samples covering at least 24 hours, we can compute an average disk growth rate by averaging the deltas between consecutive samples (which telescopes to the last sample minus the first, divided by the number of intervals).
How to do it...
Prepare the SQL script as follows, which simulates the live application behavior in the database (a concrete sketch follows the outline):
Create connection;       --- Create/use a pooled connection.
INSERT operation         --- Initial write operation.
SELECT pg_sleep(0.01);   --- Some application code runs here, waiting for the next query.
UPDATE operation         --- Update other tables for the newly inserted records.
SELECT pg_sleep(0.1);    --- Update other services that show live graphs of the updated records.
DELETE operation         --- Delete or purge any unnecessary data.
SELECT pg_sleep(0.01);   --- Some application code overhead.
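As a minimal concrete sketch, the outline above could be written against the standard tables that pgbench -i creates (pgbench_accounts, pgbench_history); the table choices, variable ranges, and purge interval here are illustrative assumptions, not part of the original recipe:
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
-- initial write operation
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
SELECT pg_sleep(0.01);
-- update another table for the newly inserted record
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT pg_sleep(0.1);
-- purge data older than one hour
DELETE FROM pgbench_history WHERE mtime < now() - interval '1 hour';
SELECT pg_sleep(0.01);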
Let's run the following pgbench test case with the preceding script file for 24 hours:
$ pgbench -T 86400 -f <script location> -c <number of concurrent connections>
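For example, using the sketch script above (the script name, client count, and database name are illustrative):
$ pgbench -T 86400 -f storage_sizing.sql -c 16 bench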
In parallel, schedule a job that collects the database size every hour using the pg_database_size() function, and schedule another job that runs VACUUM on the database every 10 minutes. The VACUUM job takes care of logically reclaiming dead tuples at the database level. Note that on production servers we would not deploy a VACUUM job every 10 minutes, since the autovacuum process takes care of dead tuples. As this test is not a database performance benchmark, we can instead make autovacuum more aggressive on the test database.
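One simple way to schedule both jobs is cron; the sketch below assumes a database named bench, passwordless local access, and a CSV file at /tmp/db_size.csv (all illustrative choices):
# collect the database size once an hour as "timestamp,bytes"
0 * * * * psql -d bench -At -c "SELECT now() || ',' || pg_database_size('bench')" >> /tmp/db_size.csv
# reclaim dead tuples every 10 minutes during the test window
*/10 * * * * psql -d bench -c "VACUUM"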
How it works...
Once we find the average disk growth per day, we can project the database size one or two years out. However, the write rate itself typically increases as the business grows, so keep the growth-collection script deployed, or analyze disk storage trends from a monitoring tool, to keep refining the storage prediction.
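To turn the hourly samples into a projection, something like the following can be run in psql; it assumes the CSV produced by the cron sketch above, and the table and column names are illustrative:
-- load the hourly samples and average the consecutive deltas
CREATE TABLE size_samples (sampled_at timestamptz, size_bytes bigint);
\copy size_samples FROM '/tmp/db_size.csv' CSV
SELECT avg(delta) AS avg_hourly_growth_bytes,
       pg_size_pretty((avg(delta) * 24 * 365)::bigint) AS projected_yearly_growth
FROM (
    SELECT size_bytes - lag(size_bytes) OVER (ORDER BY sampled_at) AS delta
    FROM size_samples
) AS deltas
WHERE delta IS NOT NULL;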