- Mastering PostgreSQL 10
- Hans-Jürgen Schönig
Reducing space consumption
The main purpose of indexing is to speed things up as much as possible. But as with all good things, indexing comes with a price tag: space consumption. To do its magic, an index has to store values in an organized fashion. If your table contains 10 million integer values, the index belonging to that table will logically contain these 10 million integer values, plus additional overhead.
A B-tree will contain a pointer to each row in the table, so it is certainly not free of charge. To figure out how much space an index needs, you can ask psql using the \di+ command:
test=# \di+
List of relations
Schema | Name | Type | Owner | Table | Size
--------+------------+-------+-------+----------+-------
public | idx_cos | index | hs | t_random | 86 MB
public | idx_id | index | hs | t_test | 86 MB
public | idx_name | index | hs | t_test | 86 MB
public | idx_random | index | hs | t_random | 86 MB
(4 rows)
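The same numbers can also be retrieved in plain SQL. The following is a sketch using PostgreSQL's standard size functions (table and index names are taken from the example above):

test=# SELECT pg_size_pretty(pg_indexes_size('t_test'));
-- total size of all indexes on t_test
test=# SELECT pg_size_pretty(pg_relation_size('idx_id'));
-- size of one individual index

pg_indexes_size sums up every index attached to the table, which is handy when a table carries more indexes than you want to list by hand.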
In my database, the staggering amount of 344 MB has been burned to store these indexes. Now, compare this to the amount of storage burned by the underlying tables:
test=# \d+
List of relations
Schema | Name | Type | Owner | Size
--------+---------------+----------+-------+------------
public | t_random | table | hs | 169 MB
public | t_test | table | hs | 169 MB
public | t_test_id_seq | sequence | hs | 8192 bytes
(3 rows)
The size of both tables combined is just 338 MB. In other words, our indexes need more space than the actual data. In the real world, this is common and actually pretty likely. Recently, I visited a Cybertec customer in Germany and saw a database in which 64% of the total database size was made up of indexes that were never used (not a single time over a period of months). So, over-indexing can be just as much of an issue as under-indexing. Remember, these indexes don't just consume space: every INSERT or UPDATE has to maintain the values in the indexes as well, which in extreme cases, like our example, vastly decreases write throughput.
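To spot such never-used indexes in your own database, a query against the pg_stat_user_indexes statistics view along the following lines can help. This is only a sketch: the view and its columns are standard, but in practice you will want to exclude unique and primary-key indexes, which enforce constraints even if they are never scanned:

test=# SELECT schemaname, relname, indexrelname,
              pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
       FROM   pg_stat_user_indexes
       WHERE  idx_scan = 0
       ORDER  BY pg_relation_size(indexrelid) DESC;

Keep in mind that idx_scan counts scans since the statistics were last reset, so judge the results against how long the server has been collecting statistics.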
If just a handful of values account for most of the rows in the table, a partial index can be the solution:
test=# DROP INDEX idx_name;
DROP INDEX
test=# CREATE INDEX idx_name ON t_test (name)
WHERE name NOT IN ('hans', 'paul');
CREATE INDEX
In this case, the majority of rows have been excluded from the index, leaving a small, efficient index:
test=# \di+ idx_name
List of relations
Schema | Name | Type | Owner | Table | Size
--------+----------+-------+-------+--------+-----------
public | idx_name | index | hs | t_test | 8192 bytes
(1 row)
Note that it only makes sense to exclude very frequent values that make up a large part of the table (at least 25% or so). Ideal candidates for partial indexes are gender (we assume that most people are male or female), nationality (assuming that most people in your country have the same nationality), and so on. Of course, applying this kind of trickery requires some deep knowledge of your data, but it certainly pays off.
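Keep in mind that the planner can only use a partial index when the query's WHERE clause logically implies the index predicate. A sketch using the idx_name index created above (the value 'joe' is just an illustrative example):

test=# EXPLAIN SELECT * FROM t_test WHERE name = 'joe';
-- idx_name is usable: 'joe' satisfies name NOT IN ('hans', 'paul')
test=# EXPLAIN SELECT * FROM t_test WHERE name = 'hans';
-- idx_name is not usable: 'hans' is excluded by the index predicate

Queries for the excluded frequent values fall back to a sequential scan, which is usually fine, because fetching a large fraction of the table through an index would not be faster anyway.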