- PostgreSQL Replication (Second Edition)
- Hans-Jürgen Schönig
How PostgreSQL writes data
PostgreSQL replication is all about writing data. Therefore, the way PostgreSQL writes a chunk of data internally is highly relevant and directly connected to replication and replication concepts. In this section, we will dig into writes.
The PostgreSQL disk layout
One of the first things we want to take a look at in this chapter is the PostgreSQL disk layout. Knowing about the disk layout can be very helpful when inspecting an existing setup, and it can be helpful when designing an efficient, high-performance installation.
In contrast to other database systems, such as Oracle, PostgreSQL will always rely on a filesystem to store data. PostgreSQL does not use raw devices. The idea behind this is that if a filesystem developer has done their job well, there is no need to reimplement the filesystem functionality over and over again.
Looking into the data directory
To understand the filesystem layout used by PostgreSQL, we can take a look at what we can find inside the data directory (created by initdb at the time of installation):
hs@chantal:/data/db94$ ls -l
total 60
drwx------ 10 hs hs 102 Feb 4 11:40 base
drwx------ 2 hs hs 4096 Feb 11 14:29 global
drwx------ 2 hs hs 17 Dec 9 10:58 pg_clog
drwx------ 2 hs hs 6 Dec 9 10:58 pg_dynshmem
-rw------- 1 hs hs 4450 Dec 9 10:58 pg_hba.conf
-rw------- 1 hs hs 1636 Dec 9 10:58 pg_ident.conf
drwx------ 4 hs hs 37 Dec 9 10:58 pg_logical
drwx------ 4 hs hs 34 Dec 9 10:58 pg_multixact
drwx------ 2 hs hs 17 Feb 11 14:28 pg_notify
drwx------ 3 hs hs 22 Feb 11 15:15 pg_replslot
drwx------ 2 hs hs 6 Dec 9 10:58 pg_serial
drwx------ 2 hs hs 6 Dec 9 10:58 pg_snapshots
drwx------ 2 hs hs 6 Feb 11 14:28 pg_stat
drwx------ 2 hs hs 108 Feb 12 13:12 pg_stat_tmp
drwx------ 2 hs hs 17 Dec 9 10:58 pg_subtrans
drwx------ 2 hs hs 6 Dec 9 10:58 pg_tblspc
drwx------ 2 hs hs 6 Dec 9 10:58 pg_twophase
-rw------- 1 hs hs 4 Dec 9 10:58 PG_VERSION
drwx------ 3 hs hs 4096 Feb 3 11:37 pg_xlog
-rw------- 1 hs hs 88 Dec 9 10:58 postgresql.auto.conf
-rw------- 1 hs hs 21264 Feb 11 14:28 postgresql.conf
-rw------- 1 hs hs 47 Feb 11 14:28 postmaster.opts
-rw------- 1 hs hs 68 Feb 11 14:28 postmaster.pid
You will see a range of files and directories, which are needed to run a database instance. Let's take a look at them in detail.
PG_VERSION – the PostgreSQL version number
The PG_VERSION file will tell the system at startup whether the data directory contains the correct version number. Note that only the major release version is in this file. It is easily possible to replicate between different minor versions of the same major version (for example, inside the 9.3 or 9.4 series):
[hs@paula pgdata]$ cat PG_VERSION
9.2
The file is plain, human-readable text.
base – the actual data directory
The base directory is one of the most important things in our data directory. It actually contains the real data (that is, tables, indexes, and so on). Inside the base directory, each database will have its own subdirectory:
[hs@paula base]$ ls -l
total 24
drwx------ 2 hs hs 8192 Dec 9 10:58 1
drwx------ 2 hs hs 8192 Dec 9 10:58 12180
drwx------ 2 hs hs 8192 Feb 11 14:29 12185
drwx------ 2 hs hs 4096 Feb 11 18:14 16384
We can easily link these directories to the databases in our system. It is worth noting that PostgreSQL uses the object ID of the database here. This has many advantages over using the name, because the object ID never changes and offers a good way to abstract away all sorts of problems, such as issues with different character sets on the server:
test=# SELECT oid, datname FROM pg_database;
  oid  |  datname
-------+-----------
     1 | template1
 12180 | template0
 12185 | postgres
 16384 | test
(4 rows)
Now we can see how data is stored inside those database-specific directories. In PostgreSQL, each table is related to (at least) one data file. Let's create a table and see what happens:
test=# CREATE TABLE t_test (id int4);
CREATE TABLE
We can check the system table now to retrieve the so-called relfilenode variable, which represents the name of the storage file on the disk:
test=# SELECT relfilenode, relname
FROM pg_class
WHERE relname = 't_test';
relfilenode | relname
-------------+---------
16385 | t_test
(1 row)
Note that relfilenode can change if TRUNCATE or similar commands occur on a certain table.
As soon as the table is created, PostgreSQL will create an empty file on the disk:
[hs@paula base]$ ls -l 16384/16385*
-rw------- 1 hs staff 0 Feb 12 12:06 16384/16385
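To see this relfilenode change for yourself, you can truncate the table and check the catalog again. The new value shown here (16390) is merely illustrative; your system will assign its own:
test=# TRUNCATE t_test;
TRUNCATE TABLE
test=# SELECT relfilenode, relname
FROM pg_class
WHERE relname = 't_test';
 relfilenode | relname
-------------+---------
       16390 | t_test
(1 row)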
Growing data files
Tables can sometimes be quite large, and therefore it is not wise to put all of the data related to a table into a single data file. To solve this problem, PostgreSQL will add more files every time 1 GB of data has been added.
So, if the file called 16385 grows beyond 1 GB, there will be a file called 16385.1; once this has been filled, you will see a file named 16385.2; and so on. In this way, a table in PostgreSQL can be scaled up reliably and safely without us having to worry too much about the underlying filesystem limitations of some rare operating systems or embedded devices (most modern filesystems handle large files efficiently; however, not all filesystems have been created equal).
Performing I/O in chunks
To improve I/O performance, PostgreSQL will usually perform I/O in 8 K chunks. Thus, you will see that your data files will always grow in steps of 8 K each. When considering physical replication, you have to make sure that both sides (master and slave) are compiled with the same block size.
Tip
Unless you have explicitly compiled PostgreSQL on your own using different block sizes, you can always rely on the fact that block sizes will be identical and exactly 8 K.
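If you are unsure which block size a given installation uses, you can simply ask the server:
test=# SHOW block_size;
 block_size
------------
 8192
(1 row)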
Relation forks
Other than the data files discussed in the previous paragraph, PostgreSQL will create additional files using the same relfilenode number. As of now, those files are used to store information about free space inside a table (the so-called Free Space Map), the Visibility Map, and so on. In the future, more types of relation forks might be added.
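On disk, these forks are easy to spot. Once a table has seen some activity and a VACUUM, you will typically find files like the following next to the main data file (the sizes shown here are purely illustrative):
[hs@paula base]$ ls -l 16384/16385*
-rw------- 1 hs staff  8192 Feb 12 12:10 16384/16385
-rw------- 1 hs staff 24576 Feb 12 12:10 16384/16385_fsm
-rw------- 1 hs staff  8192 Feb 12 12:10 16384/16385_vm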
global – the global data
The global directory contains the global system tables. This directory is small, so you should not expect excessive storage consumption.
Dealing with standalone data files
There is one thing that is often forgotten by users: a single PostgreSQL data file is more or less worthless on its own. It is hardly possible to restore data reliably if you just have a data file; trying to extract data from single data files can easily end up as hopeless guesswork. Without the transaction log infrastructure, data files are usually meaningless. So, in order to read data, you need an instance that is more or less complete.
pg_clog – the commit log
The commit log is an essential component of a working database instance. It stores the status of the transactions on this system. A transaction can be in four states: TRANSACTION_STATUS_IN_PROGRESS, TRANSACTION_STATUS_COMMITTED, TRANSACTION_STATUS_ABORTED, and TRANSACTION_STATUS_SUB_COMMITTED. If the commit log status for a transaction is not available, PostgreSQL will have no idea whether a row should be seen or not. The same applies to the end user, of course.
If the commit log of a database instance is broken for some reason (maybe because of filesystem corruption), you can expect some funny hours ahead.
Tip
If the commit log is broken, we recommend that you snapshot the database instance (filesystem) and fake the commit log. This can sometimes help retrieve a reasonable amount of data from the database instance in question. Faking the commit log won't fix your data—it might just bring you closer to the truth. This faking can be done by generating a file as required by the clog infrastructure (see the documentation).
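As a rough sketch of what such faking can look like (a last-resort measure, to be attempted only on a filesystem-level copy of the instance): commit log segments are 256 KB files, so a missing segment can be replaced with a zeroed-out file of the proper size, which marks the affected transactions as in progress:
[hs@paula pg_clog]$ dd if=/dev/zero of=0000 bs=256k count=1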
pg_dynshmem – shared memory
This "shared memory" is somewhat of a misnomer, because what it is really doing is creating a bunch of files and mapping them to the PostgreSQL address space. So it is not system-wide shared memory in a classical sense. The operating system may feel obliged to synchronize the contents to the disk, even if nothing is being paged out, which will not serve us well. The user can relocate the pg_dynshmem
directory to a RAM disk, if available, to avoid this problem.
pg_hba.conf – host-based network configuration
The pg_hba.conf file configures PostgreSQL's internal firewall and represents one of the two most important configuration files in a PostgreSQL cluster. It allows users to define various types of authentication based on the source of a request. To a database administrator, understanding the pg_hba.conf file is of vital importance, because this file decides whether a slave is allowed to connect to the master or not. If you happen to miss something here, you might see error messages in the slave's logs (for instance, no pg_hba.conf entry for ...).
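For instance, a minimal entry that allows a slave to establish a streaming replication connection could look like the following (the address and user name are, of course, just placeholders):
# TYPE  DATABASE     USER     ADDRESS       METHOD
host    replication  repuser  10.0.0.2/32   md5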
pg_ident.conf – ident authentication
The pg_ident.conf file can be used in conjunction with the pg_hba.conf file to configure ident authentication.
pg_logical – logical decoding
In the pg_logical directory, information for logical decoding is stored (snapshots and the like).
pg_multixact – multitransaction status data
The multiple-transaction-log manager handles shared row locks efficiently. There are no replication-related practical implications of this directory.
pg_notify – LISTEN/NOTIFY data
In the pg_notify directory, the system stores information about LISTEN/NOTIFY (the async backend interface). There are no practical implications related to replication.
pg_replslot – replication slots
Information about replication slots is stored in the pg_replslot directory.
pg_serial – information about committed serializable transactions
Information about serializable transactions is stored in the pg_serial directory. We need to store information about commits of serializable transactions on disk to ensure that long-running transactions will not bloat the memory. A simple Segmented Least Recently Used (SLRU) structure is used internally to keep track of these transactions.
pg_snapshots – exported snapshots
The pg_snapshots directory contains information needed by the PostgreSQL snapshot manager. In some cases, snapshots have to be exported to disk to avoid consuming too much memory. After a crash, these exported snapshots will be cleaned out automatically.
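A snapshot is exported using the pg_export_snapshot() function; the identifier it returns (illustrative here) can then be picked up by other sessions to work with a consistent view of the data:
test=# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN
test=# SELECT pg_export_snapshot();
 pg_export_snapshot
--------------------
 000003A1-1
(1 row)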
pg_stat – permanent statistics
The pg_stat directory contains permanent statistics for the statistics subsystem.
pg_stat_tmp – temporary statistics data
Temporary statistical data is stored in the pg_stat_tmp directory. This information is needed for most pg_stat_* system views (and therefore, it is also needed for the underlying functions providing the raw data).
pg_subtrans – subtransaction data
In this directory, we store information about subtransactions. The pg_subtrans (and pg_clog) directories provide permanent (on-disk) storage of transaction-related information. Only a limited number of their pages is kept in memory, so in many cases, there is no need to actually read from the disk. However, if there's a long-running transaction or a backend sitting idle with an open transaction, it may be necessary to read and write this information to the disk. These directories also allow the information to survive server restarts.
pg_tblspc – symbolic links to tablespaces
The pg_tblspc directory is a highly important directory. In PostgreSQL, a tablespace is simply an alternative storage location, represented by a directory holding the data.
The important thing here is that if a database instance is fully replicated, we simply cannot rely on the fact that all the servers in the cluster use the same disk layout and the same storage hardware. There can easily be scenarios in which a master needs a lot more I/O power than a slave, which might just be around to function as a backup or standby. To allow users to handle different disk layouts, PostgreSQL will place symlinks in the pg_tblspc directory. The database will blindly follow those symlinks to find the tablespaces, regardless of where they are.
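A quick sketch illustrates the mechanism. Creating a tablespace (the path and the OID shown here are hypothetical) immediately results in such a symlink:
test=# CREATE TABLESPACE fast_ssd LOCATION '/ssd/data';
CREATE TABLESPACE
[hs@paula pg_tblspc]$ ls -l
lrwxrwxrwx 1 hs hs 9 Feb 12 13:20 16399 -> /ssd/data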
This gives end users enormous power and flexibility. Controlling storage is essential to replication as well as to performance in general. Keep in mind that these symlinks can be adjusted by the administrator (users can do that if a slave server in a replicated setup does not use the same filesystem layout as the master). This should be carefully thought over.
Tip
We recommend using the trickery outlined in this section only when it is really needed. For most setups, it is absolutely recommended to use the same filesystem layout on the master as well as on the slave. This can greatly reduce the complexity of backups and replay. Having just one tablespace reduces the workload on the administrator.
pg_twophase – information about prepared transactions
PostgreSQL has to store information about two-phase commit. While two-phase commit can be an important feature, the directory itself will be of little importance to the average system administrator.
pg_xlog – the PostgreSQL transaction log (WAL)
The PostgreSQL transaction log is the essential directory we have to discuss in this chapter. The pg_xlog directory contains all the files related to the so-called XLOG. If you have used PostgreSQL in the past, you might be familiar with the term Write-Ahead Log (WAL). XLOG and WAL are two names for the same thing. The same applies to the term transaction log. All three of these terms are widely in use, and it is important to know that they actually mean the same thing.
The pg_xlog directory will typically look like this:
[hs@paula pg_xlog]$ ls -l
total 81924
-rw------- 1 hs staff 16777216 Feb 12 16:29 000000010000000000000001
-rw------- 1 hs staff 16777216 Feb 12 16:29 000000010000000000000002
-rw------- 1 hs staff 16777216 Feb 12 16:29 000000010000000000000003
-rw------- 1 hs staff 16777216 Feb 12 16:29 000000010000000000000004
-rw------- 1 hs staff 16777216 Feb 12 16:29 000000010000000000000005
drwx------ 2 hs staff 4096 Feb 11 18:14 archive_status
What you see is a bunch of files that are always exactly 16 MB in size (the default setting). The name of an XLOG file is generally 24 hexadecimal characters long. The numbering is always hexadecimal, so the system will count "… 9, A, B, C, D, E, F, 10" and so on.
One important thing to mention is that the size of the pg_xlog directory will not vary wildly over time, and it is totally independent of the type of transactions you are running on your system. The size of the XLOG is determined by the postgresql.conf parameters, which will be discussed later in this chapter. In short, no matter whether you are running small or large transactions, the size of the XLOG will be the same. You can easily run a transaction as big as 1 TB with just a handful of XLOG files. This might not be too efficient performance-wise, but it is technically perfectly feasible.
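You can also ask a running server where in the XLOG it is currently writing, and map that position to a filename (the position shown here is illustrative):
test=# SELECT pg_current_xlog_location();
 pg_current_xlog_location
--------------------------
 0/5000280
(1 row)
test=# SELECT pg_xlogfile_name(pg_current_xlog_location());
     pg_xlogfile_name
--------------------------
 000000010000000000000005
(1 row)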
postgresql.conf – the central PostgreSQL configuration file
Finally, there is the main PostgreSQL configuration file. All configuration parameters can be changed in postgresql.conf, and we will use this file extensively to set up replication and tune our database instances to make sure that our replicated setups provide us with superior performance.
Tip
If you happen to use prebuilt binaries, you might not find postgresql.conf directly inside your data directory. It is more likely to be located in some subdirectory of /etc/ (on Linux/Unix) or in your place of choice on Windows. The precise location is highly dependent on the type of operating system you are using. The typical location of data directories is /var/lib/pgsql/data, but postgresql.conf is often located under /etc/postgresql/9.X/main/postgresql.conf (as in Ubuntu and similar systems), or under /etc directly.
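If in doubt, a running instance will happily tell you where its configuration file lives (the path shown here is just an example from a Debian-style system):
test=# SHOW config_file;
                config_file
-------------------------------------------
 /etc/postgresql/9.4/main/postgresql.conf
(1 row)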
Writing one row of data
Now that we have gone through the disk layout, we will delve further into PostgreSQL and see what happens when PostgreSQL is supposed to write one row of data. Once you have mastered this chapter, you will have fully understood the concept behind the XLOG.
Note that in this section, which is about writing a row of data, we have simplified the process a little to make sure that we can stress the main point and the ideas behind the PostgreSQL XLOG.
A simple INSERT statement
Let's assume that we are using a simple INSERT statement, like the following:
INSERT INTO foo VALUES ('abcd');
As one might imagine, the goal of an INSERT operation is to somehow add a row to an existing table. We have seen in the previous section, the one about the disk layout of PostgreSQL, that each table will be associated with a file on the disk.
Let's perform a mental experiment and assume that the table we are dealing with here is 10 TB in size. PostgreSQL will see the INSERT operation and look for some spare space inside this table (either using an existing block or adding a new one). For the purpose of this example, we have simply put the data into the second block of the table.
Everything will be fine as long as the server actually survives the transaction. What happens if somebody pulls the plug after just writing abc instead of the entire data? When the server comes back up after the reboot, we will find ourselves in a situation where we have a block with an incomplete record, and to make it even funnier, we might not even have the slightest idea where this block containing the broken record might be.
In general, tables containing incomplete rows in unknown places can be considered to be corrupted tables. Of course, systematic table corruption is nothing the PostgreSQL community would ever tolerate, especially not if problems like that are caused by clear design failures.
Tip
We have to ensure that PostgreSQL will survive interruptions at any given point in time without losing or corrupting data. Protecting your data is not something nice to have but an absolute must-have. This is what is commonly referred to as the "D" in Atomicity, Consistency, Isolation, and Durability (ACID).
To fix the problem that we have just discussed, PostgreSQL uses the so-called WAL or simply XLOG. Using WAL means that a log is written ahead of data. So, before we actually write data to the table, we make log entries in a sequential order, indicating what we are planning to do to our underlying table. The following diagram shows how things work in WAL:
[Figure: how WAL works; (1) write to the log, (2) mark the transaction as done, (3) write data to the table]
As we can see from this diagram, once we have written data to the log in (1), we can go ahead and mark the transaction as done in (2). After that, data can be written to the table, as marked with (3).
Note
We have left out the memory part of the equation; this will be discussed later in this section.
Let's demonstrate the advantages of this approach with two examples:
- Crashing during WAL writing
- Crashing after WAL writing
Crashing during WAL writing
To make sure that the concept described in this chapter is rock-solid and working, we have to make sure that we can crash at any point in time without risking our data. Let's assume that we crash while writing the XLOG. What will happen in this case? Well, in this case, the end user will know that the transaction was not successful, so they will not rely on the success of the transaction anyway.
As soon as PostgreSQL starts up, it can go through the XLOG and replay everything necessary to make sure that PostgreSQL is in a consistent state. So, if we don't make it through WAL-writing, it means that something nasty has happened and we cannot expect a write to be successful.
A WAL entry will always know whether it is complete or not. Every WAL entry has a checksum inside, so PostgreSQL can instantly detect problems if somebody tries to replay a broken WAL. This is especially important during a crash, when we might not be able to rely on the very latest data written to the disk. The WAL will automatically sort out those problems during crash recovery.
Tip
If PostgreSQL is configured properly (fsync = on, and so on), crashing is perfectly safe at any point in time (unless the hardware is damaged; malfunctioning RAM and the like are, of course, always a risk).
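For reference, these are the postgresql.conf settings most relevant to crash safety. All of them default to on, and switching any of them off trades safety for speed:
fsync = on                  # flush WAL to disk at commit time
synchronous_commit = on     # report success only after the WAL flush
full_page_writes = on       # write full pages after checkpoints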
Crashing after WAL writing
Let's now assume we have made it through WAL writing, and a crash happens shortly after that, maybe while writing to the underlying table. What if we only manage to write ab instead of the entire data (which is abcd in this example)?
Well, in this case, we will know during replay what is missing. Again, we go to the WAL and replay what is needed to make sure that all of the data is safely in our data table.
While it might be hard to find data in a table after a crash, we can always rely on the fact that we can find data in the WAL. The WAL is sequential, and if we simply keep track of how much data has been written, we can always continue from there. The XLOG will lead us directly to the data in the table, and it always knows where a change has been made or should have been made. PostgreSQL does not have to search for data in the WAL; it just replays it from the proper point onward. Keep in mind that replaying the XLOG is very efficient, and can be a lot faster than the original write to the master.
Note
Once a transaction has made it to the WAL, it cannot be lost easily any more.
Read consistency
Now that we have seen how a simple write is performed, we will see what impact writes have on reads. The next diagram shows the basic architecture of the PostgreSQL database system:
[Figure: the basic architecture of the PostgreSQL database system; data files, the transaction log, and the shared buffer]
For the sake of simplicity, we can see a database instance as an entity consisting of three major components:
- PostgreSQL data files
- The transaction log
- Shared buffer
In the previous sections, we have already discussed data files. You have also read some basic information about the transaction log itself. Now we have to extend our model and bring another component into the picture: the memory component of the game, which is the shared buffer.
The purpose of the shared buffer
The shared buffer is the I/O cache of PostgreSQL. It helps cache 8K blocks, which are read from the operating system, and also helps hold back writes to the disk to optimize efficiency (how this works will be discussed later in this chapter).
Note
The shared buffer is important as it affects performance.
However, performance is not the only issue we should be focused on when it comes to the shared buffer. Let's assume that we want to issue a query. For the sake of simplicity, we also assume that we need just one block to process this read request.
What happens if we perform a simple read? Maybe we are looking for something simple, such as a phone number or a username, given a certain key. The following list shows in a highly simplified way what PostgreSQL will do under the assumption that the instance has been freshly restarted:
- PostgreSQL will look up the desired block in the cache (as stated before, this is the shared buffer). It will not find the block in the cache of a freshly started instance.
- PostgreSQL will ask the operating system for the block.
- Once the block has been loaded from the OS, PostgreSQL will put it into the first queue of the cache.
- The query will be served successfully.
Let's assume that the same block will be used again, this time by a second query. In this case, things will work as follows:
- PostgreSQL will look up the desired block and come across a cache hit.
- Then PostgreSQL will figure out that a cached block has been reused, and move it from a lower level of cache (Q1) to a higher level of cache (Q2). Blocks that are in the second queue will stay in the cache longer, because they have proven to be more important than those that are only at the Q1 level. This behavior can be observed directly, as the example below shows.
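You can watch the cache at work with the BUFFERS option of EXPLAIN. A first run of a query reports blocks read from outside the shared buffer, while repeating it reports cache hits (the plan output is abbreviated and illustrative):
test=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t_test WHERE id = 42;
                          QUERY PLAN
---------------------------------------------------------------
 Seq Scan on t_test (actual time=0.020..0.021 rows=1 loops=1)
   Filter: (id = 42)
   Buffers: shared read=1
 ...
Running the very same statement again will show Buffers: shared hit=1 instead.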
Tip
How large should the shared buffer be? Under Linux, a value of up to 8 GB is usually recommended. On Windows, values below 1 GB have proven to be useful (as of PostgreSQL 9.2). From PostgreSQL 9.3 onwards, higher values might be useful and feasible under Windows. Insanely large shared buffers on Linux can actually be a deoptimization. Of course, this is only a rule of thumb; special setups might need different settings. Also keep in mind that some work goes on in PostgreSQL constantly, and the best practices might vary over time.
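The size is controlled by the shared_buffers parameter in postgresql.conf; changing it requires a restart. The value below is merely an example for a dedicated Linux machine, not a universal recommendation:
shared_buffers = 4GB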
Mixed reads and writes
Remember that in this section, it is all about understanding writes to make sure that our ultimate goal—full and deep understanding of replication—can be achieved. Therefore, we have to see how reads and writes go together:
- A write will come in.
- PostgreSQL will write to the transaction log to make sure that consistency can be reached.
- Then PostgreSQL will grab a block inside the PostgreSQL shared buffer and make the change in the memory.
- A read will come in.
- PostgreSQL will consult the cache and look for the desired data.
- A cache hit occurs, and the query will be served.
What is the point of this example? Well! As you might have noticed, we have never talked about actually writing to the underlying table. We talked about writing to the cache, to the XLOG, and so on, but never about the real data file.
Tip
In this example, whether the row we have written is in the table or not is totally irrelevant. The reason is simple; if we need a block that has just been modified, we will never make it to the underlying table anyway.
It is important to understand that data is usually not sent to a data file directly after or during a write operation. It makes perfect sense to write data a lot later to increase efficiency. The reason this is important is that it has subtle implications on replication. A data file itself is worthless because it is neither necessarily complete nor correct. To run a PostgreSQL instance, you will always need data files along with the transaction log. Otherwise, there is no way to survive a crash.
From a consistency point of view, the shared buffer is here to complete the view a user has of the data. If something is not in the table, logically, it has to be in memory.
In the event of a crash, the memory will be lost, and so the XLOG will be consulted and replayed to turn data files into a consistent data store again. Under all circumstances, data files are only half of the story.
Note
In PostgreSQL 9.2 and before, the shared buffer was exclusively in the SysV/POSIX shared memory or simulated SysV on Windows. PostgreSQL 9.3 (already released at the time of writing this book) started using memory-mapped files. This is a lot faster under Windows and makes no difference in performance under Linux, but is slower under BSDs. BSD developers have already started fixing this. Moving to mmap was done to make configuration easier, because mmap is not limited by the operating system. It is unlimited as long as enough RAM is around. SysV's memory is limited and a high amount of it can usually be allocated only if the operating system is tweaked accordingly. The default configuration of shared memory varies from distribution to distribution in the case of Linux. SUSE tends to be a bit more relaxed, while Red Hat, Ubuntu, and some others tend to be more conservative.
The format of the XLOG
Many users have asked me during consulting or training sessions how the XLOG is really organized internally. As this information is rarely described in books, I decided to include a small section about this internal information here, hoping that you will find this little excursion into PostgreSQL's internals interesting and useful.
Basically, an XLOG entry identifies the object it is supposed to change using three variables:
- The OID (object id) of the database
- The OID of the tablespace
- The OID of the underlying data file
This triplet is a unique identifier for any data-carrying object in the database system. Depending on the type of operation, various types of records are used (commit records, B-tree changes, heap changes, and so on).
In general, the XLOG is a stream of records lined up one after the other. Each record is identified by the location in the stream. As already mentioned in this chapter, a typical XLOG file is 16 MB in size (unless changed at compile time). Inside those 16 MB segments, data is organized in 8 K blocks. XLOG pages contain a simple header consisting of:
- The 16-bit "magic" value
- Flag bits
- The timeline ID
- The XLOG position of this page
- The length of data remaining from the last record on the previous page
In addition to this, each segment (16 MB file) has a header consisting of various fields as well:
- System identifier
- Segment size and block size
The segment and block size are mostly available to check the correctness of the file.
Finally, each record has a special header with the following contents (a decoded example follows this list):
- The XLOG record structure
- The total length of the record
- The transaction ID that produced the record
- The length of record-specific data, excluding header and backup blocks
- Flags
- The record type (for example, XLOG checkpoint, transaction commit, and B-tree insert)
- The start position of the previous record
- The checksum of this record
- Record-specific data
- Full-page images
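If you are curious about how these records look on a real system, the pg_xlogdump utility shipped with PostgreSQL (available since 9.3) decodes XLOG files into a human-readable form; the output below is abbreviated and illustrative:
$ pg_xlogdump 000000010000000000000001 | head -n 1
rmgr: Heap len (rec/tot): 35/67, tx: 1000, lsn: 0/01000028, prev 0/01000000, desc: insert: rel 1663/16384/16385; tid 0/1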
XLOG addresses are highly critical and must not be changed, otherwise the entire system breaks down.
The data structure outlined in this chapter is as of PostgreSQL 9.4. It is quite likely that changes will happen in PostgreSQL 9.5 and beyond.