
POSTGRESQL DATABASE





  1. POSTGRESQL DATABASE

  2. Table of Contents • Introduction • Features • Architecture • Configuration Parameters • User Accessible files • Backups • Restore and Recovery

  3. Introduction • PostgreSQL is an object-relational database management system (ORDBMS). • It was developed at the University of California at Berkeley Computer Science Department. • It runs on all major operating systems, including Linux, UNIX, and Windows.

  4. Features • Foreign Data Wrappers • BRIN Indexing • Sorting optimization • Row Level Security • Multi Version Concurrency Control (MVCC) • Routine Vacuuming • Parallel VACUUMing

  5. PostgreSQL Architecture

  6. Architecture Fundamentals • PostgreSQL is modular: it consists of many subsystems that interact and work together to access the database. • These subsystems are: • Client Processes. • Server Processes. • Database Control. • Utility Processes.

  7. Conceptual Architecture

  8. Client Processes • This is the first subsystem of the PostgreSQL architecture, and it has two components: • Client Application. • Client Interface Library. • Client Application: whichever program the user is using to interact with the database. • Client Interface Library: an API through which applications talk to the database, so that the database understands exactly what the user wants.

  9. Server Processes • The PostgreSQL server is made of two separate objects: • Postmaster. • Backend Server Process (postgres server).

  10. Postmaster

  11. Postmaster • The postmaster receives all front-end requests to access the database and then chooses a backend instance to match up with the front end. • It does this by creating the backend instance and then linking the front end to it. • After this link is established, the front end and the backend instance can communicate without going through the postmaster, freeing it up to listen for more requests. • When the postmaster is not receiving a call or creating a backend process, it sits idle, waiting for another event to occur. • A backend instance, by contrast, is started by the postmaster and ends when the user logs off.
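The accept-and-hand-off pattern described above can be sketched as a small simulation. The class and method names (`Postmaster`, `Backend`, `accept`) are illustrative only, not PostgreSQL source symbols; in reality each backend is a separate operating-system process.

```python
import itertools

class Backend:
    """Stand-in for a backend server process serving one client."""
    def __init__(self, pid, client):
        self.pid = pid        # in real PostgreSQL this is an OS process ID
        self.client = client  # the front end now talks to this backend directly

class Postmaster:
    def __init__(self):
        self._pids = itertools.count(1000)
        self.backends = []    # live backend instances

    def accept(self, client):
        """Create a backend for the client, link them, and go back to listening."""
        backend = Backend(next(self._pids), client)
        self.backends.append(backend)
        return backend        # the postmaster is no longer in the loop for this client

    def disconnect(self, backend):
        """A backend instance ends when its user logs off."""
        self.backends.remove(backend)

pm = Postmaster()
b1 = pm.accept("psql-session-1")
b2 = pm.accept("app-session-2")
pm.disconnect(b1)             # b2 keeps running, untouched by b1's logoff
```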

  12. Postgres Server (Backend Process)

  13. Postgres Server (Backend Process) • Generally it uses a pipe-and-filter architecture: each component of the server performs a highly specific task on its input and passes its result to a successor in sequential order. • At the beginning it is provided with a query, which it incrementally transforms into a set of data that is returned to the client. • Each stage makes use of a separate data table, whether for symbols or libraries. • The Executor Stage even goes so far as to access the data tables themselves.

  14. Postgres Server (Backend Process)

  15. Postgres Server (Backend Process) • Whenever a client passes a query to the server process, it goes through the following stages: • Parsing Stage • Planning Stage • Executor Stage

  16. Postgres Server (Backend Process) • Parsing Stage: in this stage the client sends the input and the parser parses it, doing the following jobs: • Checks the SQL syntax. • Looks up objects in the system catalog. • Builds the query tree.

  17. Postgres Server (Backend Process) • Traffic Cop: • The traffic cop is the agent responsible for differentiating between simple and complex query commands. • Transaction-control commands such as BEGIN and ROLLBACK are simple enough not to need additional processing, whereas commands such as SELECT and queries with JOINs are passed on to the rewriter. • This discrimination reduces processing time by performing minimal work on the simple commands and devoting more time to the complex ones.
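The traffic cop's routing decision can be sketched as a keyword check. This is an illustrative simplification, not PostgreSQL source; the exact set of commands treated as "simple" is an assumption here.

```python
# Assumed set of transaction-control commands that skip the full pipeline.
SIMPLE_COMMANDS = {"BEGIN", "COMMIT", "ROLLBACK", "START", "END"}

def route(query: str) -> str:
    """Return which path the (hypothetical) traffic cop sends a command down."""
    first_word = query.strip().split()[0].upper().rstrip(";")
    if first_word in SIMPLE_COMMANDS:
        return "execute directly"        # minimal processing for simple commands
    return "rewrite -> plan -> execute"  # full pipeline for complex queries
```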

  18. Postgres Server (Backend Process) • Rewriter: • This stage begins by rewriting the parse tree received from the Parsing Stage. • This involves expanding subqueries into lower-level commands.

  19. Postgres Server (Backend Process) • Planner: the planner does the following: • If the number of possibilities is below a threshold, it exhaustively considers every possible path for the parse tree and finds the one that is least computationally expensive. • Otherwise it uses a genetic algorithm to find an efficient path.
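This strategy switch can be sketched as follows. PostgreSQL really does fall back from exhaustive search to a genetic optimizer (GEQO) once a query joins enough relations; the function below is a simplified stand-in, with the threshold taken from the default `geqo_threshold` setting.

```python
GEQO_THRESHOLD = 12  # PostgreSQL's default geqo_threshold

def choose_search_strategy(num_relations: int) -> str:
    """Pick a join-search strategy the way the planner's GEQO switch does."""
    if num_relations < GEQO_THRESHOLD:
        return "exhaustive"  # consider every join order, pick the cheapest
    return "genetic"         # GEQO: near-optimal order in bounded time
```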

  20. Postgres Server (Backend Process) • Executor Stage: the executor receives the plan tree from the planner and does the following: • It extracts the necessary data from the database tables. • The executor recursively walks the plan, which is a tree, and performs the required action at each node. If information from a sub-path is needed, the executor visits the children before executing the node; otherwise the node’s action can be executed immediately. • At this stage, “the executor mechanism is used to evaluate all four basic SQL query types: SELECT, INSERT, UPDATE, and DELETE.” The output is delivered back to the client.
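The recursive walk can be illustrated with a toy plan tree. The node types (`scan`, `filter`, `limit`) and the sample data are invented for illustration; they are not PostgreSQL's executor node names.

```python
def execute(node):
    """Recursively evaluate a plan node, visiting children first."""
    child_rows = [execute(child) for child in node.get("children", [])]
    if node["op"] == "scan":
        return node["rows"]                              # leaf: read from a table
    if node["op"] == "filter":
        return [r for r in child_rows[0] if node["pred"](r)]
    if node["op"] == "limit":
        return child_rows[0][: node["n"]]
    raise ValueError(f"unknown node type: {node['op']}")

# A plan roughly like: SELECT ... WHERE balance > 100 LIMIT 2
plan = {
    "op": "limit", "n": 2,
    "children": [{
        "op": "filter", "pred": lambda r: r["balance"] > 100,
        "children": [{
            "op": "scan",
            "rows": [{"balance": 50}, {"balance": 200},
                     {"balance": 300}, {"balance": 150}],
        }],
    }],
}
result = execute(plan)  # rows flow leaf -> filter -> limit, then to the client
```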

  21. Database Control

  22. Database Control • Memory architecture in PostgreSQL can be classified into two broad categories: • Local memory area – allocated by each backend process for its own use. • Shared memory area – used by all processes of a PostgreSQL server. • The following subsections describe these briefly.

  23. Shared Memory Area

  24. Shared Memory Area • Shared memory in PostgreSQL mainly enables communication between the postmaster and the postgres server processes, because: • child processes cannot propagate information back to the parent; • shared memory makes the same information available to all processes. • Shared memory is also needed to cache pages.

  25. Shared Memory Area • While the backend server instances take care of clients, there are many services that are offered over the entire system. • These utilities are used by most of the subsystems in PostgreSQL.

  26. Shared Memory Area

  27. Shared Memory Creation • When the server handles an incoming request, the postmaster calls fork() to create a child to handle it. • The child is an entire copy of the postmaster process, except for its “pid”. • A shared memory area is allocated by a PostgreSQL server when it starts up. • This area is divided into several fixed-size sub-areas.

  28. Shared Memory Area • Databases are mainly a series of tables. Each table gets a subdirectory, which consists of a number of files. • A single file holds up to 1 GB of data (staying well below the 32-bit 2 GB size limit). The file is treated as a series of 8 K blocks. • shared_buffers sets the size of the cache; the buffer cache is a simple array of that size. • Each cache entry points to an 8 KB block (sometimes called a page) of data. • Each buffer entry has a tag. The tag says which file (and therefore table) this entry is buffering and which block of that file it contains. • When a process wants a buffer, it asks BufferAlloc for the file/block.
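The tag-based lookup can be sketched as a dictionary keyed by (file, block). The function below is modeled on, but is not, PostgreSQL's actual BufferAlloc; eviction, pinning, and locking are all omitted.

```python
BLOCK_SIZE = 8192  # 8 KB pages, as described above

# Pretend storage: one table file with a single block.
disk = {("accounts", 0): b"\x00" * BLOCK_SIZE}
buffer_cache = {}  # tag (relfile, blockno) -> cached page

def buffer_alloc(relfile: str, blockno: int) -> bytes:
    """Return the buffer for a file/block, reading from 'disk' only on a miss."""
    tag = (relfile, blockno)
    if tag not in buffer_cache:          # miss: fetch the block into the cache
        buffer_cache[tag] = disk[tag]
    return buffer_cache[tag]             # hit: serve straight from shared memory

page = buffer_alloc("accounts", 0)
```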

  29. Shared Memory Area

  30. Shared Memory Area

  31. Shared Memory Area

  32. Shared Memory Area

  33. Tuples • Tuple headers contain: • xmin: transaction ID of the inserting transaction. • xmax: transaction ID of the replacing/deleting transaction (initially NULL). • forward link: link to the newer version of the same logical row, if any. • Basic idea: a tuple is visible if xmin is valid and xmax is not. “Valid” means “either committed or the current transaction”.
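The visibility rule above can be written down directly. This is a sketch of just that rule; real PostgreSQL visibility checks also involve snapshots, subtransactions, and hint bits, which are ignored here.

```python
def is_valid(xid, committed, current_xid):
    """A transaction ID is 'valid' if it committed or is the current transaction."""
    return xid is not None and (xid in committed or xid == current_xid)

def tuple_is_visible(xmin, xmax, committed, current_xid):
    """Visible when xmin is valid and xmax is not (i.e. no valid deleter)."""
    return (is_valid(xmin, committed, current_xid)
            and not is_valid(xmax, committed, current_xid))
```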

  34. Transaction Status in Log Files

  35. Shared Memory Area • Shared Buffers: • When pages from a table or index are read from the OS, they are read into shared_buffers. • The backends reference the pages and their contents right there in shared memory. • Its default value is 32 MB. • Its minimum value is 128 KB.

  36. Shared Memory Area • WAL Buffers: • These buffer data to be written to the WAL files – the amount of shared memory used for WAL data that has not yet been written to disk. • The default value is -1, which selects a size equal to 1/32nd (about 3%) of shared_buffers.
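The "-1" auto-sizing rule works out to a simple calculation. The clamp bounds below (64 KB floor, 16 MB ceiling, i.e. one WAL segment) reflect the documented behavior of recent releases, but the function itself is an illustrative simplification.

```python
def auto_wal_buffers_kb(shared_buffers_kb: int) -> int:
    """Size wal_buffers as 1/32nd (~3%) of shared_buffers, clamped to sane bounds."""
    size = shared_buffers_kb // 32
    return min(max(size, 64), 16 * 1024)  # clamp to [64 KB, 16 MB]
```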

  37. Shared Memory Area • CLOG Buffers: • These contain the status of the transactions in the xlog – whether they are committed or rolled back. • This is used for recovering the server state after a crash. • The server state can be recovered by replaying the log and discarding transactions that never committed or were rolled back.

  38. Shared Memory Area • Lock Space:

  39. Shared Memory Area • Lock Space: We can also use explicit locking at the table level, for example:
  BEGIN;
  LOCK TABLE my_accounts IN SHARE ROW EXCLUSIVE MODE;
  UPDATE my_accounts SET balance = balance - $withdrawal WHERE accountid = 'checking';
  SELECT SUM(balance) FROM my_accounts;
  -- commit if sum >= 1000.00, else abort

  40. Local Memory Area

  41. Local Memory Area • Each backend process allocates a local memory area for query processing. • Each area is divided into several sub-areas, whose sizes are either fixed or variable.

  42. Local Memory Area • temp_buffers: • Sets the maximum number of temporary buffers used by each database session. • These are session-local buffers used only for access to temporary tables. • The default is eight megabytes (8 MB). • The minimum value is 800 KB. • The executor uses this area for storing temporary tables.

  43. Local Memory Area • work_mem: • Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. • The value defaults to four megabytes (4 MB). • The executor uses this area for sorting tuples for ORDER BY and DISTINCT operations, and for joining tables by merge-join and hash-join operations.

  44. Local Memory Area • maintenance_work_mem: • Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. • It defaults to 64 megabytes (64MB).
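The three local-memory parameters above are all set in `postgresql.conf`. The values below simply restate the defaults given in the slides; they are illustrative, not tuning recommendations.

```
# postgresql.conf -- illustrative values only (the stated defaults)
temp_buffers = 8MB              # per-session buffers for temporary tables
work_mem = 4MB                  # per sort/hash operation, per backend
maintenance_work_mem = 64MB     # VACUUM, CREATE INDEX, ADD FOREIGN KEY
```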

  45. Utility Process

  46. Utility Process • BG Writer: • There is a separate server process called the background writer, whose function is to issue writes of “dirty” (new or modified) shared buffers. • It writes out shared buffers so that server processes handling user queries need not wait for a write to occur.

  47. BG Writer • The parameters of the BG writer are: • bgwriter_delay: • Specifies the delay between activity rounds for the background writer. • In each round the writer issues writes for some number of dirty buffers. • The default value is 200 milliseconds.

  48. BG Writer • bgwriter_lru_maxpages: • In each round, no more than this many buffers will be written by the background writer. • The default value is 100 buffers. • bgwriter_lru_multiplier: • The number of dirty buffers written in each round is based on the number of new buffers that have been needed by server processes during recent rounds. • The average recent need is multiplied by bgwriter_lru_multiplier. • The default is 2.0.
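The per-round budget described above can be sketched as a one-line calculation. The constants mirror the stated defaults; the function is an illustrative simplification of the real algorithm, which also tracks a running estimate of buffer demand.

```python
BGWRITER_LRU_MAXPAGES = 100    # default cap on buffers written per round
BGWRITER_LRU_MULTIPLIER = 2.0  # default headroom factor over recent demand

def buffers_to_write(recent_avg_need: float) -> int:
    """Buffers the background writer targets this round: demand x multiplier, capped."""
    target = int(recent_avg_need * BGWRITER_LRU_MULTIPLIER)
    return min(target, BGWRITER_LRU_MAXPAGES)
```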

  49. Utility Process • WAL Writer: • The main function of the WAL writer is to flush WAL segments to disk. • Note that it does this far more often than checkpoints occur. • The default value of its delay (wal_writer_delay) is 200 ms.

  50. Utility Process • Auto Vacuum: • PostgreSQL has an optional but highly recommended feature called autovacuum, whose purpose is to automate the execution of VACUUM and ANALYZE commands. • When enabled, autovacuum checks for tables that have had a large number of inserted, updated, or deleted tuples. • These checks use the statistics collection facility; therefore, autovacuum cannot be used unless “track_counts” is set to true.
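The dependency on the statistics facility translates into two `postgresql.conf` settings. This fragment simply restates the requirement above; both settings shown are real parameters and are on by default in modern releases.

```
# postgresql.conf -- autovacuum depends on the statistics collection facility
track_counts = on     # required; autovacuum reads these statistics
autovacuum = on       # launch VACUUM/ANALYZE workers automatically
```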
