AWS Aurora PostgreSQL Best Practices

Database best practices.

1. Add missing indexes, and drop unused indexes to avoid extra overhead for autovacuum.
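As a sketch, a confirmed-unused index can be dropped without blocking writes by using DROP INDEX CONCURRENTLY (the index name below is hypothetical):

```sql
-- Drop a confirmed-unused index without taking an exclusive lock
-- on the table; note this cannot run inside a transaction block.
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_old_status;
```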

2. Autovacuum logging should be enabled and reviewed regularly.

When autovacuum is enabled, the autovacuum daemon is responsible for carrying out vacuum operations on bloated tables. The daemon relies on the statistics collector process for accurate table statistics.
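As a quick check, the current autovacuum-related settings can be listed with the query below; this includes log_autovacuum_min_duration, which controls autovacuum logging and on RDS/Aurora is changed through the DB parameter group:

```sql
-- Show all autovacuum parameters plus the logging threshold;
-- log_autovacuum_min_duration = -1 means autovacuum actions are not logged.
SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'autovacuum%'
   OR name = 'log_autovacuum_min_duration';
```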

3. Go through the links below for best practices and come up with a guideline document:

https://aws.amazon.com/blogs/database/understanding-autovacuum-in-amazon-rds-for-postgresql-environments/

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.Autovacuum.html

4. Avoid clustered indexes in PostgreSQL; they have no equivalent benefit after the SQL Server to PostgreSQL migration.

5. For tables with a high number of file reads into the data buffer, review the workload and add indexes for those tables.
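Tables doing heavy disk reads (a low buffer-cache hit ratio) can be identified from pg_statio_user_tables; a sketch:

```sql
-- Tables ordered by blocks read from disk; a low hit ratio
-- suggests missing indexes or an undersized buffer cache.
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(100.0 * heap_blks_hit
             / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 20;
```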

6. Partitioning should be done on the right key; only then do reads improve and indexes and vacuum benefit, because queries should not have to scan all partitions (or all partition indexes).
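A minimal sketch of range partitioning on a date key (the table and column names are hypothetical); when the partition key appears in the WHERE clause, the planner prunes the other partitions instead of scanning all of them:

```sql
-- Parent table partitioned by the query's natural filter column.
CREATE TABLE orders (
    order_id   bigint        NOT NULL,
    order_date date          NOT NULL,
    amount     numeric(10,2)
) PARTITION BY RANGE (order_date);

CREATE TABLE orders_2020_q1 PARTITION OF orders
    FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');
CREATE TABLE orders_2020_q2 PARTITION OF orders
    FOR VALUES FROM ('2020-04-01') TO ('2020-07-01');

-- Only orders_2020_q1 is scanned here (verify with EXPLAIN).
SELECT sum(amount)
FROM orders
WHERE order_date >= '2020-02-01' AND order_date < '2020-03-01';
```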

7. Consider upgrading to version 12, but only after RDS Proxy is available for PostgreSQL 12; AWS should provide a timeline.

   a) In PostgreSQL 12, partition locking is much more relaxed, so contention is reduced when there is a large number of partitions.

   b) Take note: partitioning is to be implemented only after the upgrade to version 12.8. The AWS team will share best practices for upgrading from 11 to 12.

8. Table for index monitoring: pg_stat_user_indexes. Based on this view we can see which indexes are actually in use; drop candidates should not be part of a unique constraint or primary key constraint.
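Unused indexes that are not backing a primary key or unique constraint can be listed with a query along these lines:

```sql
-- Indexes never scanned since statistics were last reset,
-- excluding those that enforce unique/primary key constraints.
SELECT s.schemaname, s.relname, s.indexrelname, s.idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
  AND NOT i.indisprimary
ORDER BY s.schemaname, s.relname;
```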

9. Check the queries with EXPLAIN plans before removing an index and before adding it back.


For this to happen, create a clone environment and:

a. test the EXPLAIN plan

b. test the query
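On the clone, each candidate query can be checked before and after the index change; EXPLAIN (ANALYZE, BUFFERS) shows both the chosen plan and the real execution cost (the query below is a placeholder):

```sql
-- Run on the clone, not production: ANALYZE actually executes the query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE order_date = '2020-02-15';
```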

10. apg_ccm: cluster-level setting for cluster cache management, relevant to failover. (Check with the team who can retrieve this setting from production.)

    Raise a ticket with the AWS team to find out where this setting resides (in the DB cluster or elsewhere). Points to take note of:

    1 writer and 1 writer/reader should be dedicated for failover; this should not use existing readers that serve the application.

    Out of the current 4 readers, remove 1 reader and keep it for writer failover, as a separate endpoint reserved for failover.

11. Perform cloning of DBs to UAT by creating the clones automatically using CloudFormation.

12. These are the reasons the load is not distributed evenly:

1. DNS cache: entries live only for the connection lifetime (the application is caching DNS entries). The time to live is 5 minutes, which matches the connection-pooling lifetime; the DNS cache is set in the JVM settings. (Find out the value of this setting on the API EC2 instance, but it is better to retain the DNS cache.)

   For .NET we also have to check the DNS cache behavior.

2. Application connection pooling.

13. How do we set up RDS Proxy?

For RDS Proxy, two endpoints are needed; it is supported only for PostgreSQL 11.

1 will be for the writer

1 will be for the reader

14. PostgreSQL 12 and RDS Proxy: not supported yet, with no timeline; need to check with AWS.

Objects that are not supported by PostgreSQL.

Oracle vs PostgreSQL 

Indexes when migrating from Oracle to PostgreSQL