PostgreSQL is a free and open-source relational database management system.
Sample commands were tested with Ubuntu 19.10.
A tool for easy online testing and sharing of database problems and their solutions.
https://jdbc.postgresql.org/download/
The PostgreSQL service listens, by default, on TCP port 5432
sudo apt-get update
sudo apt-get install postgresql
Using terraform, set the new engine version.
Upgrade matrix for Aurora PostgreSQL DB:
Find out version we have:
aws rds describe-db-clusters --db-cluster-identifier "cluster-defensatf-dev" --region eu-west-1 --query 'DBClusters[*].EngineVersion' --output text
Find out to which versions we can upgrade:
aws rds describe-db-engine-versions --engine aurora-postgresql --region eu-west-1 --engine-version 10.18 --query 'DBEngineVersions[].ValidUpgradeTarget[?IsMajorVersionUpgrade == `true`].{EngineVersion:EngineVersion}' --output text
Upgrade to version 11.13 using --apply-immediately (unless waiting for the maintenance window is preferred):
aws rds modify-db-cluster --db-cluster-identifier "cluster-defensatf-dev" --engine-version 11.13 --allow-major-version-upgrade --apply-immediately
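The upgrade command above can be wrapped in a small dry-run helper. The cluster identifier, target version and region are the example values from this section, and the function only prints the aws command instead of executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the major-version upgrade command above. The identifiers
# are the example values from this section; the function only prints the
# aws command, it does not execute it.
set -u

upgrade_cmd() {
  # $1 = cluster identifier, $2 = target engine version, $3 = region
  printf '%s\n' "aws rds modify-db-cluster --db-cluster-identifier \"$1\" --engine-version $2 --allow-major-version-upgrade --apply-immediately --region $3"
}

upgrade_cmd "cluster-defensatf-dev" "11.13" "eu-west-1"
```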
Eg: bumping the engine version from 12.19 to 16.3 in a PostgreSQL database w/ role 'Instance'.
a) In the AWS RDS console, choose the db to be upgraded and click 'Modify' in the actions dropdown menu
b) Choose the new PostgreSQL version. Click 'Continue'
c) Choose 'Apply immediately' and confirm the modification
If the certificate authority used by the RDS is no longer supported by AWS then it must be replaced BEFORE upgrading the engine version
aws rds modify-db-instance --db-instance-identifier postgresql-nomenclato-test --ca-certificate-identifier rds-ca-rsa2048-g1 --apply-immediately
If the upgrade uses a default db parameter group that changes 'rds.force_ssl' from 0 to 1 then it can be reverted this way:
Create a new custom db parameter group based on the target PostgreSQL version (eg: 16)
Modify the value of the parameter rds.force_ssl from 1 to 0
In the database configuration tab, change the db parameter group to the newly created one and wait until the changes are applied (it takes about 4 minutes)
Reboot the database
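The same revert can be done with the aws CLI. A dry-run sketch (it only prints the commands): 'pg16-nossl' is a hypothetical parameter group name, the instance id is the example from above, and 'postgres16' assumes a non-Aurora RDS PostgreSQL 16 instance.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the rds.force_ssl revert via the aws CLI. "pg16-nossl" is
# a hypothetical parameter group name; "postgres16" is the family for a
# (non-Aurora) RDS PostgreSQL 16 instance. Commands are printed, not executed.
set -u

pg_name="pg16-nossl"
instance="postgresql-nomenclato-test"

out=$(cat <<EOF
aws rds create-db-parameter-group --db-parameter-group-name $pg_name --db-parameter-group-family postgres16 --description "force_ssl disabled"
aws rds modify-db-parameter-group --db-parameter-group-name $pg_name --parameters "ParameterName=rds.force_ssl,ParameterValue=0,ApplyMethod=immediate"
aws rds modify-db-instance --db-instance-identifier $instance --db-parameter-group-name $pg_name --apply-immediately
aws rds reboot-db-instance --db-instance-identifier $instance
EOF
)
echo "$out"
```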
Service start, status & stop
·Using Systemd
sudo systemctl start postgresql.service
sudo systemctl status postgresql.service
sudo systemctl stop postgresql.service
·Using SystemV
sudo service postgresql start
sudo service postgresql status
sudo service postgresql stop
Switching Over to the postgres Account
$ sudo -i -u postgres
$ psql
postgres=# \q
postgres=# commands
-List databases
\list
-Create database 'mydb'
postgres=# create database mydb;
CREATE DATABASE
postgres=# create user mydb with encrypted password 'mydb';
CREATE ROLE
postgres=# grant all privileges on database mydb to mydb;
GRANT
-Delete database 'mydb'
UPDATE pg_database SET datallowconn = 'false' WHERE datname = 'mydb';
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'mydb';
DROP DATABASE mydb;
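On PostgreSQL 13 and later, the three statements above can be collapsed into a single one: DROP DATABASE ... WITH (FORCE) terminates the remaining sessions itself. A sketch (the command is only printed here; run it against a real server to apply it):

```shell
# PostgreSQL 13+ only: WITH (FORCE) terminates remaining sessions before the
# drop, replacing the datallowconn / pg_terminate_backend dance above.
# Only printed here; run it against a real server to apply it.
drop_cmd="psql -U postgres -c \"DROP DATABASE mydb WITH (FORCE);\""
echo "$drop_cmd"
```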
Export to .csv, including escaping of double quotes ("), from a bash script:
echo $(PGPASSWORD=mypassword psql -h localhost -p 5432 -U myuser -d mydb -c "\COPY (select * from my_table) TO 'myfile.csv' HEADER CSV;")
Export to .csv from a bash script, taking the \COPY command from an external file:
file_psql_command='psqlcopyfiletest.sql'
echo $(PGPASSWORD=gpradoc psql -h localhost -p 5432 -U gpradoc -d gpradoc -f <(tr -d '\n' < $file_psql_command ) 2>&1)
and the external file w/ the \COPY command (keep the leading whitespace on the continuation lines: since tr removes the newlines, it is what keeps the tokens separated):
\COPY (
  SELECT INSTITUTION_CODE,
         LEARNING_UNIT_CODE,
         NUMBER_OF_STUDENTS
  FROM LEARNING_UNIT
) TO stdout HEADER CSV;
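The tr trick above can be seen in isolation: psql needs \COPY to arrive as a single line, so the multi-line file is flattened before being fed via -f <(...). A self-contained demonstration (the leading spaces on continuation lines keep the tokens apart once the newlines are gone):

```shell
#!/usr/bin/env bash
# Demonstrates the tr flattening used above: \COPY must be a single line for
# psql, so the multi-line file is collapsed before being fed via -f <(...).
set -u

file_psql_command=$(mktemp)
cat > "$file_psql_command" <<'EOF'
\COPY (
  SELECT INSTITUTION_CODE,
         NUMBER_OF_STUDENTS
  FROM LEARNING_UNIT
) TO stdout HEADER CSV;
EOF

out=$(tr -d '\n' < "$file_psql_command")   # one line; leading spaces keep tokens apart
echo "$out"
rm -f "$file_psql_command"
```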
It's important to preserve the sequences (and their current values); the following pg_dump command does so.
pg_dump -h dbhost -U dbuser -f ./dbname-env.fc -F c dbname
Eg dump:
pg_dump -h cluster-dbname-env.cluster-clzcuurwraqe.eu-west-1.rds.amazonaws.com -p 5432 --username dbuser --password -F c --file ./dbname-env.fc dbname
Eg restore:
pg_restore -h cluster-dbname-env.cluster-clzcuurwraqe.eu-west-1.rds.amazonaws.com -p 5432 --username dbuser --password -d dbname ./dbname-env.fc
(a custom-format dump, -F c, must be restored with pg_restore; psql can only replay plain-text SQL dumps)
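When re-restoring into a non-empty database, pg_restore's --clean --if-exists flags drop objects before recreating them, and --no-owner helps on RDS where the dump's owner role may not exist. A sketch with the example host/db names from above (the command is only printed here):

```shell
# Sketch: restore of a custom-format (-F c) dump into an existing database.
# --clean --if-exists drops objects first; --no-owner skips ownership
# restoration (useful on RDS). Host and names are the examples from above.
cmd="pg_restore --clean --if-exists --no-owner -h cluster-dbname-env.cluster-clzcuurwraqe.eu-west-1.rds.amazonaws.com -p 5432 --username dbuser -d dbname ./dbname-env.fc"
echo "$cmd"
```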
If the destination database isn't new, it can be emptied by dropping all tables and sequences. Replace 'MY_TC' with the proper catalog (usually the dbname):
Drop all tables:
SELECT 'DROP TABLE ' || table_name ||';'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_catalog='MY_TC' and table_schema='public';
Drop all sequences:
SELECT 'drop sequence if exists "' || relname || '" cascade;' FROM pg_class WHERE relkind = 'S';
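The two SELECTs above only generate the DROP statements; to actually run them, psql's tuples-only output can be piped into a second psql. A sketch with example connection settings, only printed here (localhost/myuser/mydb are placeholders):

```shell
# Sketch: generate-and-execute in one pipeline. -At prints bare rows (no
# headers, no alignment) so the generated statements can feed a second psql.
# localhost/myuser/mydb are example connection settings.
out=$(cat <<'EOF'
psql -h localhost -U myuser -d mydb -At -c "SELECT 'DROP TABLE ' || table_name || ' CASCADE;' FROM information_schema.tables WHERE table_type='BASE TABLE' AND table_schema='public';" | psql -h localhost -U myuser -d mydb
EOF
)
echo "$out"
```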
Warning: the view 'information_schema.sequences' doesn't list all sequences (it might include, eg, 'hibernate_sequence' but miss other sequences)
(a) See active sessions (query executing time, etc.)
SELECT pid, age(clock_timestamp(), query_start), usename, query FROM pg_stat_activity WHERE query != '<IDLE>' AND query NOT ILIKE '%pg_stat_activity%' ORDER BY query_start desc;
(b) Kill a detected conflicting session
SELECT pg_terminate_backend(pid);
where 'pid' is the session to be killed
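Steps (a) and (b) can be combined to terminate every query running longer than a threshold; the 5-minute interval and the superuser connection below are example choices, and the command is only printed:

```shell
# Sketch: kill all queries active for more than 5 minutes, excluding the
# current session. The interval and the connection are example values;
# the command is printed, not executed.
out=$(cat <<'EOF'
psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'active' AND query_start < now() - interval '5 minutes' AND pid <> pg_backend_pid();"
EOF
)
echo "$out"
```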
[https://www.eversql.com/sql-query-optimizer/] online suggests creation of indexes for SQL queries. It supports multiple databases.
Note: it requires a company e-mail address, eg: @cou.edu. Login isn't possible w/ a personal e-mail address such as @gmail.com
The "Free" account only allows 1 SQL optimization (not repeatable)
Sample 1:
(a) Choose "Optimize a single query"
select *
from mytable
where f1='a' and f2='b'
order by f3
(b) Optional, upload the database metadata
It seems to be a feature for subscribers only.
(c) See the index creation suggestion
CREATE INDEX mytable_idx_f1_f2_f3 ON mytable (f1,f2,f3);
Sample 2:
Query
SELECT
p.id,
(SELECT
count(DISTINCT c.id)
FROM
comments AS c
WHERE
c.PostId = p.id LIMIT 1)
FROM
posts AS p
WHERE
p.AnswerCount > 3
AND p.title LIKE '%optimized%'
AND p.CreationDate >= '2017-01-01 00:00:00'
ORDER BY
p.CreationDate LIMIT 100
Suggestion (note: shown in MySQL syntax, w/ backticks and ALTER TABLE ... ADD INDEX)
ALTER TABLE `comments` ADD INDEX `comments_idx_postid_id` (`PostId`,`id`);
ALTER TABLE `posts` ADD INDEX `posts_idx_answercoun_creationda_id` (`AnswerCount`,`CreationDate`,`id`);
ALTER TABLE `posts` ADD INDEX `posts_idx_creationdate` (`CreationDate`);
Eg: dagger
Reference "Embedded PostgreSQL for Spring Boot Tests" (2025-06-05):
https://www.baeldung.com/spring-boot-embed-postgresql-testing
Reference "Zonky Embedded Database" (see 'Quick Start'):
https://github.com/zonkyio/embedded-database-spring-test
Easy integration: Add the maven dependency and declare the @AutoConfigureEmbeddedDatabase annotation on a test class.
pom.xml
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.18</version>
</dependency>
<dependency>
<groupId>io.zonky.test</groupId>
<artifactId>embedded-database-spring-test</artifactId>
<version>2.6.0</version>
<scope>test</scope>
</dependency>