Evaluated version: Debian 11 + Docker CE
Perform the standard Docker installation.
Dependencies for Swarm to work:
# apt install -y curl apt-transport-https software-properties-common ca-certificates
The ideal environment for working with a cluster has at least three server nodes.
For easier identification, I will use the hostnames NODE01, NODE02, ... for the servers.
Hostname    IP              Role
NODE01      192.168.1.111   Manager
NODE02      192.168.1.112   Worker
NODE03      192.168.1.113   Worker
The cluster is initialized on the Manager node:
$ docker swarm init --advertise-addr 192.168.1.111:2377 <- Manager IP
Swarm initialized: current node (j95qbfdyf80gs64c5zaq1ynh4) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-2p3h66....xibmh6f <IP_MANAGER>:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
To add the Workers to Manager NODE01, paste on each worker the token generated earlier on NODE01:
$ docker swarm join --token SWMTKN-1-2p3h66....xibmh6f <IP_MANAGER>:2377
This node joined a swarm as a worker.
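The join command can also be assembled non-interactively: on the Manager, `docker swarm join-token -q worker` prints only the token. A minimal sketch, assuming the NODE01 address from the table above (the token value here is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch: build the worker join command from the quiet token output.
MANAGER_IP="192.168.1.111"   # NODE01 address from the table above

build_join_cmd() {
    local token="$1" manager_ip="$2"
    echo "docker swarm join --token ${token} ${manager_ip}:2377"
}

# On a real manager: TOKEN=$(docker swarm join-token -q worker)
TOKEN="SWMTKN-1-EXAMPLE"     # placeholder; the real token comes from the manager
build_join_cmd "$TOKEN" "$MANAGER_IP"
```

The generated line is what gets pasted on each worker.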
To create Manager redundancy, run on NODE01:
$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-2p3h66...a3rcpjq <IP_MANAGER>:2377
Run the generated command on another server that has not yet been configured as a worker.
$ docker swarm join --token SWMTKN-1-2p3h66...a3rcpjq <IP_MANAGER>:2377
This node joined a swarm as a manager.
On any Manager node, run the command below to list the cluster nodes.
$ docker node ls
The managers share their state using the Raft consensus algorithm to perform discovery and activity management. The link has a demonstration of how it works.
On the Manager, run the command below to list the cluster members.
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
zew34ux3rp0yecxxjlpla1494 * NODE01 Ready Active Leader 20.10.3
n3rh778ni4edojh3z9gwwx995 NODE02 Ready Active 20.10.3
ujz39dmpuob7efdz05u97ixol NODE03 Ready Active 20.10.3
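Because the managers rely on Raft, a swarm with N managers keeps quorum only while a majority is up: it tolerates the loss of (N-1)/2 managers. A quick sketch of that arithmetic (not part of the original notes):

```shell
#!/usr/bin/env bash
# Raft quorum math: majority = N/2 + 1, tolerated manager failures = (N - 1) / 2.
fault_tolerance() {
    local managers="$1"
    echo $(( (managers - 1) / 2 ))
}

fault_tolerance 3   # a 3-manager swarm tolerates losing 1 manager
fault_tolerance 5   # a 5-manager swarm tolerates losing 2 managers
```

This is why odd manager counts (3, 5, 7) are the usual recommendation: adding a fourth manager does not increase fault tolerance over three.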
To remove a node, run:
$ docker node ls <- to find the ID of the node to remove
$ docker node demote <ID>
$ docker node rm <ID>
Or
$ docker node rm --force NODE01
After removing all the workers, use the command below to remove the last Manager node from the Swarm:
$ docker swarm leave
$ docker swarm leave --force
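Before removing a node it is usually safer to drain it first, so its tasks are rescheduled elsewhere. A sketch with a DRY_RUN guard that only prints the commands (the node name is hypothetical; this helper is not from the original notes):

```shell
#!/usr/bin/env bash
# Drain a node, then remove it from the swarm.
# With DRY_RUN=1 (the default) the commands are printed instead of executed.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "DRY: $*"
    else
        "$@"
    fi
}

remove_node() {
    local node="$1"
    run docker node update --availability drain "$node"
    # If the node is a manager, demote it first:
    #   run docker node demote "$node"
    run docker node rm "$node"
}

remove_node NODE03   # hypothetical node name from the table above
```

Set DRY_RUN=0 once the printed sequence looks right.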
From this point on it is possible to deploy Portainer inside the Swarm cluster.
Start by deploying Portainer on a Manager; create the file below to deploy Portainer.
# vi portainer-agent-stack.yml
version: '3.9'  # adjust to the Compose version supported by your Docker release
services:
  agent:
    image: portainer/agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer-ce
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - <PATH_portainer>:/data  # adjust for persistence
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
networks:
  agent_network:
    driver: overlay
    attachable: true
volumes:
  portainer_data:
    external: false  # optional
# docker stack deploy -c portainer-agent-stack.yml portainer
# docker service ls
Remove a service
# docker service rm portainer_agent portainer_portainer
# vi deploy_portainer_service
#!/usr/bin/env bash
PORTAINER="/home/CONTAINER/portainer"
DOCKER="/usr/bin/docker"
MKDIR="/usr/bin/mkdir"

if [[ -d $PORTAINER ]]; then
    echo "Directories already exist."
else
    $MKDIR -p $PORTAINER
    echo "Directories created."
fi

### REMOVE
$DOCKER service rm Portainer
$DOCKER service rm Portainer_Agent
$DOCKER network rm portainer_agent_network
sleep 2
$DOCKER system prune --all -f
sleep 2

### INSTALL
$DOCKER network create --driver overlay --attachable portainer_agent_network
$DOCKER service create \
    --name Portainer_Agent \
    --env TZ='America/Sao_Paulo' \
    --network portainer_agent_network \
    --mode global \
    --constraint 'node.platform.os == linux' \
    --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
    --mount type=bind,src=/var/lib/docker/volumes,dst=/var/lib/docker/volumes \
    portainer/agent
$DOCKER service create \
    --name Portainer \
    --env TZ='America/Sao_Paulo' \
    --network portainer_agent_network \
    --publish 9000:9000 \
    --publish 8000:8000 \
    --mount type=bind,src=$PORTAINER,dst=/data \
    --replicas=1 \
    --constraint 'node.role == manager' \
    portainer/portainer-ce -H "tcp://tasks.Portainer_Agent:9001" --tlsskipverify
$DOCKER swarm update --task-history-limit 0

# UPDATE
#$DOCKER service update Portainer_Agent
#$DOCKER service update Portainer
Remove a service
# docker service rm <Nome_Container>
Remove a service whose tasks are stuck in shutdown or failed state
# docker swarm update --task-history-limit 0
# docker service update <Nome_Container>
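The two commands above can be wrapped in a small helper; `docker service update --force` restarts the service's tasks even when nothing in the spec changed. A sketch with a DRY_RUN guard (the service name is from the deploy script above; this wrapper is not part of the original notes):

```shell
#!/usr/bin/env bash
# Clear stale shutdown/failed task history, then force-redeploy a service.
# With DRY_RUN=1 (the default) the commands are printed instead of executed.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "DRY: $*"
    else
        "$@"
    fi
}

refresh_service() {
    local svc="$1"
    run docker swarm update --task-history-limit 0
    run docker service update --force "$svc"
}

refresh_service Portainer   # service name used by the deploy script above
```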
# vi deploy_haproxy_service
#!/usr/bin/env bash
HAPROXY="/home/CONTAINER/haproxy"
CERTS="/home/CONTAINER/haproxy/certs"
DOCKER="/usr/bin/docker"
MKDIR="/usr/bin/mkdir"

if [[ -d $HAPROXY && -d $CERTS ]]; then
    echo "Directories already exist."
else
    $MKDIR -p $HAPROXY $CERTS
    echo "Directories created."
fi

$DOCKER service rm HAProxy
sleep 2
$DOCKER system prune --all -f
sleep 2
$DOCKER service create \
    --name HAProxy \
    --publish 80:80 --publish 443:443 --publish 1936:1936 \
    --env TZ='America/Sao_Paulo' \
    --mount type=bind,src=$HAPROXY,dst=/usr/local/etc/haproxy,ro \
    --mount type=bind,src=$CERTS,dst=/etc/ssl/private,ro \
    --replicas=1 \
    --restart-condition any \
    haproxytech/haproxy-debian:latest
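The service above mounts $HAPROXY as /usr/local/etc/haproxy, so that directory must contain a haproxy.cfg before the service starts. A minimal sketch of such a file; the backend names and addresses are placeholders (here pointing at the webservice1 port from the examples further down), not values from the original notes:

```
global
    log stdout format raw local0

defaults
    mode http
    log global
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_nodes

backend web_nodes
    balance roundrobin
    # Placeholder addresses; point these at real published service ports.
    server node01 192.168.1.111:5001 check
    server node02 192.168.1.112:5001 check

listen stats
    bind *:1936
    stats enable
    stats uri /
```

Port 1936 matches the `--publish 1936:1936` above and exposes the HAProxy stats page.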
WORK IN PROGRESS BELOW
* If Portainer is to be used as the container manager, every node defined as a Manager needs the portainer-agent container, as described at the link.
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
lsk2vwd6u1e0w3eniz5v3oowr * NODE001 Ready Active Reachable 19.03.2
srm4ccd4uzz1z70ijiexhbt1w NODE002 Ready Active 19.03.2
qymkq51ymxbfyd4aol15h4kt7 NODE003 Ready Active Leader 19.03.2
$ curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml
$ docker stack deploy --compose-file=portainer-agent-stack.yml PORTAINER
* Creating a service that operates across the cluster
$ docker service create --name webservice1 --network ClusterNet --replicas 3 -p 5001:80 francois/apache-hostname
$ docker service create --name HELLO-WORLD1 --network ClusterNet --replicas 3 -p 9010:80 dockercloud/hello-world
$ docker service ls
$ docker ps <- Run on all three nodes
* Create script ...
* Scaling the container instances
$ docker service scale webservice1=10
$ docker service ps
$ docker ps <- Run on all three nodes
$ docker service scale webservice1=6
$ docker service ps
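After scaling, the REPLICAS column of `docker service ls` (e.g. `6/6`) shows whether the service has converged. A small sketch that parses that value; the service name and the `--filter`/`--format` usage follow the examples above:

```shell
#!/usr/bin/env bash
# Parse a docker REPLICAS value such as "3/6" and report convergence.
is_converged() {
    local replicas="$1"
    local running="${replicas%%/*}"   # text before the slash
    local desired="${replicas##*/}"   # text after the slash
    [ "$running" = "$desired" ]
}

# On a real manager:
#   REPLICAS=$(docker service ls --filter name=webservice1 --format '{{.Replicas}}')
if is_converged "6/6"; then echo "converged"; else echo "waiting"; fi
if is_converged "3/6"; then echo "converged"; else echo "waiting"; fi
```

Wrapping `is_converged` in a sleep loop gives a simple wait-until-ready check after `docker service scale`.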
$ docker stack deploy -c docker-compose-PORTAINER-9000.yml PORTAINER
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
wvr64d6safoy PORTAINER_portainer replicated 0/3 portainer/portainer:latest *:9000->9000/tcp
$ docker service scale PORTAINER_portainer=3
PORTAINER_portainer scaled to 3
overall progress: 0 out of 3 tasks
1/3: preparing [=================================> ]
2/3: preparing [=================================> ]
3/3: preparing [=================================> ]
$ docker service ps PORTAINER_portainer
$ docker stack rm PORTAINER
$ docker system prune --all <- deletes all stopped containers and unused images
* Dependencies
https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/#replicated-and-global-services
https://blog.geekhunter.com.br/orquestracao-de-conteineres-docker-swarm-portainer/
https://www.katacoda.com/portainer/scenarios/deploying-to-swarm
https://dockerswarm.rocks/portainer/
https://bestestredteam.com/2018/09/12/docker-swarm-and-portainer-deployment/
https://itnext.io/administering-two-or-more-docker-swarm-clusters-with-portainerio-682d01a92b25
Here is the compose file that I am using when starting the container.
version: '3.3'
services:
  dokuwiki:
    image: 'bitnami/dokuwiki:latest'
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
    volumes:
      - type: volume
        source: "dokuwiki"
        target: "/bitnami"
    deploy:
      mode: global
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 30s
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
      placement:
        constraints:
          - node.hostname==SWARM_MASTER_HOSTNAME_HERE
volumes:
  dokuwiki:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=IP_ADDR_HERE,nolock,soft,rw
      device: ":/nfsshare/docker"
Here are the options that I currently have on my NFS server export
rw,sync,no_root_squash,anonuid=1000,anongid=1000,no_acl
version: '3.3'
services:
  portainer:
    image: portainer/portainer-ce
    ports:
      - 9000:9000
    networks:
      - interna
    volumes:
      -
      -
    deploy:
      mode: replicated
      replicas: 1
networks:
  interna:
$ docker stack deploy -c compose-file.yml PORTAINER
$ docker service ps
$ docker service ps
$ docker node update --label-add "az=sa-east-1" <NODE>
$ docker service create --placement-pref "spread=node.label.az" demo
$ docker swarm ca --help <- rotation of the CA key (default 90-day expiry)
$ docker swarm init --autolock <- locks administrative commands
$ docker swarm update --autolock
$ docker service update --update-order --update-delay
$ docker service logs --tail 10 demo | sort -k3 -k4
https://www.youtube.com/watch?v=fH_yuV2bm9E
https://www.concrete.com.br/2018/02/06/vamos-conhecer-o-docker-swarm/
https://platform9.com/blog/kubernetes-docker-swarm-compared/
https://dev.to/totalcloudio/docker-swarm-vs-kubernetes--what-you-really-need-to-know--4kjb
https://medium.com/faun/kubernetes-vs-docker-swarm-whos-the-bigger-and-better-53bbe76b9d11
https://docs.docker.com/engine/swarm/admin_guide/
* Defining the network for the cluster service
$ docker network create -d overlay --subnet <IP_REDE_DIFERENTE>/24 ClusterNet <- Run on a Manager node
$ docker network ls
- Note: run the docker network ls command on the other hosts and you will see that the network was replicated to all cluster members.
Go to App Templates and deploy the Portainer Agent to manage the cluster resources. <- Versions below 2.0
For versions 2.0 and above, deploy the Portainer Agent via a Stack. Link.
To investigate: after running the Portainer Agent script, did the App Templates menu appear, or is there a timeout before it loads in the CE version?
* Directory and file structure in STORAGE