Start / Stop / Restart service on Minion
salt 'target' service.start "service name" (likewise service.stop / service.restart)
Restart Minion on Win target
salt 'target' cmd.run 'start powershell "Restart-Service -Name salt-minion"'
Restart Minion on Linux target
salt 'target' cmd.run 'service salt-minion restart'
Execute a script remotely
salt target cmd.exec_code python 'import sys; print(sys.version)'
>> 2.7.8 GCC 4.9.1
Check service on minion
salt target service.status httpd
Check if service is available
salt target service.available httpd
get all services
salt target service.get_all
reload a service config (avoids restart)
salt target service.reload httpd
start | stop | restart a service
salt target service.start httpd
run command in background (ie nohup cmd &)
salt target cmd.run_bg "iperf3 -s"
run command as another user
salt target cmd.run 'mycmd' runas=jsmith
by OS grain
salt -G os:Windows cmd.run "net stop Firewall"
by other grains
salt -C 'G@server_type:app and G@env:prod' state.highstate
target EC2 instances only
salt -G uuid:ec2\* test.ping
compound match
salt -C 'G@server_type:web and clo*' state.sls nginx
target by multiple grain values
salt -C 'G@environment:prod and G@component:accounts' test.ping
Nodegroup match
salt -N ny_db_servers cmd.run 'ps -ef | grep mysql'
regex OR
salt -E "(nyweb|db5)" test.ping
by pillar value
salt -I 'role:webserver' test.ping
Add Minions to Master
salt-key -L (show pending to be accepted)
salt-key -A (accept all pending)
salt-key -a target (accept by hostname)
Remove inactive minions from Salt
salt-run manage.down removekeys=True
Remove minions by name
salt-key -D targetName
Test Connection
salt 'target' test.ping
Diagnostics
salt target status.all_status // gets all info
status.cpu_info
status.cpustats
status.uptime
status.diskusage // or disk.usage
status.loadavg
status.meminfo
status.netdev // network device
status.netstats //network stats
status.procs
status.version //system version
status.vmstats //virtual mem stats
status.w //who is logged in
Show Minions by State (Up/Down)
salt-run manage.up
salt-run manage.down
salt-run manage.status (show all by status)
show available file roots
on master
salt-run config.get file_roots
on minion
salt-call config.get file_roots
Compliance and Audit
to get a compliance result, run a State check with test=True
salt \* state.highstate test=True
This returns any differences between the existing configuration and what is defined in the top file
Show Salt Master version
salt --versions-report
Show Salt Minion version
salt-call --versions-report
Start Master in Debug mode
salt-master --log-level=debug
Restart everything on Master:
pkill salt-minion //Kill minion
pkill salt-syndic // Kill Syndic
salt-run cache.clear_all //Clear all cache
salt '*' saltutil.sync_grains //Sync grains
salt-master -d //Start master daemon
salt-minion -d //Start minion daemon
salt-syndic -d //Start syndic daemon
Agent Env Info
show all information about a minion (lots of data)
salt minion status.all_status
show memory
salt minion status.meminfo
show disk usage
salt minion status.diskusage
show who is logged in
salt minion status.w
Show Grain data
salt '*' grains.ls
salt '*' grains.items
get specific Grain
salt cent7 grains.get selinux
cent7:
----------
enabled:
True
enforced:
Enforcing
get multiple grains
salt cent7 grains.item selinux serialnumber zmqversion
set a Grain data on a node
salt cent7 grains.set 'apps:Myapp:port' 2500
salt cent7 grains.item apps
cent7:
----------
apps:
----------
Myapp:
----------
port:
2500
All grain data is stored on the minion in /etc/salt/grains file
if adding more data manually, refresh Grains so the Master picks up the changes
salt target saltutil.refresh_grains
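a hand-edited /etc/salt/grains file is plain YAML; a minimal sketch (values are illustrative):
roles:
  - webserver
  - memcache
deployment: datacenter4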
Use grain in a state file
apache:
pkg.installed:
{% if grains['os'] == 'RedHat' %}
- name: httpd
{% endif %}
show JSON output
salt target grains.item ipv4 --out=json
{
"target": {
"ipv4": [
"10.0.2.15",
"127.0.0.1",
"192.168.56.102"
]
}
}
Use grain as a variable
{% set nodename = grains['nodename'] %}
base:
'*':
- common
- packages
- users
- servers.{{ nodename }}
show mine data
salt \* mine.get \* x509.get_pem_entries
Verbose output (timeout 300 sec)
salt 'target' state.highstate -t 300 -v
Show package version
salt 'target' pkg.version apache
install package on minions
salt 'target' pkg.install apache
Uninstall pkg
salt 'target' pkg.remove 'npp'
salt 'target' pkg.purge 'npp'
Show Installed Packages or Software
salt 'target' pkg.list_pkgs
show all packages that need updates
salt target pkg.list_upgrades
upgrade all packages
salt target pkg.upgrade
Windows (Chocolatey)
install chocolatey
salt wintarget chocolatey.bootstrap
install pkg using choco
salt mrxwin7 chocolatey.install 7zip
Show all Salt jobs run history
salt-run jobs.list_jobs
Show active Salt jobs
salt-run jobs.active // returns a Job ID
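look up the results of a past job by its JID (same jobs runner)
salt-run jobs.lookup_jid <job id>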
Show Salt jobs currently running on a minion
salt '*' saltutil.running
Kill active job
salt 'target' saltutil.kill_job $JOB_ID
salt '*' saltutil.term_job <job id>
Clear Job cache
salt '*' saltutil.clear_cache
examples of reactor matching
/etc/salt/master.d/reactor.conf
reactor:
- 'sayhello':
- /srv/reactor/test.sls
/srv/reactor/test.sls
{% if data['id'].startswith('web') %}
sayhello:
local.state.apply:
- tgt: {{ data['id'] }}
- arg:
- say-hello
local.cmd.run:
- tgt: minion1
- arg:
- "echo 'hello' > /tmp hello"
{% endif %}
you can kick off this Reaction via an Event
minion> salt-call event.send "sayhello" "{ name: Joe, age: 23 }"
Run highstate in debug
salt-call -l debug state.highstate
Run specific state in debug
salt-call -l debug state.sls elasticsearch
show highstate process (debug YAML syntax errors)
salt-call state.show_highstate
call a highstate, show only changes, timeout=10min
salt-call state.highstate test=true --state-output=changes -t 600
show specific State details
salt 'target' state.show_sls apache
show only Changed and Failed during run
modify /etc/salt/master and /etc/salt/minion, restart Master after change
state_verbose: True
state_output: mixed
start minion in debug, see connection errors
salt-minion -l debug
https://docs.saltstack.com/en/latest/topics/troubleshooting/minion.html
if the Master is not seeing Minion key requests, add iptables rules to the Master,
root@master# iptables -I INPUT -s 172.31.23.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
root@master# iptables -I INPUT -s 172.31.25.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# reject everything else,
root@master# iptables -A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT
Log Jinja variables to Minion
{% do salt.log.error('testing jinja logging') -%}
show Options passed to a State (ie, test=true)
{% do salt.log.error(opts['test']) -%}
Output variables from State file,
show_var:
  test.show_notification:
    - text: This is my var {{ var }}
Exit w failure message
fail_run:
test.fail_without_changes:
- name: your message here
store credentials in SDB and pass them to pillar in encrypted form; each minion can then use the credentials from its pillars to run formulas
create credential file on salt-master
vim /etc/salt/.cred.yaml
---
ec2_login:
id: abc123
key: xyz444
region:
us-west: somedata
chown root:root /etc/salt/.cred.yaml && chmod 600 /etc/salt/.cred.yaml
configure /etc/salt/master.d/creds.conf to use SDB yaml file
saltcred:
driver: yaml
files: ["/etc/salt/.cred.yaml"]
restart salt-master
access secrets from cmd line
salt-run sdb.get sdb://saltcred/ec2_login:region:us-west
>> somedata
access secrets from pillar file
ec2cred: {{ salt['sdb.get']('sdb://saltcred/ec2_login:region:us-west') }}
cat minion.d/_schedule.conf
schedule:
  __mine_interval: {enabled: true, function: mine.update, jid_include: true, maxrunning: 2, minutes: 60, return_job: false, run_on_start: true}
  job1: {enabled: true, function: test.ping, jid_include: true, maxrunning: 1, name: job1, run: true, seconds: 30}
  job2:
    args: [date >> /tmp/date]
    enabled: true
    function: cmd.run
    jid_include: true
    maxrunning: 1
    name: job2
    seconds: 10
minion needs to be restarted to pick up schedule
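the schedule can also be managed at runtime with the schedule execution module:
salt target schedule.add job3 function='test.ping' seconds=3600
salt target schedule.list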
install salt on host
cd /opt
python3 -m venv salt
cd /opt/salt
./bin/pip install salt
ln -s /opt/salt/bin/salt-call /usr/bin/salt-call
create minion config file
vim /etc/salt/minion
add contents (see the sketch below)
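a minimal masterless-minion config is an assumption here; the key settings point the minion at the local file roots:
file_client: local
file_roots:
  base:
    - /srv/saltstack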
copy Saltstack repo to the host, place in /srv/saltstack
make sure salt-minion service is DISABLED and STOPPED
test salt-call state apply
salt-call test.ping
salt-call state.apply formula.myapp
Slots - get result of salt cmd execution into a state file
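a minimal sketch of the Slots syntax (the file path is hypothetical); the slot is resolved on the minion at state-run time:
deploy_hostname:
  file.managed:
    - name: /etc/myapp/hostname.conf
    - contents: __slot__:salt:cmd.run(hostname)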
persistent in-memory data
https://saltstack.github.io/docs-saltproject-io/en/latest/ref/modules/all/salt.modules.data.html
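the data execution module keeps simple key/value data in the minion's local datastore, e.g.:
salt target data.update my_key my_value
salt target data.get my_key
salt target data.items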
Modules List
https://docs.saltstack.com/en/latest/salt-modindex.html
State List
https://docs.saltstack.com/en/latest/ref/states/all/index.html#all-salt-states
Salt source files
/usr/lib/python<version>/site-packages/salt
Bootstrap Install
https://repo.saltstack.com/#bootstrap
wget -O bootstrap_salt.sh https://bootstrap.saltstack.com
install Master
sh bootstrap_salt.sh -M
install specific Salt version
sh bootstrap_salt.sh git v2015.8.8
Check if file contains a string (true/false)
salt '*' file.contains /etc/ssh/sshd_config 'Port'
Check if a file on the Salt minions contains a certain regex (search file on minions):
salt "*" file.contains_regex /etc/resolv.conf "timeout.4"
check if file is a file or directory
salt target file.stats /etc/hosts
Find a file
salt '*' file.find /etc name=host\*.\*
result
- /etc/host.conf
- /etc/hosts.allow
- /etc/hosts.deny
copy small file (< 100KB) from Master to minion
salt-cp 'target' /opt/file (source) /opt (destination)
copy dir from Master /srv/salt area to minion
salt 'target' cp.get_dir salt://myDir /target/dir
copy large file from Master /srv/salt/distribution folder to minion
salt 'target' cp.get_file salt://distribution/myFile.tar /tmp/myFile.tar
copy file from one minion to another using MinionFS (only works after Salt 2016.3.2)
https://docs.saltstack.com/en/latest/topics/tutorials/minionfs.html
add MinionFS to master conf file
vi /etc/salt/master
add these lines:
minionfs_mountpoint: salt://minionfs
file_recv: True
restart Master
get the file from the minion and store it in MinionFS
salt 'target' cp.push /path/to/file/or/dir/on/minion
files are stored on Master in here:
/var/cache/salt/master/minions/<minion>
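pushed files can then be served back out through the mountpoint; a sketch, assuming a minion named minion1 previously pushed /etc/app.conf:
get_pushed_file:
  file.managed:
    - name: /etc/app.conf
    - source: salt://minionfs/minion1/etc/app.conf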
copy file from Minion to Master
on Master, set "file_recv: True" in /etc/salt/master, restart Master
to copy file,
salt \* cp.push /path/to/file/on/minion
all files are stored on Master /var/cache/salt/master/minions/<minion>/files
add host entry to a minion
salt target hosts.add_host 192.168.55.100 hostname
Replace contents of file with new value
salt '*' file.sed /etc/ssh/sshd_config 'Port 22' 'Port 2201'
Create folder
salt '*' file.makedirs /tmp/testFolder/ (for windows use native Win syntax, ie C:/temp/dir)
Delete folder
salt '*' file.remove /tmp/testFolder
Create new file
salt '*' file.touch /tmp/testFolder/emptyFile
manage file content in State file
/etc/fstab:
  file.line:
    - content: "proc /proc"
    - mode: insert
    - after: "<regex of the line to insert after>"
replace contents of file
update_modprobe:
file.replace:
- name: /etc/modprobe.d/salt_cis.conf
- pattern: "^install {{ fs }} /bin/true"
- repl: install {{ fs }} /bin/true
- append_if_not_found: True
create symlink
symlink:
  file.symlink:
    - name: /path/to/A
    - target: /symlink/path/A
create directory
/home/qb/q3:
file.directory:
- user: qb
- group: qb
- dir_mode: 755
- file_mode: 755
- require:
- user: qb
replace file contents using Augeas
sshd_config:
augeas.change:
- context: /files/etc/ssh/sshd_config
- changes:
- set Port 8888
- set PasswordAuthentication yes
- set PubkeyAuthentication yes
copy directory from Master to minion
app_ta_nix_dir:
file.recurse:
- name: /opt/splunkforwarder/etc/apps/Splunk_TA_nix
- source: salt://{{ slspath }}/files/apps/Splunk_TA_nix
- makedirs: True
- user: splunk
- group: splunk
- file_mode: 0755
File Managed
try several files if the 1st one doesn't exist
monit_config:
file.managed:
- name: /etc/monit/monit.conf
- source:
- salt://{{ slspath }}/files/configs/host/{{ grains.id }}.j2
- salt://{{ slspath }}/files/configs/profile/{{ salt['pillar.get']('profile') }}.j2
- salt://{{ slspath }}/files/configs/default.j2
- template: jinja
- makedirs: True
- mode: 0600
- user: monit
- group: monit
Set user's password to 123123
salt '*' shadow.set_password user02
'$6$EYk3o52W$DaSUIfHpYMBkSShFYXdODyrHbmQlCNKFghNl9FZzZshUn240GCOn5szQ3piyBMtt/x4m.'
Generate a password
salt 'target' shadow.gen_password myP@ssword
$6$nTul6WP1$EJ6THWEYKgOuGjqSEhnv8ZcYET6z/sDsSB.YBoyImRWEoDjguvcUahnY3UuNtNpECVhwxsjWI6ucvCc1
Additional ways to generate your own password:
python3 -c "import crypt; print(crypt.crypt('yourpassword', '\$6\$SALTsalt\$'))"
openssl passwd -1
Add User
salt target user.add Joe
Remove User
salt '*' user.delete Joe remove=True force=True
Show all users on a target
salt target user.list_users
Info on all users on a target
salt target user.getent
Info on specific user
salt target user.info Joe
Add User to Group
salt target user.chgroups Joe Administrator,LocalAdmin append=True
// or
salt target group.adduser admins Joe
Remove User from Group
salt target group.deluser admins Joe
Show user's Groups
salt target user.list_groups Joe
Change User's Shell
salt '*' user.chshell user02 /bin/bash
get info on all groups
salt target group.getent
get info on a particular group
salt target group.info splunk
Delete group
salt target group.delete splunk
Highstate
salt '*' state.highstate
Deploy specific state
salt '*' state.apply webserver
Run multiple state executions on the same minion at once (by default Salt runs only one state job per minion at a time)
salt target state.sls yourState concurrent=True
show all states that are applied to a minion
salt minion1 state.show_lowstate --out json | jq -r '.[][].state' | sort -u
Requisites
https://docs.saltstack.com/en/latest/ref/states/requisites.html
unless
vim:
pkg.installed:
- unless:
- rpm -q vim-enhanced
- ls /usr/bin/vim
onlyif
set_RTC:
cmd.run:
- name: "/usr/bin/timedatectl set-local-rtc 0"
- onlyif: "/usr/bin/timedatectl | grep "RTC in local TZ" | grep yes"
require
bar:
pkg.installed:
- require:
- sls: foo
onchanges
extract_package:
archive.extracted:
- name: /usr/local/share/myapp
- source: /usr/local/share/myapp.tar.xz
- archive_format: tar
- onchanges:
- file: Deploy server package
watch
ntpd:
service.running:
- watch:
- file: /etc/ntp.conf
prereq
prereq allows for actions to be taken based on the expected results of a state that has not yet been executed. The state containing the prereq requisite is defined as the pre-requiring state. The state specified in the prereq statement is defined as the pre-required state.
graceful-down:
cmd.run:
- name: service apache graceful
- prereq:
- file: site-code
use
The use requisite is used to inherit the arguments passed in another id declaration. This is useful when many files need to have the same defaults.
/etc/foo.conf:
file.managed:
- source: salt://foo.conf
- template: jinja
- makedirs: True
- user: apache
- group: apache
- mode: 755
/etc/bar.conf:
file.managed:
- source: salt://bar.conf
- use:
- file: /etc/foo.conf
require_in
vim:
pkg.installed:
- require_in:
- file: /etc/vimrc
import YAML data into a state (same with import_json, import_text)
{% import_yaml 'formula/facl/files/configs/test.yaml' as myconfig %}
show_config:
  test.show_notification:
    - text: {{ myconfig }}
edit the Roster file, include node's IP address and user to use (do this on Master)
vi /etc/salt/roster
# Sample salt-ssh config file
mrxcloud1:
host: 104.131.102.230 # The IP addr or DNS hostname
user: fred # Remote executions will be executed as user fred
passwd: foobarbaz # The password to use for login, if omitted, keys are used
sudo: True # Whether to sudo to root, not enabled by default
Run command on node (salt-ssh -i)
salt-ssh -i mrxcloud1 cmd.run "uname -a"
To run SSH as another user (non-root)
copy /etc/salt directory to /home/user and change perms to user:group
should look like,
/home/user/salt/master
/home/user/salt/minion
/home/user/salt/minion.d
/home/user/salt/minion_uid
/home/user/salt/pki
/home/user/salt/var
/home/user/salt/roster
configure Roster file: /home/joe.shmo/salt/roster
edit /home/joe.shmo/salt/master
change the following paths
pidfile: /home/user/salt/var/run/salt-master.pid
log_file: /home/user/salt/var/log/salt/master
pki_dir: /home/user/salt/pki/master
cachedir: /home/user/salt/var/cache/salt/master
Add to the Roster file, the path to the user's private key
machine1:
host: machine1.company.com
user: joe.shmo
sudo: True
priv: /home/joe.shmo/.ssh/id_rsa
run command with -c option
salt-ssh -c ~/salt -i machine1 cmd.run "/sbin/service jira status"
apply a state file
salt-ssh '*' state.apply network
Salt Roster File - Ansible-syntax (configure Roster to have variables and Groups)
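a sketch, assuming the ansible roster module is used; the inventory file and hosts below are hypothetical:
# /etc/salt/hosts (Ansible INI inventory)
[web-servers]
nyweb1 ansible_host=10.0.0.11 ansible_user=fred
[db-servers]
nydb1 ansible_host=10.0.0.21
run with:
salt-ssh --roster=ansible --roster-file=/etc/salt/hosts 'nyweb*' test.ping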
Sample Pillar structure to create system users
Salt top file calls the Users state
/srv/salt/top.sls
base:
'*':
- common
- users
Pillar top file tells what pillars to load for what nodes
/srv/pillar/top.sls
base:
'*':
- users
Users Pillar contains actual data for users
/srv/pillar/users.sls
users:
spiderman:
uid: 1280
fullname: 'spider man'
shell: /bin/bash
ssh-keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAt1IFQP9xxx
black.hood:
uid: 1281
fullname: black hood
shell: '/bin/bash'
ssh-keys:
- ssh-rsa AADRN34zf12fdfd343434wAAAQEAwAAAQEA
supergirl:
uid: 1282
fullname: super girl!
shell: '/bin/bash'
ssh-keys:
- ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWRiUmFXjxrp4V
- ssh-rsa ABCDEF134343434343dfdfdf343111dgfdfdfdf
Salt Users state parses array and creates users,
/srv/salt/users.sls
{% for user, args in pillar.get('users', {}).items() %}
{{ user }}:
group.present:
- gid: {{ args['uid'] }}
user.present:
- fullname: {{ args['fullname'] }}
- uid: {{ args['uid'] }}
- gid: {{ args['uid'] }}
- shell: {{ args['shell'] }}
- home: /home/{{ user }}
{% endfor %}
Refresh pillars on all nodes
salt \* saltutil.refresh_pillar
Look at pillar data
salt \* pillar.items
get a Pillar value in a state file or Jinja file (and pass a default value if no pillar is found)
{{ salt['pillar.get']('role:name', 'default') }}
get Pillar value by passing a variable
{% for rt in salt['pillar.get']('network:routes:{0}:networks'.format(interface)) -%}
get nested pillar value (use a colon to reach a nested key)
salt nycweb01 pillar.get users:joe
get pillar data into another pillar file
pillar1.sls
/pillar/host1.sls
data: abc
apply pillar at runtime
salt target state.sls nginx pillar='{"version": 26}'
Master - Agent ports: 4505 (master to agent), 4506 (agent to master)
Get IP of a Minion
salt target network.ip_addrs
Ping from Minion
salt target network.ping someHostname
get all active TCP connections on a minion
salt target network.active_tcp
get ARP table
salt target network.arp
test port connectivity for a certain port
salt target network.connect www.google.com 80
get hardware (MAC) address for an interface
salt target network.hw_addr eth0
get inet address for an interface
salt target network.interface eth0
get all interfaces
salt target network.interfaces
For Loop
{% for usr in 'moe','larry','curly' %}
{{ usr }}:
group:
- present
user:
- present
- gid_from_name: True
- require:
- group: {{ usr }}
{% endfor %}
If Conditional
{% if var == 2 %}
Var is 2
{% elif var == 5 %}
var is 5
{% else %}
var is not 2
{% endif %}
Loop over a range (Jinja has no true while loop)
{% for number in range(3, 6) %}
{{ number }}
{% endfor %}
Comparisons
{% if 'Watermelon'.endswith('n') %}
It ends with N
{% endif %}
{% if varA == varB %}
{% if varA != varB %}
get shell command value from inside Jinja
{% set procs = salt['cmd.run']('ps aux') %}
disable leading whitespace (tab space) in a for loop
{% for server in servers %}
{{ server }}
{% endfor %}
results in:
server1
server2
to disable this, add
#jinja2: lstrip_blocks: True
to top of the template
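alternatively, Jinja's whitespace-control modifiers do the same per block:
{%- for server in servers %}
{{ server }}
{%- endfor %}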
Run a state only if a file doesn't exist,
{% if not salt['file.directory_exists']('/opt/q') %}
deploy_kdb:
file.managed:
- name: /opt/q.tar.gz
- source: salt://repo/q.tar.gz
extract_kdb:
archive.extracted:
- name: /opt/
- source: /opt/q.tar.gz
- user: kdb
- group: kdb
{% endif %}
Test for File
{% if not salt['file.file_exists']('/opt/file') %}
render parameter
{{ var_name }}
set a parameter
{% set fruit = 'apple' %}
Iterate dictionary (For Loop)
{% for name, app in applications.items() %}
{{ name }}
{{ app['version'] }}
{% endfor %}
sort a list
{% for vm in vcenter['vm_list']|sort %}
{{ vm }}
{% endfor %}
or by attribute or reverse
{% for vm in vcenter['vm_list']|sort(attribute='osname', reverse = True) %}
get total # of elements in list
{{ myList|length }}
If statement with AND & OR
{% if var is none and var2 == 'blah' or val3 == 'shmaa' %}
convert variable uppercase / lowercase
{{ somevar | upper }}
set a default value if value doesnt exist
{{ somevar or 'default message here' }}
match by regex
{% if grains.id | regex_match('nyc(.*)', ignorecase=True) %}
remove element from List
{% set myList = ["a", "b", "c"] %}
# remove B
{% set idx = myList.index("b") %}
{% do myList.pop(idx) %}
pass a parameter to a jinja template (from a parent jinja template)
define a macro in user.j2, then import and call it with arguments (see the sketch below)
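a minimal sketch using a Jinja macro (file and macro names are assumptions):
{# user.j2 #}
{% macro user_entry(name, shell='/bin/bash') %}
{{ name }}: {{ shell }}
{% endmacro %}
{# parent template #}
{% from 'user.j2' import user_entry %}
{{ user_entry('joe') }}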
difference
{{ [1, 2, 3] | difference([2, 3, 4]) | join(', ') }}
>> 1
avg, min, max, is_list,
{{ [1, 2, 3] | avg }}
generate random UID
{{ 'random' | uuid }}
date format
{{ 1457456400 | date_format }}
{{ 1457456400 | date_format('%d.%m.%Y %H:%M') }}
2016-03-08
08.03.2016 17:00
string to number
{{ '5' | to_num }}
run Salt execution module
{{ salt.cmd.run('whoami') }}
{{ salt.group.add('newgroup1') }}
regex match
{{ 'abcd' | regex_match('^(.*)BC(.*)$', ignorecase=True) }}
regex search
{{ 'muppet baby' | regex_search('pet(.*)', ignorecase=True) }}
>> baby
compare_lists, compare_dicts
{{ [1,2,3] | compare_lists([1,2,4]) }}
>> {'new': 4, 'old': 3}
list files in a directory
{{ '/etc/salt/' | list_files | join('\n') }}
escape Jinja syntax
{% raw %}
some text that contains jinja {% characters that need to be escaped
{% endraw %}
iterate a dictionary
{% set parent_dict = [{'A': 'val1', 'B': 'val2'}] %}
{% for item in parent_dict %}
{% for key, val in item.items() %}
{{ key }} {{ val }}
{% endfor %}
{% endfor %}
generate random password hash
python -c "import crypt; print(crypt.crypt('password', crypt.mksalt(crypt.METHOD_SHA512)))"