Disable "smart quotes" (スマート引用符), because it changes neutral quotes (" 0x22, ' 0x27) into curly quotes in an annoyingly "smart" way (https://programmer-life.work/apple/double-quotation-mac)
By default, Apple's Mail.app creates email in rich-text format, so an email with an attachment created by Mail.app gets "multipart/alternative" as its Content-Type. This Content-Type is NOT correct, and some mailers such as Mew cannot retrieve the email's attachment. The correct Content-Type is "multipart/mixed", which Mail.app uses when composing email in plain text.
See also: https://katiefloyd.com/blog/get-rid-of-inline-attachments-in-apple-mail
Only the terminal-command approach seems promising, but it is not working either: https://katiefloyd.com/blog/still-struggling-with-inline-attachments-in-apple-mail
Also, \ (backslash) should be input with Option+¥ or maybe fn+¥. To make the ¥ key input \ permanently, change the keyboard setting in System Preferences.
Development environment setup in Mac: http://qiita.com/takustaqu/items/a328baf15862faa086ee?utm_source=Qiita%E3%83%8B%E3%83%A5%E3%83%BC%E3%82%B9&utm_campaign=87d15d3f7d-Qiita_newsletter_227_09_28_2016&utm_medium=email&utm_term=0_e44feaa081-87d15d3f7d-33196753
Visual Studio Code for Mac: http://qiita.com/akiko-pusu/items/185f4fd8484ecd3b3243
backspace -> delete
delete -> fn + delete
Alt + F4 -> Command + Q (better to use Command + Q instead of clicking the red button)
Force quit: Shift + Control + Esc -> Command + Option + Esc
excel: F2 -> Ctrl + U
Rename: F2 -> Enter
Open: Enter -> Command + DownArrow OR Command + O
Finder, go to parent folder: Backspace -> Command + UpArrow
Refresh: F5 -> Command + r
Select all: Ctrl + a -> Command + a
Select multiple files in explorer / xtrafinder: Ctrl + Click -> Command + Click
Summon the dock: Control + Function + F3. Press F + Enter to open finder.
To show hidden file in Finder: Command + Shift + . (dot)
Finder, navigate directory: Arrow
Finder, enter directory: Command + DownArrow
Finder, move to parent directory: Command + UpArrow
Minimize window: Command + M
Hide and minimize all windows: Command + Option + H + M
Select, resume a minimized-window to focus: Use Command-Tab to cycle to the desired application and then, while still holding down Command, press the up or down arrow. This will show the application's windows in Expose. Select the desired window with the arrow keys and press Return to activate it.
See desktop: thumb+3fingers spread (reverse of opening launch pad)
Scroll: 2 fingers (see environment setting)
Drag: 3 fingers
Delete file on desktop (Right click, trash): Command + Delete
Mission control: 4 fingers up
Cursor movement: Ctrl + A, E, P, F, H, N, B, D
Screen shot: Command + Shift + 3 or 4. Esc to cancel.
https://support.apple.com/ja-jp/HT201361
To paste to clipboard: Ctrl + Command + Shift + 3 or 4
For copy, paste, cut, use command key instead of control key: Command + C, V, Z, Y (or Command + Shift + Z for redo)
To paste as text: Command + Shift + V
To move file/folder, first Command + C at the file/folder, then navigate to the destination and press Command + Option (Alt) + V <== Use XtraFinder to ease your pain...
In System Preferences, enable using Tab to move focus between controls in dialogs. Then activate the focused control with Space, NOT Tab.
http://www.macworld.com/article/1161022/copy_paste_file_paths.html
Command + Shift + G
To move file:
Command + C
Option + Command + V
PageUp, Down: fn+Up, Down arrow. Top of Page: fn+LeftArrow, Bottom of Page: fn+RightArrow
Or see better touch tool.
Show input source status in mac: https://pqrs.org/osx/ShowyEdge/
Automatically switch the input source per document (書類ごとに入力ソースを自動的に切り替える)
Switch between desktop screen:
Control + Right/Left arrow
http://osxdaily.com/2011/09/06/switch-between-desktops-spaces-faster-in-os-x-with-control-keys/
When working with Excel, it is good to trim the cells first. Put the following formula in cell B1 to trim cell A1, then copy as values.
TRIM( CLEAN( SUBSTITUTE( A1, CHAR(160), " " ) ) )
Copy formula down column without having to drag the corner: After selecting all range in the column, write the formula in the first cell on the top, and then Ctrl + Enter.
To delete rows with certain condition, use auto filter to show the rows to be deleted first, then delete those rows.
The "Remove Duplicates" function in the Data tab works in a case-INsensitive manner, so "LR", "Lr", "lR", and "lr" are all treated as duplicates.
To remove duplicates in a case-sensitive manner, use the algorithm below:
Insert a new column and fill it from 1 to the final row number (編集→データ→連続データの作成, i.e. Edit → Fill → Series).
Sort on two columns: the first sort key is the column to deduplicate, the second is the row-number column.
See the method here http://www.excelforum.com/excel-general/742505-removing-duplicates-in-a-case-sensitive-manner.html:
B1: =$A1
B2: =REPT($A2,SUMPRODUCT(--EXACT($A2,$A$1:$A1))=0), copied down
The values "left behind" in Col B are the unique values
Delete rows with empty B column.
Sort by number of row filled column.
Save as "Tabbed text file". Basically, save as Mac text format.
Select entire worksheet: Command + a
Filling down a column without dragging over every cell:
Double-click the lower-right corner handle.
Mac (classic) text files end lines with CR (\r, shown as ^M in less), Unix with LF (\n), and Windows with CR+LF (see http://www.westwind.com/reference/os-x/commandline/text-files.html#text-formats).
http://stackoverflow.com/questions/6373888/converting-newline-formatting-from-mac-to-windows
Don't forget to put an end-of-line at the end of a tabbed text file exported from Excel. Use "vi".
Change Mac EOL to Unix EOL. The original file is saved as mactext.txt.bak (below, some variants; -i alone overwrites in place):
perl -pi.bak -e 's/\r/\n/g' mactext.txt
perl -pi -e 's/\r/\n/g' mactext.txt
perl -pi -e 's/\t/;/g' mactext.txt
perl -pi -e 's/\r\n?/\n/g' macordostext.txt
Or, first install dos2unix with "brew install dos2unix"; then, to convert text pbcopy-ed from Excel:
pbpaste | mac2unix
pbpaste | mac2unix > pupu2 ; sed -i '' -e '$a\' pupu2  # (-i '' is Mac-only) add a newline to the final line (or, with vi: G A Return Esc dd, then ZZ to save).
pbpaste | mac2unix | awk '{print $0}' # automatically add new line.
pbpaste | mac2unix | python -c "import sys;[sys.stdout.write(line.strip() + '\n') for line in sys.stdin]" # strip the line
Or, to check the excel export and the 2* files for sanity (do not strip lines, to detect leading/trailing whitespace):
pbpaste | mac2unix | awk '{print $0}' | diff 22 -
See also:
http://unix.stackexchange.com/questions/31947/how-to-add-a-newline-to-the-end-of-a-file
https://wiki.python.org/moin/Powerful%20Python%20One-Liners
Checking number of cores in Mac (logical cpu is due to hyperthreading): sysctl hw.physicalcpu hw.logicalcpu
Checking number of cores in Ubuntu:
lscpu : All info
cat /proc/cpuinfo | grep processor | wc -l : Number of CPU
cat /proc/cpuinfo | grep 'core id' : Number of cores
nproc : number of cores
In bash, incremental backward search of history is done with C-r, control-r, ^-r
Control+r
To not insert command to history, add a space before the command.
By the way:
control+s, c-s, ^-s : incremental forward search of history
control+u, c-u, ^-u : clears the line BEFORE cursor position
control+k, c-k, ^-k : clears the line AFTER cursor position
control+y, c-y, ^-y : yank from the kill ring
control+_ , c-_ , ^-_ : undo last bash action
Meta-d : delete word, Meta-f : move one word
Meta-t : swap word before and after cursor
C-r, C-s
C-k, C-u
C-y, C-_
M-f, M-b, M-t
open .
Change dir to the previous directory: cd -
Usage: cd .. && whichdiff.pl tmp tmp2 && cd -
However, with "&&" above, if whichdiff.pl fails it will NOT return to the previous directory, so better to use ";" instead of "&&" in this case: cd .. ; whichdiff.pl tmp tmp2 ; cd -
Put a group of commands inside parentheses to preserve the current directory:
cd ..; diff a b : Change dir to parent, then do diff. The pwd become parent.
(cd ..; diff a b) : Temporarily change dir to parent, then do diff at parent. After finished pwd is still the same.
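A minimal sketch of this subshell behavior (the /tmp directory is illustrative):

```shell
# The parentheses spawn a subshell; the cd inside it cannot
# change the parent shell's working directory.
start=$(pwd)
(cd /tmp && echo "inside subshell: $(pwd)")
echo "back outside: $(pwd)"   # same directory as $start
```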
Add line numbers with awk (use print, not printf, so "%" in the input is safe and a newline is emitted):
awk '{print NR ";" $0}' filename > filenamenumbered
Remove lines which include NON-ASCII text:
perl -i.bak -ne 'print unless (/[^[:ascii:]]/)' file.txt
Before doing any processing below, add a newline to the end of the file first. Remove lines with an invalid 3rd column (";" as delimiter; x$3 is true only when $3 is non-blank), remove lines with a NON-ASCII 3rd column (use LC_CTYPE), then remove duplicates without sorting:
LC_CTYPE=C awk -F";" 'x$3 && $3!="," && $3!="\",\"" && $3 !~ /[^[:alnum:][:space:][:punct:]]/ && $3 !~ /----/' input.txt | awk -F";" '!seen[$3]++'
Show filename and line number when doing grep:
Use -H: grep -niH 'sampo' 0*
To exclude for example ~ file:
grep -niH --exclude="*~" 'sampo' 0* to_read*
To exclude tilde backup files from wildcard expansion:
shopt -s extglob
wc -l 0!(*~)   # note: 0*!(*~) would NOT exclude them, because !(*~) can match the empty string
shopt -u extglob
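A self-contained sketch of the extglob exclusion, using throwaway files in /tmp (directory and file names are illustrative). The pattern is `0!(*~)`: putting `*` before `!(*~)` would let `!(*~)` match the empty string, so tilde files would slip through.

```shell
mkdir -p /tmp/extglob_demo && cd /tmp/extglob_demo
touch 01notes 02notes 01notes~        # sample files, one tilde backup
shopt -s extglob
ls -1 0!(*~)                          # lists 01notes and 02notes, skips 01notes~
shopt -u extglob
```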
In BASH, overwrite existing file during redirection: >|. See http://unix.stackexchange.com/questions/45201/bash-what-does-do
pbcopy, pbpaste (copy or paste to or from pasteboard):
To copy without color (ls or grep), use the original ls or grep:
/bin/ls -1 | /usr/bin/grep HP | pbcopy
Usage of tee (http://linux.101hacks.com/unix/tee-command-examples/)
findwords_inc.py 'ch' 2* | tee pupu | idtxt2ph > pupu2 ; paste -d ':' pupu pupu2 >| pupu3 ; rm -f pupu pupu2
Or use bash Process Substitution: paste <(./prog1) <(./prog2): No space between < and (:
http://stackoverflow.com/questions/1569730/paste-without-temporary-files-in-unix
http://www.gnu.org/software/bash/manual/html_node/Process-Substitution.html
paste <(sed -n 34,79p file1) <(sed -n 34,79p file1 | idtxt2ph)
Process substitution replace <(cmd) with a temporary file that store the result of cmd:
https://linuxacademy.com/blog/linux/ten-things-i-wish-i-knew-earlier-about-the-linux-command-line-2/
To substitute multi-space to one-space:
testsuit/zhphchk.sh testsuit/temp.3.in | sed -n 's/ \+/ /gp' | diff <(sed -n 's/ \+/ /gp' testsuit/temp.3.out) -
bash -c "testsuit/zhphchk.sh testsuit/temp.3.in | sed -n 's/ \+/ /gp' | diff <(sed -n 's/ \+/ /gp' testsuit/temp.3.out | head -16) -" maybe better than the above.
Or in case the above command with "process substitution" is to be invoked with "make check" in Makefile, then edit Makefile as below:
timing:
bash -c "time testsuit/zhphchk.sh testsuit/temp.3.in | sed -n 's/ \+/ /gp' | diff <(sed -n 's/ \+/ /gp' testsuit/temp.3.out) -"
Or, to remove command repetition, substitute the command first to a bash variable:
tmp="sed -n 34,79p file"; paste <($tmp) <($tmp | idtxt2ph) ; unset tmp
yes | rm pupu*
cat 2[2-8]*; cat 2[3,5,7,8]*
Subtract lines (remove fileB's lines from fileA):
grep -Fxvf fileB fileA   # -F: fixed strings (no regex), -x: match whole line, -v: invert, -f: patterns from file
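For example, on two illustrative files in /tmp:

```shell
# Keep only the lines of fileA that do not appear verbatim in fileB.
printf 'apple\nbanana\ncherry\n' > /tmp/fileA
printf 'banana\n' > /tmp/fileB
grep -Fxvf /tmp/fileB /tmp/fileA   # prints apple and cherry
```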
Ignore blank lines (NF is 0, i.e. false) and comment lines in awk:
awk -F'\t' 'NF && !/^($|[[:space:]]*#)/ {print $2}' ~/tabseptext.txt
http://stackoverflow.com/questions/11267015/how-to-ignore-blank-lines-and-comment-lines-using-awk
Ignore blank and comment lines, then count rows with a non-blank 3rd column:
awk -F';' 'NF && !/^($|[[:space:]]*#)/ && !/^\// {print $3}' id_dictsrc | grep -c '[^[:space:]]'
To compare line by line, use comm. However, the inputs must be sorted first:
cat -n file1 > nfile1
cat -n file2 > nfile2 ; nfile1 and nfile2 are left in sorted order by cat's numbering
comm -2 -3 nfile1 nfile2 > file3
Batch processing of sox:
brew install lame
brew install sox (install in this order to enable mp3 support)
for i in *.wav; do echo $i; sox $i ${i%%.wav}.raw; done
http://stackoverflow.com/questions/27264156/sox-batch-process-under-debian
To remove from start instead of end(%%.wav), use #:
for i in v13_*.txt; do mv $i vxx_${i#v13_}; done
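The `%%`/`#` parameter expansions can be checked on a throwaway variable (the filename is illustrative):

```shell
f="v13_sample.txt"
echo "${f%%.txt}"      # v13_sample      - suffix removed (%% strips from the end)
echo "${f#v13_}"       # sample.txt      - prefix removed (# strips from the start)
echo "vxx_${f#v13_}"   # vxx_sample.txt  - the rename target built in the loop above
```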
Sox to trim and concatenate:
$ sox "|sox sale.wav -p trim 0.0 =0.788979" "|sox sale.hts.wav -p trim 0.97 -0.0" tmp.wav
sox for Time Scale Modification
play ~/tmp/ff011211.wav tempo -s 0.5
-s for speech
Time Scale Modification and pitch modification with WSOLA. Implementation library: SoundTouch. Application SoundStretch:
Vaja7 batch processing:
lpf.pdf must be in the current execution directory.
./Vaja7 -tts "สวัสดีฉันชื่อนก\ Hi\ I\ am\ nok\ I\ can\ say\ anything." -o test.wav
$ echo "สวัสดีฉันชื่อนก\\ Hi\\ I\\ am\\ nok\\ I\\ can\\ say\\ anything." | xargs ./Vaja7 -o ../testcase/test.wav -tts
$ tail -n +1 ../testcase/*.txt
$ for txtf in ../testcase/test?.txt; do echo $txtf; cat $txtf | xargs ./Vaja7 -o ${txtf%%.txt}.wav -tts ; done
To prompt for user input at each iteration of the for loop:
$ for wav in test*.wav; do echo $wav; play $wav; read -p "Press return" press; done
Perl one-liner for checking clipping:
th=30e3
sox tmp.wav -t raw - | perl -e 'undef($/);foreach $s(unpack("s*",<>)){if('$th'<abs($s)){print "1\n";exit}}print "0\n"'
for i in *.wav; do echo $i; sox $i -t raw - | perl -e 'undef($/);foreach $s(unpack("s*",<>)){if('$th'<abs($s)){print "NG\n\n";exit}}'; done
The above is for combination with a bash for loop.
A better way is to use sox:
sox input.wav -n stat
If the reported volume adjustment is greater than 1.0, no clipping occurred. If it is exactly 1.0, clipping occurred.
Find all files with wav extension below a directory:
find directory -type f -name "*.wav"
Take the filename only from fullpath: find dir04* -type f -name \*.wav | sed 's/.*\///'
awk '{printf("26_%05d.wav\t%s\n", NR, $0)}' srctxt/26_narration_790
grep -f patternfile filetogrep
How to comment out lines containing certain strings using sed:
1. Write the string-substitution patterns to a file, retakeid.sed:
awk '{printf("/%s/ s/^#*/# /;\n", $1)}' retakeid >| retakeid.sed
http://superuser.com/questions/719073/how-can-sed-get-patterns-from-a-file
2. The sed command loads the patterns from the file; matching lines in read_txt/to_read.txt.fr are commented out. Note LANG=C:
LANG=C sed -f retakeid.sed read_txt/to_read.txt.fr
http://stackoverflow.com/questions/19242275/re-error-illegal-byte-sequence-on-mac-os-x
Play wav/raw file from terminal:
Play (part of brew install sox): play -t raw -r 44100 -e signed -b 16 -c 1 fr010000.pcm
sox -r 48k -e signed -b 16 -c 1 075.raw -t wav - | play -t wav -
Play wav file that include "internis" in the script:
grep -i internis to_read.txt => 27_00383.wav includes internis
(or afplay) play `find ~/wavdir -type f -name \*.wav | grep 27_00383`
Check wav file information:
soxi $(\ls *.wav | head -3)
http://stackoverflow.com/questions/15691977/why-start-a-shell-command-with-a-backslash
Extract range of lines:
sed -n 3,10p file
http://stackoverflow.com/questions/83329/how-can-i-extract-a-range-of-lines-from-a-text-file-on-unix
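For instance, on numbered input generated with seq:

```shell
# sed -n RANGEp prints only the given line range.
seq 1 20 | sed -n 3,10p   # prints 3 through 10, one number per line
```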
Print lines between 2 patterns:
awk '/#### REC 2016\/04\/02/,/#### REC YYYY/' file
http://www.shellhacks.com/en/Using-SED-and-AWK-to-Print-Lines-Between-Two-Patterns
Print from line number 5 to end of file: awk 'NR>=5' file
Print from line number 5 to 9: awk 'NR>=5 && NR<=9' file
Move multiple files to multiple folders based on name of files:
http://stackoverflow.com/questions/18622907/only-mkdir-if-it-does-not-exist
for f in *.pcm; do mkdir ${f:2:4} 2>/dev/null; mv $f ${f:2:4}/; done
${f:2:4} means substring from f[2] with LENGTH 4.
2>/dev/null means redirect stderr to /dev/null
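A quick check of the substring expansion on an illustrative filename:

```shell
f="ab1234_take1.pcm"
echo "${f:2:4}"   # 1234 - substring starting at index 2 (0-based), length 4
```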
echo "nama saya /, aaa , juga bbb/, deh" | sed 's/\/,/ /g' | idtxt2ph
for i in *.txt; do echo ${i/pattern/string}; done
This will replace pattern with string.
Using screen to re-attach to a session when connection is down:
http://www.ibm.com/developerworks/aix/library/au-gnu_screen/
Type screen and attach to this screen. Do work in this screen.
To detach: C-a d
To attach: screen -x
To list: screen -ls
Typical usage: the connection drops during a screen session; re-attach with screen -x.
To move to the beginning of the line while using screen in bash, use:
Control-a (release and then) a
Or use byobu:
To create a session: byobu
Detach from session: F6
Reattach to the session: byobu
byobu ls to check available session
When byobu is first used, select C-a to be like emacs, not Screen command mode!!!!
Byobu ultimate usage:
On the remote host, type byobu-enable:
By doing this, the next login to this remote machine will automatically start byobu.
To detach WITHOUT logging out, press Shift + F6 (this prevents logging out completely). Use a non-byobu-managed terminal to run things such as byobu-enable, byobu-disable, byobu-enable-prompt, etc.
Shift + F6
byobu ls
byobu
byobu kill-session (https://qiita.com/miyashiiii/items/90ba726dd331ae103b7b)
Exit
To detach and log out, press F6. (Do this; the session is preserved. Type exit to terminate the session completely.)
exit will terminate the login / ssh session.
byobu-disable
Multiple window:
Press F2. Exit to kill window.
Press F3, F4 to move between windows.
F7 lets you view scrollback history in the current window.
Multiple pane:
Press Shift + F2 (F12 + | (pipe)). Split horizontally.
Exit to kill pane
Shift + Arrow to move between pane.
F7 lets you view scrollback history in the current window.
Press Ctrl + F2 to split pane vertically, but NOT working in El Capitan. So, F12 + %, to split vertically.
http://stackoverflow.com/questions/26180096/os-x-byobu-vertical-split
byobu tmux.conf: https://askubuntu.com/questions/830484/how-to-start-tmux-with-several-panes-open-at-the-same-time
To open byobu with predefined layout "from inside active byobu":
~/.byobu/.tmux.conf:
new # <- seems to always be needed... only needed when byobu is NOT started via the docker command-line argument.
neww
splitw -h
# splitw -v
tmux source ~/.byobu/.tmux.conf
Log in with the default byobu layout. A new session (new) and a new window (neww) are provided by default.
~/.byobu/.tmux.conf is BAD!
echo '2' | byobu-ctrl-a
In byobu, there is a status bar that shows available updates:
To check available update: /usr/lib/update-notifier/apt-check --human-readable
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt autoremove
Run at the lowest priority: nice -n 19 cmd arg1 &   (the obsolete form nice -20 means the same: increase niceness by 20, clamped to 19). Without &: nice -n 19 cmd arg1
for f in *.txt; do
echo == $f
nice -n 19 cmd.sh ${f%.txt} ------- &   (better not to push to the background)
done
ps -fl or ps -fl -C "cmd.sh"
ps auxww
ps -elf | egrep 'username|NI'
top
http://stackoverflow.com/questions/8518750/to-show-only-file-name-without-the-entire-directory-path
List the existing filename in ../listenv10/*.wav: /bin/ls -1 ../listenv10/*.wav | xargs -n 1 basename
Check any difference in existing files, i.e., whether both directories contain the same set of files:
/bin/ls -1 *.wav | diff <(/bin/ls -1 ../listenv10/*.wav | xargs -n 1 basename) -
In the above we use /bin/ls, because an aliased ls may emit COLOR codes.
Remove all between first and last "/", and also the first and last "/":
echo "Nama /root/is/here/hi hello" | sed 's/\/.*\///'
Awesome awk usage:
awk '($1=="N"||$1=="n"){pre="~V";pst="~_BND_";
if($3~/^[aeiou@]$/){pre="V"}
if($4=="_WB_"||$4~/pau/){pst="_BND_"}
a[pre,$1,pst]++}
END{for(i in a){i2=i;gsub(/\034/," ",i2);print i2,a[i]}}' \
temp.ph_dur_ \
| sort
awk '($1=="N"||$1=="n")&&$3~/^[aeiou@]$/&&($4=="_WB_"||$4~/pau/){
a[$3,$1]++}
END{for(i in a){i2=i;sub(/\034/," ",i2);print i2,a[i]}}' \
temp.ph_dur_ \
| sort
For puzzle:
echo MARTABAK MANIS BANGKA | fold -w 1 | sort | paste -s -d'\0' | pbcopy
echo MARTABAK MANIS BANGKA | grep -o . | sort | paste -s -d'\0'
echo "c b a" | grep -o . | sort | paste -s -d'\0' | diff <(echo "b a c" | fold -w 1 | sort | paste -s -d'\0') -
Take diff only for a specific file:
diff -x '*.foo' -x '*.bar' -x '*.baz' /destination/dir/1 /destination/dir/2
! (bang): exclude '*.wav'; -X: read exclude patterns from a file (here "-", stdin).
find a b -type f ! -name '*.wav' -printf '%f\n' | diff -r a b -X -
Non-GNU diff (no printf, mac): find a b -type f ! -name '*.wav' -print | sed -e 's|.*/||' | diff -X - -r a b
For example, only param files need to be diff-ed. However, the directory details need to be 'cut':
for pf in ~/longdirectoryname/*.param; do echo ${pf:(-12)}; cut -d '/' -f 3- $pf > tmp1/${pf:(-12)}; done
for pf in *.param; do echo $pf; cut -d '/' -f 7- $pf > tmp2/$pf; done
whichdiff.pl tmp1 tmp2
Regarding cut: http://stackoverflow.com/questions/971879/what-is-a-unix-command-for-deleting-the-first-n-characters-of-a-line
Regarding accessing characters in bash string variable:
http://stackoverflow.com/questions/19858600/bash-accessing-last-x-characters-of-string
LINE="/string/to/cut.txt", LINE=${LINE%/*} http://stackoverflow.com/questions/4563060/how-to-cut-the-last-field-from-a-shell-string
$ foo=1:2:3:4:5; $ echo ${foo##*:} http://stackoverflow.com/questions/3162385/how-to-split-a-string-in-shell-and-get-the-last-field
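Both idioms can be verified on throwaway variables:

```shell
LINE="/string/to/cut.txt"
echo "${LINE%/*}"    # /string/to - %/* drops the shortest /-suffix (the last field)
foo=1:2:3:4:5
echo "${foo##*:}"    # 5 - ##*: drops the longest *:-prefix, keeping the last field
```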
The ultimate check with process substitution:
$ diff <(cat ~/long/directory/name/*.param | cut -d '/' -f 3-) <(cat *.param | cut -d '/' -f 7-)
Split a file into files containing one line each of the input file:
split -l 1 tts_test2.txt tst_   (tst_ is the output filename prefix)
for i in tst_*; do echo $i; mv $i $i.txt; done
Wav file difference:
for wavf in *.wav; do ~/pywork/wavdiff.py $wavf ~/work/test/listen/$wavf >> diffres.txt; done
Text corpus operation:
Check corpus size (in hour):
$ f=v12.temp.ph_dur_
$ echo $f
v12.temp.ph_dur_
$ sort -g $f | awk '{s+=$7} END {print s/60/60}'
Check number of corpus sentences:
awk '{a[$8]++} END { for(i in a){c++} print c }' $f
Check from the script to_read:
$ awk -F'\t' 'NF && !/^($|[[:space:]]*#)/ && !/^\// {print $1}' new_to_read.txt.fr | grep -c '[^[:space:]]'
$ awk -F'\t' 'NF && !/^($|[[:space:]]*#)/ && !/^\// {print $2}' new_to_read.txt.fr | diff testsuit/to_read2 -
cat testsuit/input.txt | ./progname | colordiff testsuit/outputref.txt - | less -R
cat testsuit/input.txt | ./progname | git diff --color-words testsuit/outputref.txt - | less -R
For svn diff with color:
vi ~/.subversion/config, then diff-cmd = colordiff. That's it.
or: svn diff changedfile | {colordiff, view -, vim -R -}
Check directory utilization:
du -sk (KB), du -sb (bytes), du -s -BM (MB) (-s means summarize)
du -k --max-depth=2 | sort -k1 -nr | cut -f1
To cat with filename all the file with extension rb in app directory:
find app -type f -name "*.rb" -print0 | xargs -0 tail -n +1 | less
find app -type f -name "*.rb" -print0 | xargs -0 grep -nH "" | less
Prepend / insert before the first line of text. USE GSED (GNU-sed)
https://unix.stackexchange.com/questions/99350/how-to-insert-text-before-the-first-line-of-a-file
gsed -i.rm0 '1 i\# -*- coding: utf-8 -*-' filesaya
The above does in-place prepending; remove the *.rm0 backups later.
How to prepend / insert before the first line of text, below app directory, with a specific file extension.
find app -type f -name "*.rb" -print0 | xargs -0 gsed -i.rm0 '1 i\# -*- coding: utf-8 -*-'
Check diff with process substitution: diff <(find app -type f -name "*.rb" -print0 | xargs -0 cat) <(find app -type f -name "*.rm0" -print0 | xargs -0 cat)
Check: find app -type f -name "*.rb" -print0 | xargs -0 grep -niH utf
Remove *.rm0: find app -type f -name "*.rm0" -print0 | xargs -0 rm -f
NOW, How to append! :
https://unix.stackexchange.com/questions/20573/sed-insert-text-after-the-last-line
gsed -i.rm0 -e "\$a# End-of-File" filesaya
find app -type f -name "*.rb" -print0 | xargs -0 gsed -i.rm0 -e "\$a# End-of-File"
Execute each line in a text file:
The file contents are something like:
file1dst.wav;file1src.wav;
file2dst.wav;file2src.wav;
...
for LINE in `cat one_randtable.txt `; do IFS=';' read -r -a array <<< "$LINE"; diff "${array[0]}" "${array[1]}"; done
Average with awk
awk 'BEGIN {sum=0; n=0} { sum += $2; n++ } END { if (n > 0) print sum / n; }'
awk -F'\t' '{sum+=$5} END {print sum}' report.txt
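The averaging one-liner, run on inline sample data (the numbers are illustrative):

```shell
printf 'a 10\nb 20\nc 30\n' |
awk 'BEGIN {sum=0; n=0} { sum += $2; n++ } END { if (n > 0) print sum / n; }'
# prints 20
```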
Find maximum element of column 2:
awk -F'\t' 'max<=$2 || NR==1{ max=$2; data=$0 } END{ print data }' report2_10k.txt
print out record with column value > 1000:
awk -F'\t' '$3>1000 {print $0}' report2b_40k.txt
Awk partly match:
awk -F";" '$2~/v13_/ && $4==1 {print}' file_result.txt | wc -l
Sort by second column. Delimiter is ';'
/usr/bin/egrep 'ot006|ot013|ot015|ot022|ot024|ot027|ot029|ot030|ot032|ot034|ot087|ot046|0t048|ot053|ot054|ot055|ot062|ot079|ot083|ot085|ot089|ot097|ot048' one_randtable.txt | sort -t';' -k2 | pbcopy
Batch processing with for (txt2wav):
for f in *.txt; do echo == $f; w=0.5; echo $w; tgt_FEAT=$w tgt_POW=0 tgt_CEP=0 tgt_LF0=1 FIXED_CAND_NUM_THRESHOLD=100 nice -20 /home/user/public_html/multi_lang_tts/idn_tts/tts/txt2wav_14.sh $f; done
Randomly pick up lines from text file. Then use Ruby to handle multibyte text:
shuf -n 40 temp.txt | ruby -Ke -ape 'sub(/@@..*/,"")'
hexdump -C
Change hexadecimal input to decimal: echo "ibase=16; F" | bc
Change decimal to hexadecimal output: echo "obase=16; 15" | bc
Using printf shell built-in command to convert hexadecimal input to decimal: printf "%d\n" 0xf
Using printf shell built-in command to convert decimal to hexadecimal input: printf "%x\n" 15
Unfortunately, printf cannot be used for binary; use bc instead.
How to add UTF-8 BOM to pupu.txt:
https://stackoverflow.com/questions/3127436/adding-bom-to-utf-8-files
printf '\xEF\xBB\xBF' > with_bom.txt
cat pupu.txt >> with_bom.txt
To remove newline character: cat no_bom.txt | tr -d '\n' > pupu.txt .
Check with file pupu.txt .
bash for in done, to show which files with the same name differ in directory A and B:
cd A
for fpy in *.py; do diff -q $fpy ~/B/$fpy; done
for fpy in movie*.py; do diff -q $fpy ~/B/$fpy; done
for fpy in aaa.py bbb.py ccc.py; do echo $fpy; diff -q $fpy ~/B/$fpy; done
Sort report10pct.txt in decreasing order by column 2 (the file is tab '\t' delimited). Then show the first 10 lines; awk '{print $0}' prints all columns (use $2 to print only column 2):
sort -t$'\t' -k2 -nr report10pct.txt | head -10 | awk '{print $0}'
Redirect the time command's stderr output to a file. Be careful: the spaces inside the braces are required:
{ time /usr/bin/python3 $tilde/sh/tsmhalfer.py $ff.g.wav ; } >> /work/log/tsm5558 2>&1
Randomly pick 20 files from a directory:
ls /adirectory/ | \grep wav | sort -R | tail -20 | while read file; do cp /adirectory/$file $file; done
Other?
Assume file with multiline record as below:
% less tsmlog
real 0m0.174s
user 0m0.150s
sys 0m0.012s
Duration : 00:00:01.69 = 37176 samples ~ 126.449 CDDA sectors
real 0m0.176s
user 0m0.156s
sys 0m0.008s
Duration : 00:00:01.69 = 37176 samples ~ 126.449 CDDA sectors
real 0m0.175s
user 0m0.143s
sys 0m0.020s
Duration : 00:00:01.69 = 37176 samples ~ 126.449 CDDA sectors
real 0m0.169s
user 0m0.140s
sys 0m0.020s
Duration : 00:00:01.69 = 37176 samples ~ 126.449 CDDA sectors
The code to parse multiline record is as below:
% awk '
BEGIN{RS="\n\n";FS="\n"}
# BEGIN{RS="";FS="\n"}
{for(i=1;i<=NF;i++){
if($i~/real/){a0=$i}
if($i~/Duration/){print a0,$i}}}
' tsmlog
* The above means: for each field separated by "\n" (i.e. `real 0m0.169s`, `user 0m0.140s`, etc.), do the 2 if checks.
* `$i~/real/` means that field `$i` match `/real/`
* `BEGIN{RS="\n\n";FS="\n"}` can be replaced with `BEGIN { RS = "" ; FS = "\n" }`, where RS == "" means records are separated by runs of blank lines.
Another method is to use pattern matching in awk, as below:
awk '/^real/{a0=$0}/Duration/{print a0,$0}' tsmlog
* When a line matching /real/ is found, $0 (the whole line) is saved into a0.
* Awk rules are like `/pattern/{action}`. If the action is omitted, the default is `{print $0}`.
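The pattern-action one-liner can be tried on a two-line inline sample shaped like tsmlog:

```shell
# Save the "real ..." line, then print it next to the following "Duration" line.
printf 'real\t0m0.174s\nDuration : 00:00:01.69 = 37176 samples\n' |
awk '/^real/{a0=$0}/Duration/{print a0,$0}'
# prints the real line and the Duration line joined on one output line
```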
AirDrop from iOS to macOS: turn Bluetooth and iCloud on on both devices. On the sender, AirDrop may be OFF. On the receiver, AirDrop should be set to receive from "Contacts Only" or "Everyone".
When printing to PDF, preview the PDF using "Preview.app". In Preview.app, do Export (NOT "Export as PDF") and set the Quartz filter to "Reduce File Size". This yields a reduced-size PDF.
Slow PDF in Preview.app? Preview it with Quick Look from the terminal instead:
qlmanage -p filename
Binary file diff:
http://superuser.com/questions/125376/how-do-i-compare-binary-files-in-linux
xxd b1 > b1.hex; xxd b2 > b2.hex; diff b1.hex b2.hex; vimdiff b1.hex b2.hex
* XtraFinder: double click the tab part to get double pane
NOT usable in El Capitan
Commander One: http://mac.eltima.com/file-manager.html, but XtraFinder was MUCH MUCH BETTER.
* Karabiner:
- Disable Ctrl + Space from launching Spotlight (use Mac's system setting).
- Karabiner setting:
- The private.xml (In VIRTUALMACHINE, VirtualBox, do not change Control+c etc to Command+c):
<?xml version="1.0"?>
<root>
<item>
<name>ivans</name>
<item>
<name>Control+O to Command+Space</name>
<identifier>private.control_o_to_command_space</identifier>
<autogen>__KeyToKey__ KeyCode::O, ModifierFlag::CONTROL_L,
KeyCode::SPACE, ModifierFlag::COMMAND_L</autogen>
</item>
<item>
<name>Control+zxscv to Command+zxscv NOT for EMACS, TERMINAL, VIRTUALMACHINE</name>
<identifier>private.control_zxscv_to_command_zxscv</identifier>
<not>TERMINAL, EMACS, VIRTUALMACHINE</not>
<autogen>__KeyToKey__ KeyCode::C, ModifierFlag::CONTROL_L,
KeyCode::C, ModifierFlag::COMMAND_L</autogen>
<autogen>__KeyToKey__ KeyCode::X, ModifierFlag::CONTROL_L,
KeyCode::X, ModifierFlag::COMMAND_L</autogen>
<autogen>__KeyToKey__ KeyCode::V, ModifierFlag::CONTROL_L,
KeyCode::V, ModifierFlag::COMMAND_L</autogen>
<autogen>__KeyToKey__ KeyCode::S, ModifierFlag::CONTROL_L,
KeyCode::S, ModifierFlag::COMMAND_L</autogen>
<autogen>__KeyToKey__ KeyCode::Z, ModifierFlag::CONTROL_L,
KeyCode::Z, ModifierFlag::COMMAND_L</autogen>
</item>
<item>
<name>Control+y to Command+shift+z NOT for EMACS, TERMINAL, VIRTUALMACHINE, MSOffice</name>
<identifier>private.control_y_to_command_shift_z</identifier>
<not>TERMINAL, EMACS, VIRTUALMACHINE, EXCEL, POWERPOINT, WORD</not>
<autogen>__KeyToKey__ KeyCode::Y, ModifierFlag::CONTROL_L,
KeyCode::Z, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
</item>
<item>
<name>Control+y to Command+y for MSOffice</name>
<identifier>private.control_y_to_command_y</identifier>
<only>EXCEL, POWERPOINT, WORD</only>
<autogen>__KeyToKey__ KeyCode::Y, ModifierFlag::CONTROL_L,
KeyCode::Y, ModifierFlag::COMMAND_L</autogen>
</item>
<replacementdef>
<replacementname>EMACS_MODE_MARKSET_EXTRA</replacementname>
<replacementvalue>
<![CDATA[
<autogen>
__KeyToKey__
KeyCode::C, MODIFIERFLAG_EITHER_LEFT_OR_RIGHT_CONTROL | ModifierFlag::SHIFT_L | ModifierFlag::NONE,
KeyCode::VK_LOCK_SHIFT_L_FORCE_OFF,
KeyCode::C, MODIFIERFLAG_EITHER_LEFT_OR_RIGHT_COMMAND,
KeyCode::VK_CONFIG_FORCE_OFF_notsave_emacsmode_ex_controlSpace_core
</autogen>
<autogen>
__KeyToKey__
KeyCode::X, MODIFIERFLAG_EITHER_LEFT_OR_RIGHT_CONTROL | ModifierFlag::SHIFT_L | ModifierFlag::NONE,
KeyCode::VK_LOCK_SHIFT_L_FORCE_OFF,
KeyCode::X, MODIFIERFLAG_EITHER_LEFT_OR_RIGHT_COMMAND,
KeyCode::VK_CONFIG_FORCE_OFF_notsave_emacsmode_ex_controlSpace_core
</autogen>
<autogen>
__KeyToKey__
KeyCode::D, MODIFIERFLAG_EITHER_LEFT_OR_RIGHT_CONTROL | ModifierFlag::SHIFT_L | ModifierFlag::NONE,
KeyCode::VK_LOCK_SHIFT_L_FORCE_OFF,
KeyCode::X, MODIFIERFLAG_EITHER_LEFT_OR_RIGHT_COMMAND,
KeyCode::VK_CONFIG_FORCE_OFF_notsave_emacsmode_ex_controlSpace_core
</autogen>
]]>
</replacementvalue>
</replacementdef>
<item>
<name>Pass through control-{x,c} as command-{x,c} in MarkSet</name>
</item>
</item>
</root>
- Command + Space is actually an eisuu <-> kana toggle.
* mi editor
* Coccinellida (port forwarder)
* iTerm2
https://codeiq.jp/magazine/2014/01/5143/
As written above, install iTerm2 and always do update.
How to show japanese in iTerm2?
Command + Shift + D: Double vertical pane
.bashrc
Create .bash_profile which contains source ~/.bashrc
script (use rsync for machome2db (bunbackup replacement) )
set the cursor for git branch.
xcode
xcode-select --install
http://www.moncefbelyamani.com/how-to-install-xcode-homebrew-git-rvm-ruby-on-mac/
Install git here also
autocrlf is input, as in unix
SourceTree git client. SmartGitHg git client.
For external diff using opendiff (The command line of FileMerge): https://answers.atlassian.com/questions/35298/external-diff-tool-filemerge-dont-start-when-i-press-the-diff-externe-button
DiffMerge is good, but the font is too small.
To enable git fetch using .ssh/config as in: git clone host.ext:/home/git/gitsrv/rep0:
http://smartgit.3668570.n2.nabble.com/Support-for-ssh-config-on-Windows-td7379578.html
SmartGit -> Preferences (環境設定) -> Commands -> Authentication -> Use system SSH client
http://www.cse.kyoto-su.ac.jp/~oomoto/lecture/program/tips/Xcode_install/
Install Homebrew (a package manager for Mac; see also MacPorts).
Always run "brew update" before installing.
brew install espeak, lv (japanese less), coreutils (to use dircolors, change the .bashrc also)
For espeak, add chinese dictionary here:http://espeak.sourceforge.net/data/
http://www.pyimagesearch.com/2015/04/27/installing-boost-and-boost-python-on-osx-with-homebrew/
brew install boost
brew install {rbenv, bash, autossh, git, svn, colordiff, gawk, translate-shell, libtool, tree, iproute2mac, bash-completion, nmap, jq (to look at JSON query)}, lesspipe
To read multiple file with less:
less -N *.cpp
:n (colon and n) to go to next file, :p to go to previous file
brew cask install {osxfuse, vagrant, docker}: Docker Desktop is installed with brew cask install docker.
colorful ls:
brew install bash-completion
Add below to ~/.bash_profile
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
For docker:
etc=/Applications/Docker.app/Contents/Resources/etc
ln -s $etc/docker.bash-completion $(brew --prefix)/etc/bash_completion.d/docker
ln -s $etc/docker-machine.bash-completion $(brew --prefix)/etc/bash_completion.d/docker-machine
ln -s $etc/docker-compose.bash-completion $(brew --prefix)/etc/bash_completion.d/docker-compose
For kubernetes:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion
$ kubectl completion bash > kubectl
$ sudo cp kubectl /usr/local/etc/bash_completion.d/
To check the packages and the version installed by brew:
brew list
brew info boost
brew cask list
brew cask info docker
Update/upgrade procedure. When Xcode is updated through the App Store, do the following:
xcode-select --install, to upgrade the CLT (command line tools)
brew update
brew upgrade
brew doctor
brew list
brew info gcc@4.9 (the macos requirement is <= 10.13, but mine is 10.14)
brew cask info docker, brew cask info vagrant
Check available formulae here: https://formulae.brew.sh/formula/
brew cask (homebrew-cask)
A very convenient one-line command to install GUI applications, so there is no need to download a dmg and drag it to /Applications.
https://qiita.com/tsunemiso/items/9d8cd616ae72572c77d9
brew cask (will install homebrew-cask if NOT installed already)
brew -v (confirm that homebrew-cask is installed or not)
brew cask list (list GUI application that already installed using brew cask)
brew cask upgrade
Will be installed in ~/Applications or /usr/local/Caskroom
By default, g++ and gcc are the clang versions from Xcode. Install the NON-clang (GNU) g++ and gcc (g++-4.7.2 etc.) separately:
* Anaconda python.
* Audacity. Import audio, then play solo for each track.
* Install racket (scheme) for mac
* Emacs26.2
% ll ~/bin/emacs
-rwxr-xr-x 1 ~/bin/emacs*
cat ~/bin/emacs:
#!/bin/sh
/Applications/Emacs.app/Contents/MacOS/Emacs "$@"
Add packages with M-x list-packages. Now only undo-tree is installed by this. Others are in .emacs.d/formac.
Mzscheme in .emacs. Use quack to set the default mzscheme, below Racket/bin, just as in windows. Be careful with directory with space.
Bigger font? http://stackoverflow.com/questions/4821984/emacs-osx-default-font-setting-does-not-persist
To specify individual file coding system, use file local variable:
For compiled c-source or Make file, in the FIRST LINE, write // -*- coding: utf-8 -*- or # -*- coding: utf-8 -*-
For a script to be passed to the interpreter, in the SECOND LINE, write # -*- coding: utf-8 -*-
(line 1) #!/usr/bin/ruby -ap
(line 2) # -*- coding: utf-8 -*-
To search using regexp:
C-M-s (control + alt + s)
At beginning of line: ^ba
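The same anchor works outside Emacs too; a quick grep illustration of ^ matching only at line start:

```shell
# ^ba matches "bar" and "baz" but not "foobar".
printf 'bar\nfoobar\nbaz\n' | grep '^ba'
# prints:
# bar
# baz
```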
Emacs php-mode:
http://qiita.com/kyanagimoto/items/8d3c81ae806f74bfae1b
(require 'php-mode)
global-auto-complete-mode
auto-mode-alist
mode-line-cleaner-alist
Emacs web-mode:
Covering all php, html editing mode.
http://web-mode.org/, Install with melpa: M-x package-list-packages.
global-auto-complete-mode
Emacs yaml-mode:
M-x package-install yaml-mode
Add below to .emacs:
(require 'yaml-mode)
(add-to-list 'auto-mode-alist '("\\.ya?ml$" . yaml-mode))
(define-key yaml-mode-map "\C-m" 'newline-and-indent)
Emacs dockerfile-mode
M-x package-install dockerfile-mode
Add below to .emacs:
(require 'dockerfile-mode)
(add-to-list 'auto-mode-alist '("Dockerfile\\'" . dockerfile-mode))
Valid for auto-complete: perl-mode cperl-mode web-mode php-mode sql-mode vbnet-mode c++-mode c-mode java-mode text-mode python-mode octave-mod css-mode actionscript-mode sml-mode markdown-mode yaml-mode dockerfile-mode
multiple-cursors, undo-tree, web-mode
anzu for showing hit occurrence count during incremental search (isearch)
M-x package-install RET anzu
Add below to .emacs:
(require 'anzu)
(global-anzu-mode +1)
(set-face-attribute 'anzu-mode-line nil
:foreground "red" :weight 'bold)
(custom-set-variables
'(anzu-mode-lighter "")
'(anzu-deactivate-region t)
'(anzu-search-threshold 1000)
'(anzu-replace-threshold 50)
'(anzu-replace-to-string-separator " => "))
(global-set-key [remap query-replace] 'anzu-query-replace)
(global-set-key [remap query-replace-regexp] 'anzu-query-replace-regexp)
(define-key isearch-mode-map [remap isearch-query-replace] #'anzu-isearch-query-replace)
(define-key isearch-mode-map [remap isearch-query-replace-regexp] #'anzu-isearch-query-replace-regexp)
Package elpy-mode etc.: Python IDE (elpy package)
Package multiple-cursor mc
Package srefactor (semantic refactor):
VERY SLOW. DO NOT USE.
To rename/refactor variable name in C/C++. use srefactor.
Press M-RET (Meta/Esc + return) at variable name to be renamed.
BUT, NOT at the declaration/definition of the variable.
Package latex-extra, auto-complete-auctex
Package M-x package-install yasnippet-snippets
fori [tab], cout [tab]
Use with M-x yas-insert-snippet
https://www.youtube.com/watch?v=HTUE03LnaXA (b yuksel)
Package auto-complete, yasnippet-snippets, auto-complete-c-headers I x
Package smartrep, flycheck (+smartrep C-c C-n C-n), multiple-cursors, iedit (C-c ;) for refactoring variable name
Built-in CEDET. (semantic-mode 1) but very slow... <- NOT USED
Move from auto-complete-mode to company-mode (complete anything). (SO, disable ac, auto-complete etc.):
Package company-statistics-mode, company-prescient
C/C++:
Melpa stable package: company, company-c-headers
Not in melpa: company-c-preprocessor (will autocomplete #include preprocessor)
Preprocessor, header (especially helpful in boost), member function (thx company-clang).
Can also company-dabbrev for comments etc.
However, keywords such as return, exit are NOT completed.
yasnippet complete the exit, while, do, fori, for, etc.
Python:
elpy (elisp) + jedi (pip installed) is OK, see https://sites.google.com/site/ivansetiawantky/programming-misc#TOC-Python
M-. : go to definition (M-, to go back to the previous location)
C-c C-e : refactoring local variable at current cursor (or, with iedit C-c ; )
C-c C-n/p, C-c C-c
C-c C-a Execute python code buffer with arguments
;; C-c C-a Execute python code buffer with arguments
;; https://stackoverflow.com/questions/2905575/emacs-pass-arguments-to-inferior-python-shell-during-buffer-evaluation?rq=1
;;
;; If the script is executed in shell with `python3 script.py arg`, then
;; `script.py arg` must be provided, i.e., `Python arguments: script.py arg`
;;
;; In the minibuffer, M-p (or up-arrow) to see the recent minibuffer history.
;; Or, use below hack:
;; https://emacs.stackexchange.com/questions/24551/how-to-reuse-last-input-in-command-with-a-prompt
(setq my-python-send-buffer-def-arg "script.py arg") ;; default argument
(defun my-python-send-buffer-with-arg (args)
(interactive
(list
(read-string
(format "Python arguments [%s]: " my-python-send-buffer-def-arg)
nil nil my-python-send-buffer-def-arg)))
(setq my-python-send-buffer-def-arg args)
(let ((source-buffer (current-buffer)))
(with-temp-buffer
(insert "import sys; sys.argv = '''" args "'''.split()\n")
(insert-buffer-substring source-buffer)
(python-shell-send-buffer))))
(add-hook 'python-mode-hook
(lambda ()
(local-set-key "\C-c\C-a" 'my-python-send-buffer-with-arg)))
Melpa package sr-speedbar:
M-x sr-speedbar-{toggle, open, close}
In -nw, to shrink window, use C-x {
To repeat it 10 times: C-u 10 C-x {
https://stackoverflow.com/questions/4987760/how-to-change-size-of-split-screen-emacs-windows/4988206
OR use C-x z to repeat command with z, by first execute command to be repeated: C-x { C-x z z z
To balance window: C-x +
Emacs to search keyword inside a directory:
Ref:
First, install search program: silver searcher
brew install the_silver_searcher
How to use:
ag --cpp -l exit : only output filename (.c, .cpp, ...) recursively from current dir that contains exit (ignoring case)
ag exit : search recursively under current directory, the string exit.
ag exit /search/under/this/directory
ag -z exit : search also zipped file
ag --cc string == ag -G '\.(c|h|xs)$' string , --cpp for cpp files, --cc for c files
ag --help ; ag --list-file-types (--cc, --cpp, etc.)
ag --hidden string : also search hidden files
ag -u string : search recursively ALL files under current directory
ag -l : list ALL files which are the search target ( ag --cpp -l )
ag -L string : list ALL files which are NOT containing string
ag --cpp -l | wc -l (101); ag --cpp -l exit | wc -l (7) ; ag --cpp -L exit | wc -l (94)
ag -g main : list file which filename includes the string main ; ag -g make
ag -l -0 exit | xargs -0 wc -l : list file that includes string exit, then count line number of each file
ag -l -G '\.py$' numpy : list file which filename ends with py and the file contains numpy
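The -0 / xargs -0 pairing above matters when filenames contain spaces: ag emits NUL-separated names, and xargs -0 splits on NUL instead of whitespace. A standalone illustration (no ag needed):

```shell
# Two filenames, one containing a space, separated by NUL bytes;
# xargs -0 passes each as a single argument.
printf 'a b.txt\0c.txt\0' | xargs -0 printf '[%s]'
# prints: [a b.txt][c.txt]
```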
Interactive completion/candidate (for example during C-x C-f [tab] for showing file candidates to open, or M-x [tab] for showing command candidates) in Emacs can be done with 4 packages:
ido (default one, pre-installed package)
helm package
counsel (based on ivy) package
anything package
We now move from ido to counsel (ivy):
Remove ido-related from .emacs
M-x package-install counsel (latest package), counsel-tramp (latest), docker-tramp (latest), smex (latest), ivy-prescient (latest), counsel-world-clock, ivy-hydra, ivy-rich (ivy and swiper will be installed as dependencies)
Remove ibuffer from .emacs. Also remove icomplete-mode, completion-mode. Also remove buff-sel. Now C-xb is done by ivy.
Counsel issue with C-- pop-to-mark command:
Undo the remapping by counsel.el of pop-to-mark-command to counsel-mark-ring:
(define-key counsel-mode-map [remap pop-to-mark-command] nil)
Check with M-x describe-key (C--), describe-bindings (C--), or describe-variable (counsel-mode-map)
Usage:
M-x : by default ^ is shown, which means match from the start of the string. Delete the ^ to allow matching anywhere in the string.
M-x describe-key can be searched with M-x ^des key$
Meta-Control-r (C-M-r) : recent file
During C-x C-b which is remapped to counsel-ibuffer, the C-x k can kill the buffer.
M-y : yank-pop. Can be used to see what is in the kill-ring
M-x counsel-mark-ring : ^cou ma ri : can be used to see where the mark is (mark ring)
swiper:
M-s M-s : swiper-thing-at-point
M-s s : swiper
M-s a : swiper-all (swiper on all open buffer)
C-M-z : counsel-fzf
C-M-f : counsel-ag counsel-cd (change root directory of ag search)
Issue: in Message buffer: counsel--async-sentinel: Wrong number of arguments
Solution: upgrade counsel from 0.13.0 (melpa-stable) to 20200311 (in melpa)
To change the search root directory of counsel-ag: C-u C-M-f
C-M-f then M-n will ag search the string at the current cursor.
counsel-tramp
Installed: 20190616 (in melpa, NOT melpa-stable)
Usage:
Start: M-x counsel-tramp (the entries in ~/.ssh/config are shown, ROOT ALSO).
Stop: M-x counsel-tramp-quit
Can be used for ssh, root, docker
If tramp hung up: M-x tramp-cleanup-all-buffers
SEARCH RELATED:
C-i : anzu - isearch forward
C-I (shift i) : anzu - isearch backward
M-% : anzu - query replace
M-s M-s : swiper-thing-at-point
M-s s : swiper
M-s a : swiper-all (search by swiper all-buffer) BUT ONLY FOR OPENED BUFFER.
C-M-r : counsel-recentf (used to be isearch-backward-regexp) (Meta - Control - r)
C-M-s : isearch-forward-regexp
C-M-f : ag (with current file directory OR the .git top-most directory as root search directory)
C-M-f then M-n (or Alt+n in Mac): ag search the string at the current cursor
C-M-f then M-p (or Alt+p in Mac): ag search the previous search key
C-u C-M-f : ag with custom root search directory
...
HOW TO Evaluate expression (function) in emacs:
For variable, use M-x describe-variable
First method: goto *scratch* buffer, write (display-graphic-p) Ctrl-j
Second method: M-x eval-expression (or M-:), write (display-graphic-p) RET
How to use counsel-ag for refactoring:
https://sam217pa.github.io/2016/09/11/nuclear-power-editing-via-ivy-and-ag/
http://irreal.org/blog/?p=5530
Package : wgrep (latest)
Refactor/rename variable/class in all files.
1. Use counsel-ag, C-M-f (or C-u C-M-f) to search string to be renamed.
2. In the search result, press C-c o (ivy-occur) to open ivy-occur buffer.
3. Switch to ivy-occur buffer and press C-x C-q
(ivy-wgrep-change-to-wgrep-mode) wgrep: writable grep buffer.
4. Rename, using M-% or iedit, C-c ;
5. C-c C-c (wgrep-finish-edit)
Press C-x C-s when finished or C-c C-k to abort changes.
;; C/C++ compile with C-c C-c will save the files.
Emacs for diff 2 files: ediff (M-x counsel-find-library ediff)
M-x counsel-find-library ediff. M-x ediff OR M-x ediff-buffer (for tramp)
Emacs c++ ide (pending due to too much software to install):
emacs + company-irony (Need to build irony server, a lot of trouble)
http://tuhdo.github.io/c-ide.html
Follow advice of company-clang written here, with .dir-locals.el as:
((c++-mode . ((company-clang-arguments . ("-I./")))))
EmacsのC++開発環境を整理する (Organizing the Emacs C++ development environment) (Need to build irony server, a lot of trouble. But this site seems to be the most informative)
Must install rtags also.
Emacs as a C++ IDE Atila Neves - Youtube (This needs irony server too...)
cmake-ide + bear (create compile database for non-cmake, i.e., for good-old Makefile)
* HyperSwitch: http://pc-karuma.net/mac-app-hyperswitch/
* BetterTouchTool:
* How to use jq:
To see all the JSON: docker inspect image_name/network_name | jq '.'
If the output of jq '.' is array [ element0, element1 ], then can do: docker inspect bridge | jq '.[0]', docker inspect bridge | jq '.[0].IPAM.Config'
Filter with pipe: docker inspect bridge | jq '.[] | .IPAM.Config'
docker inspect bridge | jq '.. | objects | to_entries[] | select(.key=="Gateway")'
docker inspect bridge | jq '.. | objects | to_entries[] | select(.key | contains("IP"))'
docker inspect ivansetiawantky/get-started:part2 | jq '.. | objects | to_entries[] | select(.key | contains("Labels"))'
Samba over ssh http://d.hatena.ne.jp/hkobayash/20081111/1226382526
Local forwarding (forward to local):
Example: accessing local port 8081 connects to target:80, which is reachable via the remote host (and the remote host is accessible from external)
ssh -nNT -L 10084:privadr.kaisha.co.jp:80 username@sshserv.kaisha.co.jp &
bind localhost:10084 port to privadr.kaisha.co.jp:80 (cannot be accessed from outside) which can be accessed from sshserv.kaisha.co.jp.
sshserv.kaisha.co.jp can be accessed from outside.
To kill the above:
fg, to bring the backgrounded process to the foreground, then Ctrl-c
ps auxww | grep ssh, then kill -KILL <pid>
Multistage ssh, 多段 ssh
http://togakushi.bitbucket.org/build/html/OpenSSH_AdventCalendar2014/04.html
.ssh/config as below. Login with "ssh serv2.ext"
Host serv2.ext
User usname2
HostName serv2.kaisha.com
ProxyCommand ssh usname1@serv1.kaisha.com -W %h:%p
http://stackoverflow.com/questions/9139417/how-to-scp-with-a-second-remote-host
FIRST: ssh -L 54321:target.kaisha.co.jp:22 user@proxy.kaisha.co.jp
SECOND: scp -P 54321 filelocal.iso userintarget@localhost:~/
scp -P 54321 -p userintarget@localhost:~/v10b/\*.wav .
-p to preserve timestamp
Above 2 commands can be packed in 1:
http://serverfault.com/questions/37629/how-do-i-do-multihop-scp-transfers
scp -p -o ProxyCommand="ssh userproxy@proxy.kaisha.com nc target.kaisha.com 22" usertarget@target.kaisha.com:~/tmp/\*.wav .
Or use with -W (maybe "nc" can leave "user@notty" lingering?):
scp -p -o ProxyCommand="ssh -W target.kaisha.com:22 userproxy@proxy.kaisha.com " usertarget@target.kaisha.com:~/tmp/\*.wav .
It is better to use rsync because rsync can resume.
Contents of ~localuser/.ssh/config:
Host *
ServerAliveInterval 240
TCPKeepAlive yes
Host target.ext
ControlMaster auto
ControlPath ~/.ssh/mux-%r@%h:%p
ControlPersist 3
User targetuser
HostName target.kaisha.co.jp
ProxyCommand ssh proxyuser@proxy.kaisha.co.jp -W %h:%p
rsync -ahcvp target.ext:/home/anyuser/file*.txt .
For accessing a NAS (Network Attached Storage) via smb (samba) of a vm: rsync -ahcvp target.ext:/smb/nasname/public/anyuser/file*.txt .
scp -pr target.ext:/home/anyuser/file*.txt .
git clone etc., can also be used using target.ext
git clone target.ext:/home/git/
For example, host A and B are in the same LAN network.
Host A (192.168.0.12) has a local forwarded port to target.kaisha.co.jp:8000.
We want to allow remote host B to connect to local forwarded port of host A.
In host A: ssh -gL 0.0.0.0:80020:target.kaisha.co.jp:8000 user@proxy.kaisha.co.jp
In host B: access with http://192.168.0.12:80020
Suppose that a (git https://ingit.kaisha.co.jp) service is only provided to LAN internal machine. AND a web-proxy (squid) server is servicing.
To access the internal git http service, first we can forward the web-proxy port to a local port.
Then the user can use the service, by specifying the local port as the http_proxy.
First method (not recommended)
ssh user@sshproxy.kaisha.co.jp -NL 13128:webproxy.kaisha.co.jp:3128 &
export http_proxy=http://localhost:13128
export https_proxy=http://localhost:13128
git clone https://ingit.kaisha.co.jp/dir/repo.git
Then edit the local configuration .git/config [http_proxy] and [https_proxy]
Second method (use this)
ssh user@sshproxy.kaisha.co.jp -NL 13128:webproxy.kaisha.co.jp:3128 &
mkdir repo
cd repo
git init
git config --local http.proxy http://localhost:13128
git config --local https.proxy http://localhost:13128
git remote add origin https://ingit.kaisha.co.jp/dir/repo.git
git fetch
git checkout master
less .git/config
git remote show origin
git pull --rebase --all
If we set the network to use port forwarded proxy as the web proxy, then we can access the web server intended to internal machine only. Below is using command line, but behave the same with Setting -> Preference -> Proxy
ssh user@sshproxy.kaisha.co.jp -NL 13128:webproxy.kaisha.co.jp:3128 &
networksetup -setwebproxy Wi-Fi localhost 13128
networksetup -setsecurewebproxy Wi-Fi localhost 13128
networksetup -setwebproxystate Wi-Fi On (DON'T forget to set the state)
networksetup -setsecurewebproxystate Wi-Fi On
By this the internal service https://ingit.kaisha.co.jp can be accessed.
To deactivate the webproxy:
networksetup -setwebproxystate Wi-Fi Off
networksetup -setsecurewebproxystate Wi-Fi Off
To check the webproxy current status:
networksetup -getwebproxy Wi-Fi
networksetup -getsecurewebproxy Wi-Fi
ssh -L 30445:smb1.kaisha.co.jp:445 -L 31445:smb2.kaisha.co.jp:445 username@sshserv.kaisha.co.jp
mount_smbfs //GUEST@localhost:30445/home ~/smb1/home
mount_smbfs //GUEST@localhost:31445/home ~/smb2/home
mount
umount ~/smb1/home ~/smb2/home
mount_smbfs //GUEST@smb1.kaisha.co.jp/home ~/smb1/home <==== without SSH (so, for accessing from internal network)
mount_smbfs //GUEST@smb2.kaisha.co.jp:445/home ~/smb2/home <==== without SSH (so, for accessing from internal network)
Without port forwarding, i.e., accessing from internal network
http://qiita.com/xxthermidorxx/items/bb148530a55a4e55d99b
sshfs user1@fsserv1.kaisha.co.jp:/home/user1 ~/fsserv1_user1/
umount ~/fsserv1_user1
With port forwarding, i.e., accessing from external network
http://superuser.com/questions/139023/how-to-mount-remote-sshfs-via-intermediate-machine-tunneling
ssh -L 30022:fsserv1.kaisha.co.jp:22 user2@sshserv2.kaisha.co.jp
sshfs -p 30022 user1@localhost:/home/user1 ~/fsserv1_user1/
Or, just like rsync: sshfs target.ext:/smb/goro/Pub ~/mntpnt; umount ~/mntpnt
The "public" MUST BE specified to enable mounting.
1. From inside LAN:
mount_afp afp://5ro/Public ~/mntpnt
mount_afp afp://6ro/Public ~/mntpnt2
mount_smbfs smb://bent/public ~/mntpnt
2. From outside network:
First do SSH tunneling. In tunauto.sh, add this entry:
["afp56ro"]="20024 0.0.0.0:26548:5ro.arc:548 0.0.0.0:27548:6ro.arc:548"
["bent"]="20026 0.0.0.0:26139:bent.arc:139”
tunauto.sh o afp56ro
To mount 5ro: mount_afp afp://localhost:26548/Public ~/mntpnt
To mount 6ro: mount_afp afp://localhost:27548/Public ~/mntpnt2
tunauto.sh o bent
mount_smbfs smb://localhost:26139/public ~/mntpnt
umount ~/mntpnt
tunauto.sh c afp56ro
tunauto.sh c bent
Or, do the tunneling and mount with one script:
tunafp.sh 5ro Public ~/mntpnt o(pen)/c(lose)
tunafp.sh 6ro Public ~/mntpnt2 o(pen)/c(lose)
tunsmbfs.sh bent public ~/mntpnt o(pen)/c(lose)
From internal network:
Using SmartSVN repository browser: http://quo.kaisha.co.jp:8080/project/raidrep
From external network:
ssh -L 28080:quo.kaisha.co.jp:8080 user1@sshserv1.kaisha.co.jp
Using SmartSVN repository browser: http://localhost:28080/project/raidrep
Move from bitbucket to company's repository server:
https://help.github.com/articles/importing-a-git-repository-using-the-command-line/
Only the bare .git repository is imported; the URL to use is the one shown in the repository's clone-over-HTTP area.
Move from git to svn:
The repository is something like http://user@localhost:22222/project/repo1
The project is put directly below the above directory, so http://user@localhost:22222/project/repo1/proj1
Below proj1 directory, mkdir 3 directories: trunk, branches, tags.
Put directories and files below http://user@localhost:22222/project/repo1/proj1/trunk. So below trunk we see dir1, dir2, etc.
Check out the trunk, and we get a local working directory proj1/dir1, proj1/dir2, etc.
For standard svn's trunk, branches, tags usage, see:
svn info, svn log, svn status
How to move from git to svn. Seems can keep syncing after import from git to svn:
http://stackoverflow.com/questions/661018/pushing-an-existing-git-repository-to-svn
svn ignore (must set EDITOR env to "vi"):
http://stackoverflow.com/questions/86049/how-do-i-ignore-files-in-subversion
To check recursively ignored files/directories: svn propget -R svn:ignore . (don't forget the dot)
To ignore globally below a directory: svn propset svn:global-ignores '*.o' . (don't forget the dot)
svn propedit svn:global-ignores . (Recursive from . or current directory)
svn propget -R svn:global-ignores .
To ignore certain files/directories:
https://stackoverflow.com/questions/13865354/ignore-multiple-specific-files-with-svn
svn propset svn:ignore "idtxt2ph"$'\n'"idtxt2ph-dbg"$'\n'"Build" . (don't forget the . directory)
Check with: svn propget svn:ignore
OR: svn propedit svn:ignore .
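The $'\n' pieces in the propset command are bash ANSI-C quoting; the adjacent quoted strings concatenate into one multi-line value, which is why svn stores each ignore pattern on its own line. A quick sketch of what the shell actually passes:

```shell
# Adjacent quoted strings joined by $'\n' form a single 3-line argument
# (this is what svn propset receives as the property value).
val="idtxt2ph"$'\n'"idtxt2ph-dbg"$'\n'"Build"
printf '%s\n' "$val"
# prints:
# idtxt2ph
# idtxt2ph-dbg
# Build
```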
Use lightning cable and QuickTime:
Connect iPhone to Mac with lightning cable to mac's USB (iPhone must trust the mac)
Start QuickTime. Click File -> New Movie Recording. Select iPhone from pulldown arrow to the right of the record button.
Types of 64 bit machine: ILP64 (Integer, Long, Pointer is 64), LP64 (Long, Pointer is 64), LLP64 (Long Long, Pointer is 64):
ILP64: sizeof(int) = 8, sizeof(long) = 8, sizeof(long long) = 8, sizeof(*) = 8
LP64: sizeof(int) = 4, sizeof(long) = 8, sizeof(long long) = 8, sizeof(*) = 8
LLP64: sizeof(int) = 4, sizeof(long) = 4, sizeof(long long) = 8, sizeof(*) = 8
See: http://stackoverflow.com/questions/384502/what-is-the-bit-size-of-long-on-64-bit-windows
It is impossible to make long 4 bytes on a 64-bit machine, in a 64-bit executable:
http://stackoverflow.com/questions/12794603/making-long-4-bytes-in-gcc-on-a-64-bit-linux-machine
To port an exe and binary database created on a 32-bit machine, compile the binary as 32-bit on the 64-bit machine:
(?) -mabi=ms https://gcc.gnu.org/onlinedocs/gcc-6.3.0/gcc/
In the makefile, put 2 below environment variables:
CXX = /usr/local/bin/g++-6
CXXFLAGS = -m32 -Wall
file a.out: to check 32bit or 64bit.
-m32 option:
http://stackoverflow.com/questions/2426478/when-should-m32-option-of-gcc-be-used
Compile below with g++ -m32 or g++ -m64:
#include <iostream>
int main( int argc, char* argv[] ) {
    using namespace std;
    cerr << "sizeof(char) = " << sizeof(char) << endl;
    cerr << "sizeof(short) = " << sizeof(short) << endl;
    cerr << "sizeof(size_t) = " << sizeof(size_t) << endl;
    cerr << "sizeof(int) = " << sizeof(int) << endl;
    cerr << "sizeof(long) = " << sizeof(long) << endl;
    cerr << "sizeof(long long) = " << sizeof(long long) << endl;
    cerr << "sizeof(float) = " << sizeof(float) << endl;
    cerr << "sizeof(double) = " << sizeof(double) << endl;
    cerr << "sizeof(char*) = " << sizeof(char*) << endl;
    cerr << "sizeof(long*) = " << sizeof(long*) << endl;
    return 0;
}
LP64 (the output on Mac corresponds to the LP64 model)
Load Transcription. Transcription format is, "start end label" each line. Must be sorted.
HTK Transcription: http://www.ee.columbia.edu/ln/LabROSA/doc/HTKBook21/node82.html
The mark / label is incrementally sorted, and the start/end time is 7-digit zero-filled:
http://www.linuxnix.com/awk-scripting-8-awk-printf-statements-examples/
awk '{printf "%.7d %.7d %s\n", $1*1e7, $2*1e7, $3}' fr010690.lab > htk.lab
0000000 2000000 pau
2000000 3700000 s
3700000 4800000 a
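The conversion can be checked end-to-end with an inline sample (same awk one-liner as above):

```shell
# Seconds -> HTK 100 ns units, zero-padded to 7 digits by %.7d.
printf '0.0 0.2 pau\n0.2 0.37 s\n' \
  | awk '{printf "%.7d %.7d %s\n", $1*1e7, $2*1e7, $3}'
# prints:
# 0000000 2000000 pau
# 2000000 3700000 s
```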
Application to view md / markdown format text file.
alias opmd='open -a macdown'
Can also convert markdown to pdf.
But, to convert markdown to pdf, use Emacs:
In Emacs, open the markdown file
C-c C-c p : preview in Chrome. Then print from Chrome.
After iOS 11, it seems that text file in Dropbox is editable from iOS.
But, the text file must be in UTF-8.
Use emacs to change the buffer coding: M-x set-buffer-file-coding-system. (utf-8)
Check available locale: locale -a
Check current locale: locale
Temporarily change LANG: LANG=ja_JP.eucJP less temp.txt
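Note the scoping: a VAR=value prefix applies only to that single command; the shell's own LANG is untouched afterwards:

```shell
# The prefix assignment is visible inside the command only.
LANG=C sh -c 'printf "inside=%s\n" "$LANG"'   # inside=C
printf 'outside=%s\n' "${LANG:-unset}"        # shell's original LANG
```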
In iTerm2 preference -> profiles, add also an EUC-JP profile. This is to enable inputting multibyte characters in the command line. Terminal emulation -> character encoding: EUC-JP.
In the text tab: Treat ambiguous-width characters as double-width.
The point is: Terminal (iTerm2) emulation character encoding == file encoding.
gsed -e 's/(助詞)@@/@@/; s/(副詞)@@/@@/' euc-file.txt
If it is NOT possible to change the terminal emulation character encoding, then first convert the file encoding to the terminal encoding (UTF-8), then do the gsed with the terminal encoding (i.e., all processing is done in UTF), finally convert again to the file original encoding (EUC):
iconv -f EUC-JP -t UTF-8 temp.dic4 > tmp.utf
gsed -e 's/(助詞)@@/@@/' tmp.utf > res.utf
iconv -f UTF-8 -t EUC-JP res.utf > result1
Can also test using grep etc.
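A sanity check that the iconv roundtrip is lossless (assumes iconv supports EUC-JP, which glibc and macOS iconv both do):

```shell
# UTF-8 -> EUC-JP -> UTF-8 must reproduce the original text.
printf '助詞' | iconv -f UTF-8 -t EUC-JP | iconv -f EUC-JP -t UTF-8
# prints: 助詞
```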
Tool to manage VM start stop etc.
https://qiita.com/tsunemiso/items/d184366b8926bd5a8d00
brew cask install virtualbox (already done with using dmg)
brew cask install vagrant
mkdir ~/vagrant (OR mkdir ~/nlp to know what this VM is for)
cd ~/vagrant
Check the box to be installed: https://app.vagrantup.com/boxes/search, pick one OS then see the New tab.
In ~/vagrant directory do below:
vagrant init ubuntu/xenial64: Vagrantfile is created in the current directory (Vagrantfile is exchanged by developers to reproduce the development environment)
Show available box: vagrant box list
Start the virtual machine (VM) of the box, with: vagrant up (for first time, will take long time to download)
Use VirtualBox Manager to confirm that the box is running. Or do vagrant status
Vagrant VM image is inside the ~/VirtualBox_VMs directory.
To suspend the VM: vagrant suspend
To login to the VM: vagrant ssh
In the VM, do ifconfig -a, to check the IP address of the VM. Seems has 2 interfaces: 10.0.2.15 (private address like 192.168.0.0) and 127.0.0.1 (loopback) (I think vagrant provide the IP address)
exit
To stop the VM: vagrant halt
To destroy the VM: vagrant destroy
VM will be destroyed. Vagrant's VM image in ~/VirtualBox_VMs is destroyed also.
To remove the box: vagrant box remove ubuntu/xenial64 (<- from vagrant box list)
vagrant box list will show nothing
Reference:
http://www.streamwave.com/systems-administration/how-to-extend-your-virtualbox-virtual-hard-drive/2/
https://www.howtogeek.com/312456/how-to-convert-between-fixed-and-dynamic-disks-in-virtualbox/
https://docs.oracle.com/cd/E97728_01/E97727/html/vboxmanage-modifyvdi.html
https://superuser.com/questions/1406115/how-to-shrink-virtualbox-vdi-for-hfs-guest-os
http://ricardolovelace.com/how-to-shrink-a-dynamic-virtualbox-image.html
Actually, I attempted to shrink the (fixed) storage, but shrinking seems NOT supported. So, first clone the fixed HDD to a dynamic HDD (at least it shrinks to the amount of information actually stored in the disk image), then compact the dynamic disk image.
VBoxManage list hdds
cd /Volumes/... (big capacity hdd)
Clone the fixed disk to a dynamic disk (the default): VBoxManage clonehd ~/VirtualBox_VMs/Ubuntu16.04LTSDesktop/Ubuntu16.04LTSDesktop.vdi (big capacity hdd/)Ubuntu16.04LTSDesktopDyn.vdi -variant Standard
VBoxManage showhdinfo Ubuntu16.04LTSDesktopDyn.vdi (Format variant: dynamic default, Capacity: 30G, Size on disk: 28G)
Then, using the VirtualBox GUI, remove (detach) the fixed Ubuntu16.04LTSDesktop.vdi from the VM, AND add (attach) the dynamic Ubuntu16.04LTSDesktopDyn.vdi to the VM.
Check that the VM is working as usual.
If OK, then use Virtual Media Manager (仮想メディアマネージャ) tools to delete the fixed virtual storage Ubuntu16.04LTSDesktop.vdi, AND MOVE the dynamic virtual storage Ubuntu16.04LTSDesktopDyn.vdi from big capacity hdd to ~/VirtualBox_VMs ...
Check again with VBoxManage list hdds
Optimization:
Compacting/shrink the size-on-disk of a dynamic storage: http://ricardolovelace.com/how-to-shrink-a-dynamic-virtualbox-image.html
VBoxManage modifyhd /path/to/vdi --compact
Defrag ubuntu: https://thelinuxcode.com/defragment-hard-drive-ubuntu/
--compact again, and see the size-on-disk with showhdinfo.
Ansible is an infrastructure configuration management tool
In the VM started by vagrant, an Ansible playbook is run to, for example, guarantee the installation of a web server etc.
So, after the VM is started, the web server is automatically set by ansible
https://www.quora.com/What-is-the-difference-between-Ansible-and-Docker
Ansible is a build and orchestration tool. It can be used to deploy Docker containers certainly, but is very often used to build AWS cloud VMs, deploy code, install software, create users, etc. <= So, maybe for creating development environment Vagrant + Ansible is better.
Statistical Machine Translation with Moses: https://qiita.com/R-Yoshi/items/9a809c0a03e02874fabb#no4
Behold, it seems Vagrant + Docker is better?!
Yes, use Docker
Docker is a provisioning tool, i.e., a tool for running processes
Same with ansible, Docker is used above vagrant to guarantee the installation of web server etc.
Required process is started by this docker on top of VM started by vagrant.
Docker (a tool for running processes) does NOT require a Guest OS. It is a container running directly on the Host OS; no Guest OS is needed as in the case of a Virtual Machine.
Because Docker was originally a container technology for Linux, to use it on Mac we first needed to install a hypervisor (VirtualBox), then a Virtual Machine (VM), then Linux, and finally Docker
So, it was Mac + (VirtualBox + VM + Linux + Docker) + Docker's Container
VirtualBox + VM + Linux + Docker can be done with Vagrant + Docker by Vagrant's provisioner
So, it was, alternatively, Mac + (Vagrant + Docker started by Vagrant's provisioner) + Docker's container.
Now, Docker for Mac is doing the jobs of (VirtualBox + VM + Linux + Docker), so now it is:
Mac + (Docker for Mac) + Docker's Container
Remember, Docker is a tool for running processes!!!!
brew cask info docker
brew cask install docker
From https://qiita.com/hidekuro/items/fc12344d36d996198e96
FROM ubuntu:trusty
MAINTAINER ivansetiawantky
CMD ["/bin/bash"]
From "DevOps 導入指南"
Create account in Docker Hub (ivansetiawantky)
In Linux VM the Docker Engine is started with "systemctl start docker.service", but in Docker Desktop (installed with brew cask install docker), just launch the application.
As in next image, Docker Engine (server, daemon) must be active first before starting container (App1, Bins/Libs) etc.
Check whether the docker engine (daemon) is alive or not with docker version, NOT docker --version
docker login / docker logout to Docker Hub. Can be done from Docker Desktop (installed with brew cask install docker) also.
Using Docker:
docker search ivansetiawantky (Output format: [DockerHubID]/[RepositoryName])
docker search centos
docker pull centos
docker images
docker run -td --name ivcentos centos
-t assigns a pseudo-TTY, -d runs detached (background), -i keeps STDIN open (interactive)
To run container in detached (d) form, with interactive (i), with TTY (t):
docker run -dit --name mycentos ivansetiawantky/testcentos:2.0 [default command: /bin/bash]
Because it is in detached form, need to attach to the container with:
docker container attach mycentos
To detach, use Ctrl-p then Ctrl-q
Do not issue exit command, because this will STOP the container (see with docker container ls -a)
If you happen to exit, do docker start mycentos, then attach again.
docker stop mycentos
docker rm mycentos to remove the container (NOT the image)
docker ps (after removal, docker ps -a MUST be clean)
To check the container's process that is active in the foreground: docker ps -a
The COMMAND column shows e.g. /bin/bash or nginx -g "daemon off;"
To execute command in the container:
docker exec ivcentos cat /etc/redhat-release
docker exec ivcentos uname -a
To check version of OS: cat /etc/os-release
To "enter" the container: docker exec -it ivcentos /bin/bash
Get out with exit
docker stop ivcentos
docker start ivcentos
docker rm [-f] ivcentos
docker cp ~/.gitconfig 3af16d732454:/home/smtdev/.gitconfig (3af16d732454 is the container ID from docker ps -a)
House keeping
https://linuxize.com/post/how-to-remove-docker-images-containers-volumes-and-networks/
docker system prune
docker run --privileged --pid=host docker/desktop-reclaim-space
docker container ls -a
docker container rm ivcentos
docker images (docker image ls -a)
docker image rm centos
Using a lot of containers:
docker pull ubuntu:latest
Using container which includes middleware (nginx Web server):
docker pull nginx:latest
docker run -d -p 28000:80 --name ivnginx nginx:latest
The "COMMAND" after docker ps -a is: nginx -g "daemon off;"
The "PORTS" after docker ps -a is "0.0.0.0:28000->80/tcp", which means access from machines inside LAN to port 28000 of the Docker Engine server machine will be forwarded to port 80 of the nginx Container.
docker logs -f ivnginx
The above command shows the stdout/stderr of the container. The container's stdout/stderr will show the nginx access log, because in the nginx Dockerfile the access log is forwarded to stdout/stderr:
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
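The same redirection trick can be reused in a custom image; a minimal sketch (the base image tag and the log path /var/log/nginx/myapp.log are hypothetical examples):

```dockerfile
FROM nginx:latest
# Forward this custom log file to Docker's log collector as well
# (the official nginx image already does this for access.log and error.log)
RUN ln -sf /dev/stdout /var/log/nginx/myapp.log
```

Anything the process writes to that path then shows up in docker logs.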
Sharing Docker environment with Dockerfile:
Dockerfile docs: https://docs.docker.com/v18.09/develop/develop-images/dockerfile_best-practices/#run
Below is example of RUN layer/command with cache busting and version pinning:
RUN apt-get update && apt-get install -y \
aufs-tools \
ruby1.9.1 \
ruby1.9.1-dev \
s3cmd=1.1.* \
package-foo=1.3.* \
&& rm -rf /var/lib/apt/lists/*
A Dockerfile is something like a source file for building an image.
In the Dockerfile, describe the base OS and the programs to be installed on top of it.
Then build the Dockerfile with docker build [--no-cache], and the image is produced.
This image is then run as a container.
For example:
Dockerfile to build CentOS image is here: https://hub.docker.com/_/centos
Click the "latest" link.
Dockerfile to build nginx image is here: https://hub.docker.com/_/nginx
Click the "latest" link. BTW, above links can be reached by searching "centos" and "nginx" in hub.docker.com, respectively.
Keywords inside a Dockerfile: FROM, MAINTAINER (or LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"), ADD, LABEL, RUN, CMD. (CMD becomes the default command run as the container's foreground process. A container always has exactly one foreground process; without it, the container does not run, and there is no point in running it.)
mkdir -p ~/dockfile/test1; cd ~/dockfile/test1
echo "Hello, Docker" > hello_docker.txt
vi Dockerfile
FROM centos:latest
ADD hello_docker.txt /tmp
RUN yum install -y epel-release
CMD ["/bin/bash"]
docker build [--no-cache] -t ivansetiawantky/testcentos:1.0 .
docker images shows ivansetiawantky/testcentos with TAG 1.0
docker run -td --name mycentos ivansetiawantky/testcentos:1.0
docker exec -it mycentos cat /tmp/hello_docker.txt
With this, the Dockerfile produces the testcentos:1.0 image.
Now suppose we take testcentos:1.0 as a base, add nginx, and call the result testcentos:2.0.
docker exec -it mycentos /bin/bash
rpm -qa | grep epel
yum install -y nginx
exit
Then save this container (mycentos), now with nginx installed, as a new image (it is better to stop the container before committing, although by default docker pauses the container while committing: https://stackoverflow.com/questions/34868116/should-i-stop-a-container-before-commit-it):
docker stop mycentos
docker container ls -a (confirm it is exited)
docker commit mycentos ivansetiawantky/testcentos:2.0
Checking with docker images, both ivansetiawantky/testcentos:1.0 and ivansetiawantky/testcentos:2.0 exist. (However, the Dockerfile is the source only of ivansetiawantky/testcentos:1.0; tag 2.0 has no source!)
Do below to push the testcentos images (Tag 1.0 and 2.0) to Docker Hub:
docker push ivansetiawantky/testcentos
By this, both tags 1.0 and 2.0 are pushed to Docker Hub. If you want to push 2.0 only, then
docker push ivansetiawantky/testcentos:2.0
After waiting for some time (the pushed repository MUST have a description in order to be listed by docker search), try docker search ivansetiawantky
Above is the way to create image to be used in the container, and how to refine/tune the image.
Dockerfile can also be provided with the image. But to do this, need to set the automatic build:
https://forums.docker.com/t/how-to-upload-my-dockerfile-to-docker-hub/6563
Docker Compose (NOT TRIED)
Private Docker Hub? 1 private repository is available.
From Get Started with Docker Desktop for Mac (installed with brew cask install docker) https://docs.docker.com/docker-for-mac/
In CentOS
yum install -y iproute
ip addr show
ip route
yum install -y traceroute
Install shell completion:
ln -s /Application... ~/.bash-completion.d/docker (do for docker, docker-machine, docker-compose)
In .bashrc, source them. Now, after brew install bash-completion, there is NO need to source them anymore.
(This was because of a problem: bash __ltrim_colon_completions not found when doing "docker rmi [tab]")
Docker Documentation https://docs.docker.com/
See from the Get Started https://docs.docker.com/get-started/ to learn how to create Dockerfile etc.
This is THE RECOMMENDED DOCUMENTATION
History of distributed app:
STACK (top)
SERVICE (middle)
CONTAINER (bottom)
An object is an instance of a class. A container is a running-instance of an image.
The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Container-layer: thin R/W layer on top of the image: https://docs.docker.com/v18.09/storage/storagedriver/
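The writable layer can be observed directly with docker diff, which lists files Added/Changed/Deleted relative to the image. A sketch (requires a running Docker engine; the container name layerdemo is arbitrary):

```shell
docker run -dit --name layerdemo alpine ash
docker exec layerdemo touch /tmp/newfile   # the write lands in the container's writable layer
docker diff layerdemo                      # lists the change, e.g. "A /tmp/newfile"
docker rm -f layerdemo                     # the writable layer is discarded; the alpine image is unchanged
```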
docker --version, docker version, docker info
docker run hello-world
docker images -a, docker image ls -a
docker ps -a, docker container ls -a
docker stop,start,rm container-name VS docker rmi image-name
Running the Flask app.py (part2, CONTAINER):
Run the container with "docker run -p 4000:80 friendlyhello" NOT with detached (-d) or background mode, in order to get also the STDERR log output.
Execute "docker ps -a" to get the Container ID (which is also the container's hostname) and the Container Name (randomly generated, for example keen_poincare)
Login with "docker exec -it keen_poincare /bin/bash"
docker tag friendlyhello[image_name] ivansetiawantky/get-started:part2[username/repository:tag]
docker push ivansetiawantky/get-started:part2
docker rmi ivansetiawantky/get-started:part2
docker run -p(ublish) 4000:80 ivansetiawantky/get-started:part2 (will pull from docker hub)
get-started part3 (SERVICE):
Referring to microservices. Here, the Flask service above is load-balanced with 5 replicas. The topmost service layer is what is called a STACK.
Here is the docker-compose.yml (put anywhere), with myweb as the service name, and myweb's containers to share port 80 via a load-balanced network called mywebnet. mywebnet is defined with the default setting (which is a load-balanced overlay network):
version: "3"
services:
  myweb:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - mywebnet
networks:
  mywebnet:
docker swarm init
Deploy the stack (i.e., stack-of-services (also the load-balancer)):
docker stack deploy -c docker-compose.yml getstartedlab
Network name: getstartedlab_mywebnet
Service name: getstartedlab_myweb
Get service ID, service name: docker service ls = docker stack services getstartedlab
See the task (a single container running a service) ID: docker service ps getstartedlab_myweb
docker ps -a, docker ps -q, docker container ls -a, docker container ls -q
See all stack-of-service process: docker stack ps getstartedlab
Access http://localhost:4000 and see the Container ID (hostname) is changing.
To scale to 8 replicas, just edit the docker-compose.yml and do again:
docker stack deploy -c docker-compose.yml getstartedlab
docker inspect getstartedlab_myweb.7.ayz4c0ce2t66cphb0sx6jwl4c
Take down the stack (app) and swarm:
docker stack rm getstartedlab[stack name]
docker swarm leave --force
docker ps -a
get-started part4 (SWARM):
Objective: deploy application/stack on multiple-machine (a cluster).
Multi-container (in part 3), multi-machine applications are made possible by joining multiple machines into a "Dockerized" cluster called a swarm. The steps are:
1. Create 2 virtual machines (VM) using docker-machine
2. Create swarm with 2 nodes using the above 2 VMs. 1 VM as the swarm manager, 1 VM as swarm worker
3. Deploy stack (orchestrated services, networks, etc.) on swarm manager
In this part, 2 VMs created by docker-machine using VirtualBox, are managed as a swarm.
docker-machine create --driver virtualbox myvm1 (here, the iso image of the OS will be downloaded. The image is here: ~/.docker/machine/cache/boot2docker.iso)
docker-machine create --driver virtualbox myvm2
In Oracle VM VirtualBox Manager, myvm1 and myvm2 will be shown.
To check the IP address of the virtual machine: docker-machine ls
There are 2 ways to execute command in Docker Engine in myvm1 or myvm2:
docker-machine ssh myvm1 "cat /etc/os-release" (virtual machine is running, BUT still NO container in the myvm1)
OR run the last line of docker-machine env myvm1, i.e., eval $(docker-machine env myvm1)
The shell with this environment will talk to myvm1.
Do before and after eval above: printenv | grep DOCKER
By this, docker client command will be executed by docker engine in myvm1.
docker-machine ls will show that myvm1 is active.
To unset the environment variable: eval $(docker-machine env -u)
ssh can execute commands in the VM even when there is no Docker Engine running in the VM. In contrast, the environment-variable method only works when a Docker Engine is running there. (When using the boot2docker.iso image, a Docker Engine exists inside the docker-machine VM myvm1.)
The good thing about the shell environment-variable method is that local files and local commands (such as jq) on the local host can be accessed and used against myvm1.
docker version where eval $(docker-machine env myvm1) is NOT done will show the docker client of the docker host, and the docker engine of the docker host.
docker version where eval $(docker-machine env myvm1) is DONE will show the docker client of the docker host, and the docker engine of the myvm1 VM.
docker-machine ssh myvm1 "docker version" will show the docker client of the myvm1 VM and docker engine of the myvm1 VM.
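For reference, docker-machine env myvm1 prints shell export lines roughly like the following (the IP address and certificate path here are illustrative examples, not taken from a real machine); eval simply executes them in the current shell:

```shell
# Illustrative output of `docker-machine env myvm1`:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/myvm1"
export DOCKER_MACHINE_NAME="myvm1"
# eval $(docker-machine env myvm1) runs exactly these exports,
# after which the docker client talks to the engine inside myvm1.
```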
Set VM myvm1 as the swarm manager:
Using docker-machine ls, check the IP address of myvm1 (e.g.: 192.168.99.100)
Execute docker swarm init INSIDE myvm1:
docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100"
Next message is printed out: To add a worker to this swarm ...:
docker swarm join --token <long_token> 192.168.99.100:2377
Add VM myvm2 as a worker to the swarm:
Execute docker swarm join INSIDE myvm2:
docker-machine ssh myvm2 "docker swarm join --token <long_token> 192.168.99.100:2377"
A swarm with 2 nodes, has been created. To view the nodes in the swarm, execute docker node ls INSIDE the swarm manager:
docker-machine ssh myvm1 "docker node ls"
To leave the swarm:
Worker: docker-machine ssh myvm2 "docker swarm leave"
Manager: docker-machine ssh myvm1 "docker swarm leave --force"
Deploy the app ON swarm manager:
Connect docker client to talk to myvm1 by setting the shell environment: eval $(docker-machine env myvm1)
By the above shell environment variable setting, the docker engine to talk to is set to myvm1. So, instead of doing [ docker-machine ssh myvm1 "docker node ls" ], just do "docker node ls" and the docker engine to ask for is the myvm1 (see printenv | grep DOCKER). Check also with docker-machine ls.
All below docker client will ask to myvm1 now:
docker stack deploy -c docker-compose.yml getstartedlab
ONLY ON SWARM MANAGER: docker stack ps getstartedlab (5 containers)
docker ps -a (2 containers)
eval $(docker-machine env myvm2); docker ps -a (3 containers)
curl http://192.168.99.101:4000 will show 5 container ID even though they are split between 2 machines.
Cleanup and reboot:
docker stack rm getstartedlab
eval $(docker-machine env -u)
docker-machine ssh myvm2 "docker swarm leave"
docker-machine ssh myvm1 "docker swarm leave --force"
docker-machine stop $(docker-machine ls -q)
docker-machine rm $(docker-machine ls -q)
The ~/.docker/machine will be removed.
To start again the docker machine after localhost is shutdown, do:
docker-machine ls
docker-machine start myvm1
get-started part4-b (SWARM with mac/windows physical machine)
2019/10/9 Seems multi-node with physical machine in Docker for Mac/Windows are NOT supported: https://docs.docker.com/engine/swarm/swarm-tutorial/
On Linux, seems multi-node with physical machine is OK: https://stackoverflow.com/questions/39844880/how-to-setup-multi-host-networking-with-docker-swarm-on-multiple-remote-machines
nmap -p 2377 localhost
get-started part4-c (SWARM with 2 VMs each on different 2 physical machine mac/windows)
https://medium.com/@thomas.mylab33/running-docker-swarm-on-two-macbooks-2029f310b2df
But seems complicated...
get-started part5 (STACK)
docker client command is: docker {stack, service, container, node} {ls, ps}
A stack is a group of interrelated (micro) services that share dependencies, and can be orchestrated and scaled together.
To check task of services, network, entries, etc., which are deployed (according to a docker compose file) on a stack: docker stack ps <stackname>
To list existing services: docker service ls
To list existing containers: docker container ls -a (docker ps -a)
Create 2 VMs with boot2docker.iso image, and check the IP: docker-machine create --driver virtualbox myvm1,2; docker-machine ls
The services, networks to be deployed and orchestrated are listed in the docker-compose.yml. The networks has 1 entry and services also has 1 entry. Add myvisualizer service to the services in docker-compose.yml, so now we have 2 services:
  myvisualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - mywebnet
The myvisualizer service is constrained to run only on the swarm manager by the placement constraint, because it needs to access a specific file that lives in the file system of the swarm manager VM (/var/run/docker.sock on the left-hand side of the volume mapping). This way, the file accessed by the service keeps existing even as containers are created/destroyed.
Check the IP address of myvm1 (using docker-machine ls) to be set as swarm manager.
Set docker client to talk to myvm1: eval $(docker-machine env myvm1)
docker swarm init --advertise-addr 192.168.99.103 (obtained from docker-machine ls)
Message to join as worker or manager are displayed
To show the token to join as worker or manager: docker swarm join-token -q (worker|manager)
Set myvm2 as swarm worker:
docker-machine ssh myvm2 "docker swarm join --token <long_token> 192.168.99.103:2377"
Now we have set up the swarm! docker node ls
INSIDE / ON myvm1, deploy the stack (here, the shell env is already set to talk to the docker engine in myvm1: eval $(docker-machine env myvm1)):
Deploy the stack and name it getstartedlab: docker stack deploy -c docker-compose.yml getstartedlab
1 network, getstartedlab_mywebnet is created
2 services, getstartedlab_myweb and getstartedlab_myvisualizer, are created.
Execute docker stack ps getstartedlab [stack_name] or access http://192.168.99.103:8080 to visualize the existing tasks (containers). We have 5 myweb tasks and 1 myvisualizer task, distributed over the 2 VMs. myvm1 runs 2 myweb tasks and 1 myvisualizer task; myvm2 runs 3 myweb tasks.
Add another service, redis, which is counting the access:
  redis: (DO NOT CHANGE TO myredis, because app.py looks for redis and NOT myredis)
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - mywebnet
The access counter MUST persist even when the container servicing the redis service goes down and comes back up (is redeployed). So the container must be created on the swarm manager (placement constraint) in order to directly access the counter data in /home/docker/data of myvm1 (INSIDE the VM myvm1, NOT the physical host that hosts VM myvm1). As long as the VM myvm1 is NOT stopped, the counter data file persists even if the container is removed and the swarm is left.
Create 2 VMs. Create docker swarm. Confirm with docker-machine ls and docker node ls.
Make directory /home/docker/data in swarm manager: docker-machine ssh myvm1 "mkdir ./data"
If myvm1 is STOPPED, then when started again it will boot from the initial image, so mkdir must be done again.
Later can do: docker-machine ssh myvm1 "ls -l /home/docker/data"
Also: docker-machine ssh myvm1 "cat /home/docker/data/appendonly.aof"
INSIDE myvm1, redeploy: docker stack deploy -c docker-compose.yml getstartedlab
(ONLY swarm manager) docker stack ps getstartedlab : 7 container task (5 for myweb, 1 myvisualizer, 1 redis)
(ONLY swarm manager) docker service ls : 3 services. 1 service with 1 replica for myvisualizer, 1 service with 1 replica for redis, 1 service with 5 replicas for myweb
To check which process/task is running the service myweb:
docker service ps getstartedlab_myweb
(ON swarm manager) docker container ls (-a) : 4 containers. 2 for myweb, 1 for redis, 1 for myvisualizer
(ON worker) docker-machine ssh myvm2 "docker container ls": 3 containers for myweb
OR eval $(docker-machine env myvm2); docker container ls
To deploy to AWS, for example, ports for web, redis, and the visualizer need to be opened.
ABOVE DEMO WORKS WITHOUT DOCKER-ENGINE (DAEMON) IN THE PHYSICAL MAC MACHINE, BECAUSE THE TARGET DOCKER-ENGINE IS IN THE DOCKER-MACHINE OF MYVM1 AND MYVM2.
By setting the shell environment, docker client in physical mac will talk to the docker-daemon (docker-engine) in the docker-machine myvm1 or myvm2, instead of the physical mac machine.
To share file between the host and the container:
docker run -it -v ~/Desktop:/Desktop image_name /bin/bash
Or, do it in 2 steps: first run the container (image is ivansetiawantky/testcentos:2.0, container name is set to mycentos; /Users must be permitted to be shared with docker (see docker preferences)), then exec /bin/bash:
docker run -td --name mycentos -v /Users/ivs/dockfile:/dockfile ivansetiawantky/testcentos:2.0
docker exec -it mycentos /bin/bash
Then from inside mycentos container, can do vi /dockfile/fromdocker, etc
docker stop mycentos; docker container rm mycentos
docker container ls -a
See:
How to write Dockerfile:
Add LABEL. Check the label with docker inspect ivansetiawantky/get-started:part2 | jq '.[0].Config.Labels'
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Check metadata of image with docker inspect image | jq '.[0].Config.Labels'
LABEL description="This is an example Dockerfile for get-started. \
From Docker Documentation get-started in https://docs.docker.com/get-started/"
LABEL maintainer="Ivan Setiawan <myemail@gmail.com>"
LABEL vendor="IvanS.corp"
LABEL version="1.0"
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME MyWorld
# Run app.py when the container launches
CMD ["python", "app.py"]
docker inspect ivansetiawantky/get-started:part2 | jq '.[0].Config.Env'
To check the environment variable of a running container: docker inspect runningcont | jq '.[].Config.Env'.
Network tutorial
In Docker, a container is the virtualization of a compute entity. Besides compute, volumes (virtualization of storage) and networks (virtualization of networking) also exist.
https://docs.docker.com/v18.09/network/
bridge: containers are connected using the default bridge type network driver (below I will only test with the bridge configuration)
host: the container gets NO IP of its own; the docker host's IP is the container's IP itself. The container shares the host's network stack, so ports the container opens appear directly on the docker host. The container's network is NOT isolated.
overlay: network driver in a swarm
macvlan: maybe not used
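A sketch of the difference between the bridge and host drivers (assumes a Linux docker host; --network host does not behave the same way on Docker Desktop for Mac):

```shell
# bridge (default): host port 8080 is published and forwarded to container port 80
docker run -d --name web_bridge -p 8080:80 nginx
# host: the container shares the host's network namespace; nginx binds port 80 on the host directly
docker run -d --name web_host --network host nginx
```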
https://docs.docker.com/v18.09/network/network-tutorial-standalone/
Use the default bridge network (for testing)
docker network ls
Network with name "bridge" with bridge driver is the default
Network with name "docker_gwbridge" with bridge driver will bridge the docker host machine with the container's network.
Run 2 containers. If no --network, then by default these containers will connect to the default "bridge".
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
docker container ls -a will show 2 containers.
Inspect the network bridge:
docker network inspect bridge | jq '.. | objects | to_entries[] | select(.key | contains("Gateway")) ' will show the IP address of the gateway between the Docker host and the bridge network.
docker network inspect bridge | jq '.. | objects | to_entries[] | select(.key | contains("Container")) ' will show the containers connected to this bridge network along with their IP addresses. 2 containers.
Attach to alpine1 and check the IP address etc.
docker container attach alpine1
# ip addr show, ip route
# ping -c 2 {google.com, 172.17.0.3}
# ping -c 2 alpine2 will fail (no container-name resolution on the default bridge network)
Control + p + q to detach
docker stop $(docker container ls -q)
docker container rm $(docker container ls -a | tail -n+2 | awk '{print $NF}')
docker container ls -a will show nothing. The tail -n+2 above prints every line starting from line 2 (i.e., it skips the header line).
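A quick offline check of that pipeline, with sample text standing in for real docker container ls -a output:

```shell
# tail -n +2 drops the header line; awk '{print $NF}' keeps the last column (NAMES)
sample='CONTAINER ID   IMAGE    NAMES
1a2b3c4d5e6f   alpine   alpine1
9f8e7d6c5b4a   alpine   alpine2'
names=$(printf '%s\n' "$sample" | tail -n +2 | awk '{print $NF}')
echo "$names"
# → alpine1
#   alpine2
```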
Use user-defined bridge network (for production)
Or use (swarm for production)
With the user-defined bridge network, name resolution to IP is available.
Create alpine-net network with driver bridge:
docker network create --driver bridge alpine-net
docker network ls
docker network inspect bridge | jq '.. | objects | to_entries[] | select(.key | contains("Gateway")) ' will show 172.17.0.1 as the gateway of this network to the Docker host.
docker network inspect alpine-net | jq '.. | objects | to_entries[] | select(.key | contains("Gateway")) ' will show 172.19.0.1 as the gateway of this network to the Docker host.
Connect 4 containers, alpine1,2 connected to "alpine-net", alpine3 connected to "bridge", alpine4 connected to both:
docker run -dit --name alpine{1,2,4} --network alpine-net alpine ash (shorthand: run once each for alpine1, alpine2, alpine4)
docker run -dit --name alpine3 alpine ash
docker network connect bridge alpine4 , will connect alpine4 to "bridge" network
docker container ls -a
docker network inspect alpine-net | jq '.. | objects | to_entries[] | select(.key | contains("Container")) ' , will show 3 containers alpine1,2,4
docker network inspect bridge | jq '.. | objects | to_entries[] | select(.key | contains("Container")) ' will show 2 containers alpine3,4
Attach to alpine1 and confirm that it can ping alpine1, 2, and 4 by name. But it cannot reach alpine3, either by name or by IP address.
docker container attach alpine1
# ping -c 2 alpine{1,2,4} is OK
# ping -c 2 {alpine3, 172.17.0.2} fail
Ctrl + p + q
Attach to alpine4 and confirm can connect to alpine1,2,4 with name, to alpine3 with IP address.
Stop and remove containers. Remove network: docker network rm alpine-net .
In the case of the service demo performed with the myweb service, the network is mywebnet:
docker-machine {create, start} myvm1. Then eval $(docker-machine env myvm1), by this the execution is INSIDE myvm1 environment (INSIDE means, docker client will ask/send request to the docker engine inside the myvm1 VM)
INSIDE myvm1: docker swarm init --advertise-addr 192.168.99.105, then do docker-machine ssh myvm1 "docker network ls"
Name: ingress, driver: overlay, scope: swarm is created
Join myvm2 as worker, then docker node ls to check the swarm.
INSIDE the myvm1 environment (INSIDE means the docker client sends requests to the docker engine inside the myvm1 VM): docker network inspect ingress | jq '.. | objects | to_entries[] | select(.key | contains("Peers")) ' . Here do NOT use docker-machine ssh myvm1 "... jq ... ", because jq does NOT exist inside myvm1. The docker client on the physical docker host asks the docker engine inside myvm1, and the jq on the physical machine analyzes the result.
This shows the same info as docker node ls, when the docker client is asking the docker engine of myvm1.
Do, docker-machine ssh myvm1 "mkdir ./data", then ask docker machine myvm1 VM to docker stack deploy -c docker-compose.yml getstartedlab. Then check with docker stack ps getstartedlab .
Asking docker engine in myvm1 VM with docker network ls:
getstartedlab_mywebnet (overlay, swarm) is created (as instructed in docker-compose.yml)
docker network inspect getstartedlab_mywebnet | jq '.. | objects | to_entries[] | select(.key | contains("Container")) ' shows 4 containers (1 redis, 1 myvisualizer, 1 myweb, 1 mywebnet-endpoint). Compare with docker container ls -a .
eval $(docker-machine env myvm2) then docker network inspect getstartedlab_mywebnet | jq '.. | objects | to_entries[] | select(.key | contains("Container")) ' shows 3 containers (2 myweb, 1 mywebnet-endpoint)
See also Docker samples https://docs.docker.com/samples/
docker-compose.yml file sample: A sample docker-compose.yml to quickly run container and do initial jobs (install etc.).
# docker-compose -f ./docker-compose.yml up --build -d
# docker-compose -f ./docker-compose.yml logs -f (app|mysql)
# docker-compose -f ./docker-compose.yml down
# docker-compose -f ./docker-compose.yml down --volumes (TO REMOVE todo-mysql-data volume)
version: "3.8"
services:
  app:
    image: node:12-alpine
    command: sh -c "apk --no-cache --virtual build-dependencies add python2 make g++ && yarn install && yarn run dev"
    ports:
      - 3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos
  mysql:
    image: mysql:5.7
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos
volumes:
  todo-mysql-data:
Tutorial when signing in to Docker Hub (AND no repository is available yet)
git clone https://github.com/docker/doodle.git
cd doodle/cheers2019 && docker build -t ivansetiawantky/cheers2019 .
docker run -it --rm ivansetiawantky/cheers2019
docker login && docker push ivansetiawantky/cheers2019
https://qiita.com/tifa2chan/items/e9aa408244687a63a0ae
To see also the intermediate images: docker images -a
To remove image by ID: docker rmi [image ID]
Kubernetes: for orchestration, it plays the role of SWARM + Docker Compose in Docker.
Node: a physical machine or VM. A Kubelet runs inside each node.
There are master nodes and worker nodes.
Good to read:
https://ubiteku.oinker.me/2017/02/21/docker-and-kubernetes-intro/
Setting up Kubernetes: https://docs.docker.com/get-started/
Tutorial: https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
As shown below, enable kubernetes from Docker preference/setting:
Simple kubernetes orchestration test. Save below to pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: testpod
    image: alpine:3.5
    command: ["ping", "8.8.8.8"]
Create pod: kubectl apply -f pod.yaml
Check pod is up and running: kubectl get pods, docker container ls -a
kubectl logs demo
kubectl delete -f pod.yaml
The above Kubernetes pod is equivalent to the following docker service create:
docker swarm init
docker service create --name demo alpine:3.5 ping 8.8.8.8
(docker container ls -a)
docker service ps demo
docker service logs demo
docker service rm demo
(docker node ls)
docker swarm leave --force
(NOT YET) Play with Kubernetes Classroom: https://training.play-with-kubernetes.com/kubernetes-workshop/
(NOT YET) Coursera: https://www.coursera.org/learn/google-kubernetes-engine
(NOT YET) Convert docker compose file to kubernetes stack yml: https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
Developing from containers, containerized development environment: Developing program from within containers. This is NOT a typical usage of Docker's container. See: https://dev.to/aghost7/developing-from-containers-42fp and https://github.com/AGhost-7/docker-dev/tree/master/tutorial and https://github.com/AGhost-7/docker-dev .
Point:
Use the USER directive, so development is not done as ROOT
Share files/volumes with the docker host (for example libraries, dictionaries, etc.)
Try to construct develop env in here: https://qiita.com/R-Yoshi/items/9a809c0a03e02874fabb#no4
In case cannot do "sudo apt install vim" inside container, don't forget to do "sudo apt-get update" first: https://qiita.com/pochy9n/items/69ab8fc071c187a1f5f8
Dockerfile:
Dockerfile for SMT
# -*- coding: utf-8 -*-
# Modified by Ivan Setiawan
# 'Last modified: Sat Jan 25 08:46:27 2020.'
#
# Docker container create:
# docker run (--rm OR --restart=unless-stopped) -ti --name mysmtdev \
# -v $HOME/work/smtdevenv/sharedwks:/home/smtdev/sharedwks \
# -v $HOME/.ssh:/home/smtdev/.ssh \
# ivansetiawantky/smtdevenv:3.0 \
# byobu new
#
# Detach: Control p q
# Attach: docker container attach mysmtdev
#
# docker inspect -f "{{ .HostConfig.RestartPolicy.Name }}" mysmtdev
# docker update --restart={unless-stopped|always} mysmtdev
#
# SSH from inside docker to outside:
# ssh machine.ext -o ControlPath=/dev/shm/control:%h:%p:%r
# scp -o ControlPath=/dev/shm/control:%h:%p:%r remotevm.ext:/home/remoteuser/abc.tgz .
# rsync -ahcvp -e "ssh -o ControlPath=/dev/shm/control:%h:%p:%r" remotevm.ext:/home/remoteuser/170725.tgz .
# Add ssh-agent first: eval `ssh-agent` <= Do first!
FROM ubuntu:18.04
# Check metadata of image with docker inspect image | jq '.[0].Config.Labels'
LABEL description="Ubuntu-based development environment for Statistical Machine Translation (SMT) research."
LABEL reference1="Reference for SMT environment: https://qiita.com/R-Yoshi/items/9a809c0a03e02874fabb#no4"
LABEL reference2="Reference for Dockerfile for containerized dev env: https://dev.to/aghost7/developing-from-containers-42fp"
LABEL reference3="Detailed reference for Dockerfile for containerized dev env: https://github.com/AGhost-7/docker-dev/tree/master/tutorial"
LABEL maintainer="Ivan Setiawan <j.ivan.setiawan@gmail.com>"
LABEL vendor="Arcadia, Inc."
LABEL version="3.0"
ENV DOCKER_USER smtdev
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NOWARNINGS yes
# Add required packages.
# Create user with passwordless sudo. This RUN is run as ROOT.
RUN apt-get update && apt-get install -y \
sudo \
build-essential \
git-core \
pkg-config \
automake \
libtool \
wget \
zlib1g-dev \
python-dev \
libbz2-dev \
bash-completion \
curl \
tmux \
byobu \
vim \
libboost-all-dev \
libcmph-dev \
openssh-client \
&& \
yes | sudo unminimize && \
adduser --disabled-password --gecos '' "$DOCKER_USER" && \
adduser "$DOCKER_USER" sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && \
touch /home/$DOCKER_USER/.sudo_as_admin_successful && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
USER "$DOCKER_USER"
WORKDIR "/home/$DOCKER_USER"
# COPY ./dot.vimrc /tmp/dot.vimrc
COPY ./dot.* /tmp/
# Below command must be run by the $DOCKER_USER, so put after USER is defined.
# In case byobu-ctrl-a still cannot work, then put it inside ~/.profile
RUN cat /tmp/dot.bashrc-append >> /home/$DOCKER_USER/.bashrc && \
# echo 'set -o noclobber' >> /home/$DOCKER_USER/.bashrc && \
echo '2' | byobu-ctrl-a && \
cat /tmp/dot.vimrc > /home/$DOCKER_USER/.vimrc && \
cat /tmp/dot.dircolors > /home/$DOCKER_USER/.dircolors && \
cat /tmp/dot.svn-prompt.sh > /home/$DOCKER_USER/.svn-prompt.sh && \
cat /tmp/dot.git-prompt.sh > /home/$DOCKER_USER/.git-prompt.sh && \
cat /tmp/dot.gitconfig > /home/$DOCKER_USER/.gitconfig && \
sudo rm /tmp/dot.* && \
#
# Below prepare container local directory for mosesdecoder and clone it.
# Put everything unique to the container directly below ~/localwks.
mkdir -p /home/$DOCKER_USER/localwks/mosesdecoder && \
git clone https://github.com/moses-smt/mosesdecoder.git \
/home/$DOCKER_USER/localwks/mosesdecoder && \
# Download sample-models here, to reduce cd by WORKDIR...
curl -L -o /home/$DOCKER_USER/localwks/sample-models.tgz \
http://www.statmt.org/moses/download/sample-models.tgz && \
tar xzf /home/$DOCKER_USER/localwks/sample-models.tgz \
-C /home/$DOCKER_USER/localwks && \
#
# Compile mosesdecoder. RELEASE-4.0
# Switch directory (cd) to container local directory for mosesdecoder
cd /home/$DOCKER_USER/localwks/mosesdecoder && \
git checkout RELEASE-4.0 && \
./bjam --with-cmph=/usr/lib/x86_64-linux-gnu && \
#
# Test the compilation of mosesdecoder. MUST BE RUN IN sample-models dir.
cd /home/$DOCKER_USER/localwks/sample-models && \
/home/$DOCKER_USER/localwks/mosesdecoder/bin/moses \
-f phrase-model/moses.ini < phrase-model/in > out
# Check /home/$DOCKER_USER/localwks/sample-models/out !
# Go back to home directory! <== NOT needed!
# WORKDIR "/home/$DOCKER_USER" <== NOT needed!
# The final WORKDIR (or /, if WORKDIR not used at all) is the pwd
# when entering container.
CMD ["/bin/bash" ]
Build: docker build --no-cache -t ivansetiawantky/smt-env:1.0 . (Don't forget the . )
To run the container: docker run --rm -ti -v $HOME/dockfile/smt/sharedwks:/home/smtdev/sharedwks ivansetiawantky/smt-env:1.0 /bin/bash (With --rm the container is removed on exit, so we cannot commit the container to an image after exiting!). sharedwks is the workspace shared between the docker container and the docker host.
In order to commit a container (run with --rm) to an image, first press Control + p, q to detach from the container (do NOT exit).
Then: docker commit container_name_xeno image_name. Here container_name_xeno is the randomly generated container name.
To run container with byobu: (Multiple -v https://stackoverflow.com/questions/18861834/mounting-multiple-volumes-on-a-docker-container)
docker run --rm -ti --name mysmtdev \
-v $HOME/dockfile/smt/sharedwks:/home/smtdev/sharedwks \
-v $HOME/.ssh:/home/smtdev/.ssh \
ivansetiawantky/smt-env:2.0 \
byobu new
Inside byobu: Press F2 to create new terminal session. F3, F4 to move between terminal session.
Inside byobu: Press Shift + F2 to split pane horizontally. Press Ctrl+F2 OR F12 (escape), then press % to split pane vertically. Shift + Arrow to move between pane.
Press Control + P + Q to detach from container (byobu session). DO NOT PRESS F6.
docker container attach mysmtdev to attach to the container and resume the session.
To use ssh from inside docker:
Add IgnoreUnknown UseKeychain in ~/.ssh/config
Command is: ssh machine.ext -o ControlPath=/dev/shm/control:%h:%p:%r
scp -o ControlPath=/dev/shm/control:%h:%p:%r remotevm.ext:/home/remoteuser/abc.tgz .
rsync -ahcvp -e "ssh -o ControlPath=/dev/shm/control:%h:%p:%r" remotevm.ext:/home/remoteuser/170725.tgz .
Add ssh-agent first: eval `ssh-agent` <= Do first!
Might need to do: ssh-add ~/.ssh/id_rsa
File ownership problem when mounting a host volume into a container:
-v volume mounts have NO ownership problem with Docker for Mac.
However, there is a problem in the case of Ubuntu:
If the user id of the general user in the container does NOT match the user id (id -u) on the host, file ownership in the mounted volume becomes a disaster...
"File ownership problem when mounting a volume with docker" (Japanese article)
Solution: docker run -it --rm -u 1002:1002 -v `pwd`:/workdir ubuntu:18.04 bash
OR: docker run -it -v $(pwd)/temp:/temp busybox
Other solution: docker run -it -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -u $(id -u $USER):$(id -g $USER) ubuntu bash
This will not work on a Mac, because /etc cannot be mounted on a Mac.
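The host-UID trick above can be wrapped in a small script. A minimal sketch (the image name and mount path are just the examples used above); the docker command is echoed rather than executed here:

```shell
#!/bin/bash
# Run a container as the invoking host user, so files created in the
# mounted volume keep the host owner (relevant on Linux hosts; Docker
# for Mac maps ownership automatically).
uid=$(id -u)
gid=$(id -g)
cmd="docker run -it --rm -u ${uid}:${gid} -v ${PWD}:/workdir ubuntu:18.04 bash"
echo "$cmd"   # echoed for inspection; drop the echo to actually run it
```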
If the container already has a general (non-root) user, then see the link above. Basically, change the UID (user id) and GID (group id) of the non-root user inside the container to the UID and GID of the file owner (the docker-run process owner) on the host:
Dockerfile:
ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:20.01-py3
FROM ${FROM_IMAGE_NAME}
ENV DOCKER_USER ivdock
# This RUN is RUN-ed as root
# RUN apt-get update && apt-get install -y \
RUN apt-get update && apt-get install -y \
sudo \
gosu \
&& \
yes | sudo unminimize && \
adduser --disabled-password --gecos '' "$DOCKER_USER" && \
adduser "$DOCKER_USER" sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && \
touch /home/$DOCKER_USER/.sudo_as_admin_successful \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Below is RUN-ed as $DOCKER_USER
USER "$DOCKER_USER"
RUN mkdir -p /home/$DOCKER_USER/workspace/tacotron2
ADD . /home/$DOCKER_USER/workspace/tacotron2
WORKDIR /home/$DOCKER_USER/workspace/tacotron2
USER "root"
RUN pip install --no-cache-dir -r requirements.txt
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
OS_TYPE=${PHYS_HOST_OS:-Linux}
USER_ID=${PHYS_HOST_UID:-9001}
GROUP_ID=${PHYS_HOST_GID:-9001}
lowostype=$(echo "$OS_TYPE" | tr '[:upper:]' '[:lower:]')
OS_TYPE="$lowostype"
echo "Physical host os : $OS_TYPE"
USER_NAME="ivdock"
if [ "$OS_TYPE" == "darwin" ]
then
echo "Preserve container's UID:GID"
else
echo "Starting with UID : $USER_ID, GID: $GROUP_ID"
usermod -u $USER_ID -o $USER_NAME
groupmod -g $GROUP_ID $USER_NAME
fi
exec /usr/sbin/gosu $USER_NAME "$@"
build:
#!/bin/bash
docker build . --rm -t tacotron2:ivdock
run:
#!/bin/bash
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -it --rm --ipc=host -v $PWD:/home/ivdock/workspace/tacotron2/ -e PHYS_HOST_OS=$(uname -s) -e PHYS_HOST_UID=$(id -u $USER) -e PHYS_HOST_GID=$(id -g $USER) tacotron2:ivdock bash
A note about -ti option:
https://teratail.com/questions/19477
On the docker host, run the command w; if the tty of the docker container console is connected to s003, try echo host2docker > /dev/ttys003 .
A note about WORKDIR.
https://christina04.hatenablog.com/entry/2014/10/31/101510
Use WORKDIR to change the current working directory. OR:
RUN cd /path/to/directory && doexec <== MUST BE WRITTEN with &&
The cd and doexec then run in one shell, so doexec executes with /path/to/directory as the current working directory.
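A minimal illustration of the difference (each RUN instruction starts a fresh shell in WORKDIR, so a bare cd does not carry over to the next RUN):

```dockerfile
FROM ubuntu:18.04
WORKDIR /opt
RUN cd /tmp            # this cd affects only this RUN instruction
RUN pwd                # prints /opt, not /tmp
RUN cd /tmp && pwd     # prints /tmp: cd and the command share one shell
```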
To check that the compilation and installation of mosesdecoder succeed:
cd ~/localwks/sample-models
~/localwks/mosesdecoder/bin/moses -f phrase-model/moses.ini < phrase-model/in > out2
Check 1/3: Confirm that out and out2 are the same.
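The "out and out2 are the same" check can be scripted with cmp, which succeeds only when the files are byte-identical (paths default to the ones produced in the steps above):

```shell
#!/bin/bash
# Compare two moses decoder outputs; cmp -s is silent and exits 0
# only when the files are identical.
a=${1:-$HOME/localwks/sample-models/out}
b=${2:-$HOME/localwks/sample-models/out2}
if cmp -s "$a" "$b"; then
    echo "identical: moses build OK"
else
    echo "outputs differ (or missing)"
fi
```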
Need to compile Moses with CMPH (C Minimal Perfect Hashing) library:
sudo apt-get install libcmph-dev
https://ubuntu.pkgs.org/16.04/ubuntu-universe-amd64/libcmph-dev_2.0-2_amd64.deb.html
Recompile:
./bjam --clean
./bjam --with-cmph=/usr/lib/x86_64-linux-gnu
Check 2/3: If it succeeds, ~/localwks/mosesdecoder/bin/{processLexicalTableMin,processPhraseTableMin} are generated.
cd ~/localwks; git clone https://github.com/moses-smt/giza-pp.git; cd giza-pp; make
Check 3/3: Confirm that the 3 files GIZA++-v2/GIZA++, GIZA++-v2/snt2cooc.out, and mkcls-v2/mkcls are generated. Then copy them to the mosesdecoder/tools directory:
echo $basedir
mkdir $basedir/mosesdecoder/tools
cp $basedir/giza-pp/GIZA++-v2/GIZA++ $basedir/giza-pp/GIZA++-v2/snt2cooc.out $basedir/giza-pp/mkcls-v2/mkcls $basedir/mosesdecoder/tools
Need to tell the training script where GIZA++ is located:
train-model.perl -external-bin-dir $basedir/mosesdecoder/tools
Should be put in localwks in the container's image (public domain) ====> smtdevenv:4.0 ?
Set local repository first:
mkdir dev-environment
cd dev-environment
git init
touch Dockerfile
vim Dockerfile
Follow this link: https://qiita.com/Brutus/items/19f02df409e859406914
Set e-mail address etc. In Dockerhub, "Configure Automated Builds", create new rules, master branch, docker tag as needed (2.0), activate autobuild.
To tag with annotation: git tag -a 2.1 -m "2.1 noclobber for smtdevenv:2.1"
To show tag information: git show 2.1, git show 2.0
To check what tag is existing: git tag
To push tag information to origin: git push origin 2.0
To push all tag information to origin: git push origin --tags
To checkout tag: git checkout tag_name
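The tag commands above, exercised in a throwaway repository (user.name/user.email are dummies for the sketch; the push lines need a real origin, so they stay commented):

```shell
#!/bin/bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=tmp -c user.email=t@e commit -q --allow-empty -m "initial"
# Annotated tag with a message:
git -c user.name=tmp -c user.email=t@e tag -a 2.1 -m "2.1 noclobber for smtdevenv:2.1"
git tag                  # lists existing tags: 2.1
git show 2.1 | head -4   # shows the tag annotation and the tagged commit
# git push origin 2.1      # push one tag (needs a configured origin)
# git push origin --tags   # push all tags
```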
Containers are ideally disposable: once the job is done, they disappear. You do some processing in a container, save the result (the data) somewhere on the host, and then the container goes away.
Think of a container as an application: start it when needed, let it pull the data it works on from the host (or some data server), process it, and return the result.
So a container's lifecycle and its data's lifecycle differ; if you want a container to persist, commit it as an image (run && commit). The explanation on this page is good: https://qiita.com/chroju/items/ce9cae248cc016745c66
Container restart policy:
That is, when a container is started with the command below, then as long as it has not been stopped, restarting the docker daemon also restarts the container.
docker run --restart=unless-stopped -ti --name mysmtdev -v $HOME/arcadia/smtdevenv/sharedwks:/home/smtdev/sharedwks ivansetiawantky/smtdevenv:2.3 byobu new
By the way, to check a container's restart policy:
docker inspect -f "{{ .HostConfig.RestartPolicy.Name }}" mysmtdev
If it says no, the container goes away when the docker daemon dies. If unless-stopped, then as long as the container has not been stopped, it is started again when the docker daemon restarts.
Also, --restart defaults to no, so if --restart is not given at docker run time, the container is not restarted when the docker daemon stops; it just goes away. The restart policy of a container can be changed with docker update.
For example, suppose a container was started with:
docker run -ti --name mysmtdev -v $HOME/arcadia/smtdevenv/sharedwks:/home/smtdev/sharedwks ivansetiawantky/smtdevenv:2.3 byobu new
To change this container's restart policy:
exit, or Control p q, to detach from the container
docker update --restart=unless-stopped mysmtdev
docker inspect -f "{{ .HostConfig.RestartPolicy.Name }}" mysmtdev to confirm the container's restart policy
Catalina's default login shell is zsh.
https://qiita.com/AirBeans5956/items/6a00443c6118d7d3f5f4
brew install lesspipe (also install in emacs M-x package-install anzu)
brew install zsh zsh-completions
zsh
git clone --recursive https://github.com/sorin-ionescu/prezto.git "${ZDOTDIR:-$HOME}/.zprezto"
setopt EXTENDED_GLOB
for rcfile in "${ZDOTDIR:-$HOME}"/.zprezto/runcoms/^README.md(.N); do
ln -s "$rcfile" "${ZDOTDIR:-$HOME}/.${rcfile:t}"
done
echo 'fpath=(/usr/local/share/zsh-completions $fpath)' >> .zshrc
echo 'autoload -U compinit' >> .zshrc
echo 'compinit -u' >> .zshrc
May need to do: rm -f ~/.zcompdump; compinit
vi /etc/shells, append /usr/local/bin/zsh
chsh -s /usr/local/bin/zsh (if using /bin/zsh instead of /usr/local/bin/zsh, then error during mv completion) (previously /bin/bash. Check with printenv SHELL)
mkdir .myzsh_completion.d
ln -s /Applications/Docker.app/Contents/Resources/etc/docker.zsh-completion ~/.myzsh_completion.d/_docker
ln -s /Applications/Docker.app/Contents/Resources/etc/docker-machine.zsh-completion ~/.myzsh_completion.d/_docker-machine
ln -s /Applications/Docker.app/Contents/Resources/etc/docker-compose.zsh-completion ~/.myzsh_completion.d/_docker-compose
Append to .zshrc:
if [[ -d "${ZDOTDIR:-$HOME}/.myzsh_completion.d" ]]; then
fpath=("${ZDOTDIR:-$HOME}/.myzsh_completion.d" $fpath)
fi
[ -d /usr/local/share/zsh-completions ] && fpath=(/usr/local/share/zsh-completions $fpath)
autoload -U compinit
compinit -u
kubectl completion zsh > ~/.myzsh_completion.d/_kubectl
Some tips:
To alternate between directories, type 1 and press Enter.
To check the directories in the stack, run dirs -v.
Run cd +1 (or +2, +3, ...) to change to a directory in the stack.
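A sketch of the directory stack these tips rely on, written with explicit pushd/popd in bash (in zsh, prezto's directory module typically sets auto_pushd, so a plain cd pushes, dirs -v lists, and cd +1 jumps):

```shell
#!/bin/bash
# Maintain a directory stack by hand with pushd/popd.
cd /tmp
pushd /usr > /dev/null    # stack: /usr /tmp
pushd /etc > /dev/null    # stack: /etc /usr /tmp
dirs -v                   # numbered listing of the stack
popd > /dev/null          # like `cd +1` in zsh: pop back to /usr
pwd                       # now in /usr
```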
DO NOT SETOPT NO_PROMPT_CR: no carriage return before prompt. (echo -n problem)
https://stackoverflow.com/questions/9072397/zsh-unexpected-output-from-awk-printf
awk printf %s problem in zsh (because % is expanded).
Compare execution in bash and zsh of the command:
printf "foo\nbar\n" | awk '{printf "%s\n", $1}'
Both will output the same if setopt no_prompt_percent! But, the problem is actually in preexec() where $1 is used. See https://bbs.archlinux.org/viewtopic.php?id=107834
Solution: http://izawa.hatenablog.jp/entry/2012/09/18/220106
To check the function completion path: print -l $fpath
If function completion not working, try:
chsh -s /usr/local/bin/zsh
execute /usr/local/bin/zsh then svn [TAB]
git, svn prompt:
To check:
Ctrl-a, Ctrl-u, etc.
for loop, process substitution <(cmd)
Ctrl-r/s for isearch command history
completion git, svn, docker, kubectl
prompt git, svn
printenv LANG (ja_JP.UTF-8) can TAB complete the env variable name also
umask 022
noclobber
emacs alias
prevent ^d from exit shell (ignore_eof)
setopt / unsetopt
zsh completion for play and rec is NOT good, because it completes for the Play framework and for redis-cli instead. To avoid this, run print -l $fpath | xargs ls -l | grep _play , then remove the _play and _redis-cli files. Then rm -rf ~/.zcompcache , rm -f ~/.zcompdump , rm -f ~/.zcompdump.zwc , then restart the shell.
Setting in Ubuntu-server's side:
https://blog.amedama.jp/entry/2018/08/04/230601
Inside the Ubuntu server, install the X Window system only. Do not need to install Ubuntu Desktop:
sudo apt-get -y install xserver-xorg <- command to install X Window system
sudo apt-get install x11-apps <- command to install xeyes
https://qiita.com/loftkun/items/37340745f211ea5d7ece
Enable X11 forwarding in the sshd:
$ sudo vim /etc/ssh/sshd_config <= Enable configuration
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
$ sudo systemctl restart sshd <= restart sshd
Setting in the MacOS Mojave side:
https://qiita.com/loftkun/items/37340745f211ea5d7ece
brew cask install xquartz <= brew cask info xquartz
echo $DISPLAY <= must show something like /private/tmp/com.apple.launchd.qeR8i7Bf4G/org.macosforge.xquartz:0. If not, logout or reboot.
ssh -XY vm-myubuntu.ext, then execute xeyes for example.
Or, add below in ~/.ssh/config, so no need for -XY:
$ vim ~/.ssh/config
Host * (or Host vm-myubuntu.ext)
ForwardX11 yes
ForwardX11Trusted yes
Good to go!
https://www.hiroom2.com/2018/05/06/ubuntu-1804-xfce-ja/
sudo apt-get install xubuntu-desktop
sudo reboot
In the vmware console, show the VM console (use proxy). Now, switch from desktop login to text login.
First, press Ctrl-Alt-F1 to get to tty1 (https://askubuntu.com/questions/292069/switching-between-gui-and-terminal)
Second, set lightdm as the default display manager (https://askubuntu.com/questions/139491/how-to-change-from-gdm-to-lightdm)
sudo apt-get install lightdm
sudo dpkg-reconfigure lightdm
sudo reboot
Disable x-login, enable text login (https://askubuntu.com/questions/16371/how-do-i-disable-x-at-boot-time-so-that-the-system-boots-in-text-mode)
sudo nano /etc/default/grub (GRUB_CMDLINE_LINUX_DEFAULT="text")
sudo update-grub
sudo systemctl enable multi-user.target --force
sudo systemctl set-default multi-user.target
Some errors:
From VMWARE (ESXi), start the console of the vm. If a topology error appears, it may be necessary to set the video card to auto-detect.
Now the console does not accept keyboard input...
It looks like the console needs to be opened in a new browser window, and then that window focused / made full screen...
https://qiita.com/shotasano/items/0cf255a7ad6ec0cf52d7 (gnome)
https://www.hiroom2.com/2018/05/07/ubuntu-1804-xrdp-xfce-ja/#sec-4
https://qiita.com/yuji38kwmt/items/2e376df643e3bc24aa54 (Changing directory name ~/.config/user-dirs.dirs)
https://linuxfan.info/ubuntu-18-04-change-ja-font (Changing font)
In Ubuntu, vncserver:
sudo apt-get install vnc4server
vncpasswd
vncserver -geometry 1440x900 :1
First do: conda config --set auto_activate_base false OR conda deactivate. Do NOT have a python/conda virtual environment active when starting vncserver!!!!
ps auxww | grep -i x
vncserver -kill :1 (Don't forget to do this when the vncserver is no longer needed)
vi ~/.vnc/xstartup, add
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
startxfce4 &
vncserver :1
In MacOS, vncclient:
From System Preferences > Sharing, turn Screen Sharing ON <= NOT needed
5901 is the port of the vncserver (that is, 5900 plus the display number, which is 1 here, from :1)
ssh -L 55901:vm-target.kaisha.or.jp:5901 user@proxy.kaisha.or.jp
Open MacOS Finder, press Command+K, then input vnc://localhost:55901
OR: add the app to dock http://osxdaily.com/2013/04/05/vnc-client-mac-os-x-screen-sharing/
OR: from terminal: open vnc://localhost:55901
OR: tunvnc.sh (ssh + open vnc)
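A sketch of what such a tunvnc.sh could look like (the script is hypothetical and the host names are the examples above; the ssh/open lines are echoed so nothing connects when trying it out):

```shell
#!/bin/bash
# tunvnc.sh (hypothetical): tunnel a remote VNC display through an ssh
# proxy, then open macOS Screen Sharing on the local end of the tunnel.
REMOTE=${1:-vm-target.kaisha.or.jp}   # host running vncserver
PROXY=${2:-user@proxy.kaisha.or.jp}   # ssh jump host
DISP=${3:-1}                          # vncserver display number (:1)
RPORT=$((5900 + DISP))                # VNC port = 5900 + display number
LPORT=$((50000 + RPORT))              # arbitrary free local port
echo ssh -f -N -L "${LPORT}:${REMOTE}:${RPORT}" "$PROXY"
echo open "vnc://localhost:${LPORT}"  # drop the echo to actually run
```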
Problem:
Not getting a full display?
vncserver -geometry 1440x900 :1 (where 1440x900 is the desired full size of the desktop)
Emacs Control behaves as Meta (M)?
This is due to interaction with Karabiner-Elements. In "complex modifications", exclude "com.apple.ScreenSharing"!!!
The ssh server only allows login from machines whose public key (id_rsa.pub) is listed in authorized_keys. So the ssh client must set IdentityFile to its private key (id_rsa).
When doing multi-hop ssh, the agent connection can be forwarded along with -A (ssh -A: ForwardAgent); the private key itself never leaves the first host.
host0$ ssh -A user1@host1 -p 50022
host1$ ssh -A user2@host2 -p 50022
host2$ ssh user@internalhost
See https://zenn.dev/kariya_mitsuru/articles/ed76b4b27ac0fc
Write ForwardAgent yes in ~/.ssh/config, or use -A option in ssh command line.
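The per-host config equivalent of the -A chain above might look like this (host and user names taken from the example; a sketch, not a complete config):

```
Host host1
    User user1
    Port 50022
    ForwardAgent yes

Host host2
    User user2
    Port 50022
    ForwardAgent yes
```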