Unix
http://www-128.ibm.com/developerworks/ru/linux/
http://www.ibm.com/developerworks/linux/
http://www.ibm.com/developerworks/aix/library/au-badunixhabits.html
http://www.ibm.com/developerworks/views/aix/libraryview.jsp?search_by=Speaking+UNIX+Part
http://people.redhat.com/drepper Unix Expert
ldd -v myprogram - shows shared libraries
http://aymanh.com/how-debug-bash-scripts debugging bash scripts: bash -x
http://www.linuxfromscratch.org/blfs/view/6.3/postlfs/profile.html
The shell program /bin/bash uses a collection of startup files to help create an environment. Each file has a specific use and may affect login and interactive environments differently. The files in the /etc directory generally provide global settings. If an equivalent file exists in your home directory it may override the global settings.
An interactive login shell is started after a successful login, using /bin/login, by reading the /etc/passwd file. This shell invocation normally reads /etc/profile and its private equivalent ~/.bash_profile upon startup.
An interactive non-login shell is normally started at the command-line using a shell program (e.g., [prompt]$/bin/bash) or by the /bin/su command. An interactive non-login shell is also started with a terminal program such as xterm or konsole from within a graphical environment. This type of shell invocation normally copies the parent environment and then reads the user's ~/.bashrc file for additional startup configuration instructions.
A non-interactive shell is usually present when a shell script is running. It is non-interactive because it is processing a script and not waiting for user input between commands. For these shell invocations, only the environment inherited from the parent shell is used.
.*profile files are sourced once, when you log in.
.*rc files are sourced every time you start a shell.
we use the . (dot) command to source it, which means to run it in the current shell context.
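A minimal sketch of the difference (the file name is arbitrary): running a script executes it in a child shell, so variables it sets are lost, while sourcing runs it in the current shell context.

```shell
# A throwaway script that sets a variable (hypothetical file name).
cat > /tmp/setvar.$$ <<'EOF'
MYVAR=hello
EOF

bash /tmp/setvar.$$             # runs in a child shell; MYVAR is lost here
echo "after running: ${MYVAR:-unset}"

. /tmp/setvar.$$                # sourced: runs in the current shell context
echo "after sourcing: $MYVAR"
rm -f /tmp/setvar.$$
```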
export EDITOR=vim
export PS1="\u@\H:\w > " #prompt (H - hostname) (u-user name) (w-working directory)
alias ls='ls -F --color=auto'
alias vi=vim
# Enable programmable completion features.
if [ -f /etc/bash_completion ]; then
source /etc/bash_completion
fi
# Make grep more user friendly by highlighting matches
# and exclude grepping through .svn folders.
alias grep='grep --color=auto --exclude-dir=\.svn'
alias findgrep='find . -type f | xargs grep -I -H -n --color=always'
PATH=$PATH:~/usr/bin
export PATH
# Add custom compiled libraries to library search path.
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/usr/lib
export LD_LIBRARY_PATH
# Add custom compiled libraries to library run path.
LD_RUN_PATH=$LD_RUN_PATH:~/usr/lib
export LD_RUN_PATH
# Java's CLASSPATH customization
CLASSPATH=$CLASSPATH:~/foo/bar.jar
export CLASSPATH
source ~/.profile
source ~/.bashrc
http://www.ibm.com/developerworks/linux/library/l-tip-prompt/
At the bash prompt you can use the default readline keybindings, which are similar to the Emacs ones. Many of these are also available in other programs that use readline, such as the Python interpreter.
Here are some useful ones:
Ctrl-A Beginning of Line
Ctrl-E End of Line
Ctrl-U Kill (cut) everything left of cursor
Ctrl-K Kill (cut) everything right of cursor
Ctrl-W Kill (cut) the single word before the cursor
Ctrl-Y Yank (paste) the text back
Ctrl-L Clear Screen
Ctrl-D Exit
Ctrl-R Incrementally search the command history
Ctrl-C Kill (interrupt) whatever is running
Ctrl-Z Suspend whatever is running into a background job; use fg to restore it
The shell expands variables between double quotes, but performs no expansion between single quotes.
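A quick demonstration:

```shell
NAME=world
double="Hello $NAME"     # variable expanded inside double quotes
single='Hello $NAME'     # taken literally inside single quotes
echo "$double"
echo "$single"
```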
Sometimes you need to log everything done in a terminal. The script utility is very convenient for this.
Just start it in the terminal and keep working as if nothing happened. At the end of the session type exit and... the whole session is saved to a file.
script [-a] [-c COMMAND] [-f] [-q] [-t] [file]
http://www.ibm.com/developerworks/aix/library/au-spunix_pipeviewer/index.html
http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html?S_TACT=105AGX03&S_CMP=HP
stdin is the standard input stream. It has file descriptor 0.
stdout is the standard output stream. It has file descriptor 1.
stderr is the standard error stream. It has file descriptor 2.
To ignore either standard output or standard error entirely: redirect the appropriate stream to the empty file, /dev/null. ls x* 2>/dev/null
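The two streams can be redirected independently; a sketch (paths are arbitrary) that discards stderr in one case and captures only stderr in the other:

```shell
# stdout is file descriptor 1 and stderr is descriptor 2.
out=$(ls /nonexistent/path 2>/dev/null; echo ok)   # error text discarded
# Capture only stderr: 2>&1 duplicates stderr onto the captured stdout,
# then 1>/dev/null discards the original stdout.
err=$( { echo data; echo oops >&2; } 2>&1 1>/dev/null )
echo "$out"
echo "$err"
```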
here-document is another form of input redirection
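For example, feeding inline text to a command's stdin:

```shell
# A here-document redirects the lines between the EOF markers to stdin.
count=$(wc -l <<EOF
line one
line two
EOF
)
echo "$count"
```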
df -h
df -k
du -sh
-i turns off interactive prompting before file transfers
-n turns off auto-login (no user/password prompt; the user command supplies them)
ftp -n $HOSTNAME << EOF # with the host on the command line, the open command is not needed
ftp -n -i<< EOF
open $HOSTNAME
user $username $password
binary
get $filename $basefile
prompt # toggles interactive prompting off for the following mput (same effect as the -i flag)
mput *.tar
quit
EOF
find /var/tomcat -mmin -3 -print #(prints out all the files modified under /var/tomcat in the last 3 minutes)
REF=.tmp.$$
touch -t $(date +%m%d)0630 $REF # today at 630 AM
find . -newer $REF -exec cp {} $TARGETDIR \;
rm -f $REF
Put this at the beginning of the script; it will capture all output
and errors, and your script will stay readable:
exec >>/tmp/logfile 2>&1
Problems detecting the script's full name can arise when you launch your script with a relative path instead of the full one. For example, by issuing
../../my_script
the result of "dirname $0" will be
../..
echo -n "Script directory: "
echo $(dirname "$0")
dirname `which $0`
echo -n "Script name: "
echo $(basename "$0")
echo "working folder:" `pwd`
LOGFILE=$0.log
echo $(date) >> "$LOGFILE"
1. Implementing a lockfile
Every once in a while we’ve got a shell script that needs to run, but dangerous things can happen if we’re running two copies at the same time. The simplest method is to just bail out if a lockfile exists, but you can also implement blocking. Here’s a basic example of checking for a lock file and then bailing out:
#!/bin/sh
LOCKFILE=/tmp/mylock
if [ ! -s $LOCKFILE ]; then
echo $$ > $LOCKFILE
# do stuff; the bulk of the script is here
: > $LOCKFILE
exit 0
else
echo "PID `cat $LOCKFILE` running" | mailx -s "$0 can't run" root
exit 1
fi
Of course you can get much fancier. If this was inside a while loop, you could sleep a few seconds and then retry the lock. The above example uses ‘test’ with the bracket notation to check if the lockfile exists, and that it isn’t empty (the -s option to test). If the return value of test is true, the block of code runs and puts the script’s current PID into the lockfile. At the end of the “do stuff” block of code, which will probably call another script, we truncate the lockfile by using the null command. If the lockfile is non-empty, we send mail to root and bail out with an unsuccessful return code. The subject of the message includes the name of the script ($0), and the body of the message indicates which process ID is currently running.
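The test-then-write above has a small race window between checking the lockfile and writing the PID. A hedged alternative (the path is arbitrary) uses mkdir, which is atomic: it either creates the lock directory or fails in a single step.

```shell
# mkdir-based lock: creation and the existence check happen atomically.
LOCKDIR=/tmp/mylock.d.$$        # hypothetical lock path for illustration
if mkdir "$LOCKDIR" 2>/dev/null; then
    status="lock acquired"
    # do stuff; the bulk of the script would go here
    rmdir "$LOCKDIR"            # release the lock when done
else
    status="already running"
fi
echo "$status"
```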
2. Check return values, always
If you call an external program, or another script of your own, you must always check the return value of that program. Unexpected failures of commands can cause the rest of your script to misbehave, so you need to make sure everything ran properly. The bourne shell has a built-in variable, $?, that holds the return value of the last command. See item number three for an example.
3. Using return codes
Remember, the ‘if’ statement uses the return value of the statement immediately following ‘if’ to determine whether or not it should succeed or fail. So the test command can be used (with bracket notation), and so can any other command.
if [ "$?" -eq 0 ]; then echo yay ; fi
The above statement will print “yay” if the last command executed returned success, else it does nothing. Remember that ‘0’ in the shell is success; it’s the opposite of when you’re programming in C. We also collapsed everything onto a single line here. The parser doesn’t see it that way, though, since the semi-colon represents a newline.
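Because 'if' acts on a command's exit status directly, you rarely need $? at all; a small demonstration with a temporary file:

```shell
# grep -q is silent; its exit status alone drives the if branches.
printf 'alpha\nbeta\n' > /tmp/words.$$
if grep -q beta /tmp/words.$$; then found=yes; else found=no; fi
if grep -q gamma /tmp/words.$$; then missing=no; else missing=yes; fi
rm -f /tmp/words.$$
echo "$found $missing"
```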
You can also put commands inside if statements. Let’s test to see if a list of machines ping before we try to scp the known_hosts file to them (Solaris ping syntax):
#!/bin/sh
for host in `cat ./hostlist.txt`
do
echo "doing $host.."
if ping $host 1; then
scp /etc/ssh/ssh_known_hosts2 $host:/etc/ssh
fi
done
4. Capturing output from multiple commands
Frequently, people who want to capture output from multiple commands in a script, will end up appending output to the same file. You can use a subshell instead, and redirect all stdout (or stderr if you need to) from the subshell:
#!/bin/sh
(
cat /etc/motd
cat /etc/issue
) > /tmp/motd-issue
5. Other subshell tricks
Changing directories, and then executing a command, can be very useful when you’re piping stdout over ssh. It’s useful for local commands too. Using tar to copy the present directory into /tmp/test/:
tar cf - . | (cd /tmp/test && tar xpf -)
Note that if the cd fails (directory doesn’t exist), then tar will not be executed.
Parallel execution:
In our previous scp example, it would have run very slowly with a long list of hosts. We can make each scp command execute almost in parallel, by backgrounding the subshell process. The following portion of the script will burn through the loop very quickly, because the subshell is backgrounded, and therefore the commands run and the shell doesn’t wait for them to complete before looping again.
if ping $host 1; then
(
scp /etc/ssh/ssh_known_hosts2 $host:/etc/ssh
scp /etc/ssh/ssh_known_hosts2 $host:/etc/ssh
) &
fi
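When you background subshells like this, the built-in wait lets the script block until all of them have finished before moving on (the log path below is arbitrary):

```shell
# Background each subshell, then use wait to collect all of them.
: > /tmp/joblog.$$                  # hypothetical log file, truncated first
for n in 1 2 3; do
    ( echo "job $n done" >> /tmp/joblog.$$ ) &
done
wait                                # returns once every background job exits
lines=$(wc -l < /tmp/joblog.$$)
rm -f /tmp/joblog.$$
echo "$lines"
```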
Imagine if you had a bunch of things like this:
zcat file-01.txt.gz | sort | uniq -c | gzip > file-01.sorted.txt.gz
zcat file-02.txt.gz | sort | uniq -c | gzip > file-02.sorted.txt.gz
(NB: that could have been generated and stored in a file or directly on the commandline using the 'for' builtin).
So, imagine there are 100 lines of that
Well, assuming the commands are all in a file called "cmd.txt" you can run:
# cat cmd.txt | xargs -d "\n" -n1 -P4 bash -c
Which will spawn 4 bash processes at a time (-P4), and each one will get one line from the input (-n1 and -d "\n") and pass that to “bash -c” which will execute it.
Voila. Instead of just doing one command at a time, you are now doing 4, and it runs in a quarter the time!
if [ -d $1 ]; then # if an existing directory was given as a parameter
DIR=$1; # use it
else # otherwise
DIR=$(pwd); # use the current directory
fi;
OUTPUT=$DIR/output.pdf # /path/file_name for the resulting PDF
cd $DIR; # change into the directory
for i in *.{PDF,pdf}; do # i iterates over files named *.PDF or *.pdf
if [ -f $i ]; then # if the file exists
OUT=$OUT"$i "; # build the argument string for merging all PDFs into one
fi;
done;
# merge all the PDF files into one: gs - Ghostscript
gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=$OUTPUT -dBATCH $OUT;
echo "All PDFs merged: "$OUTPUT;
The UNIX end-of-line character is a line feed/newline character (\n). The DOS/Windows end-of-line character is a carriage return, followed by a line feed/newline (\r\n).
To convert a UNIX file to DOS using sed (GNU sed 3.02.80 or later):
$ sed 's/$/\r/' UNIX_file > DOS_file
To convert a DOS file to UNIX file, use tr to remove the carriage return:
$ tr -d '\r' < DOS_file > UNIX_file
To accomplish the same thing using sed:
$ sed 's/^M//' DOS_file > UNIX_file
Note: To generate the ^M above, press Ctrl-V, then Ctrl-M.
To accomplish the same thing using vi:
Notice that some programs are not consistent in the way they insert line breaks, so you end up with some lines that have both a carriage return and a ^M, and some lines that have a ^M and no carriage return (and so blend into one). There are two steps to clean this up.
1. replace all extraneous ^M:
:%s/^M$//g
BE SURE YOU MAKE THE ^M USING "CTRL-V CTRL-M", NOT BY TYPING CARET THEN M! This expression replaces every ^M that sits at the end of a line with nothing. (The dollar sign anchors the search to the end of the line.)
2. replace all ^M's that still need to become line breaks:
:%s/^M/\r/g
Once again: BE SURE YOU MAKE THE ^M USING "CTRL-V CTRL-M", NOT BY TYPING CARET THEN M! This expression replaces every remaining ^M (one without a line break after it) with a newline (in a vim replacement, \r inserts a line break).
grep -v "^$" a.txt > newfilename
grep -v '^[ ]*$' a.txt > b.txt #removes lines containing only whitespace
sed '/^$/d' a.txt
sed 's/\t/ /g' oldcode.py >newcode.py
Simply using the vim command,
:retab
input file: a.txt :
Line 1
Line 2
WORD1
Line3
Line 4
WORD2
Line5
sed '/WORD1/,/WORD2/d' a.txt #delete the range (the WORD1/WORD2 marker lines are removed too)
Line 1
Line 2
Line5
sed -n '/WORD1/,/WORD2/p' a.txt #print the range (the marker lines are included)
WORD1
Line3
Line 4
WORD2
sed -e '1d' a.txt #skip 1st line
sed -e '1,5d' a.txt #skip first 5 lines
Probably the best way to get your feet wet with regular expressions is to see a few examples. All of these examples will be accepted by sed as valid addresses to appear on the left side of a command. Here are a few:
#!/bin/bash
# ALL HTML FILES
FILES="*.html"
# for loop read each file
for f in $FILES
do
INF="$f"
OUTF="$f.out.tmp"
# replace javascript
sed '/<script type="text\/javascript"/,/<\/script>/d' $INF > $OUTF
/bin/cp $OUTF $INF
/bin/rm -f $OUTF
done
sed -i 's/old-word/new-word/g' *.txt
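With GNU sed, giving -i a suffix keeps a backup of each original file (file names below are arbitrary):

```shell
# In-place edit with a .bak backup of the original (GNU sed syntax).
printf 'old-word here\n' > /tmp/demo.$$
sed -i.bak 's/old-word/new-word/g' /tmp/demo.$$
result=$(cat /tmp/demo.$$)
backup=$(cat /tmp/demo.$$.bak)
rm -f /tmp/demo.$$ /tmp/demo.$$.bak
echo "$result / $backup"
```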
http://www.ibm.com/developerworks/aix/library/au-speakingunix10/index.html?S_TACT=105AGX20&S_CMP=EDU
To find the shells available on your UNIX system, use the command cat /etc/shells. To change your shell to any of the shells listed, use the chsh command.
$history
~ refers to your home directory. A similar shorthand, ~username, refers to username's home directory.
Recursively copies the /path/to/lots/of/stuff directory to your current directory, preserving the original time and date stamps:
$ cp -pr /path/to/lots/of/stuff .
You can use /dev/null as a zero-length file to empty existing files or create new, empty file
cat /dev/null > file.txt
cp /dev/null file.txt
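The null command with a redirection does the same job; a sketch showing the file really ends up empty (path is arbitrary):

```shell
# Truncate an existing file to zero length with the null command.
printf 'data\n' > /tmp/trunc.$$
: > /tmp/trunc.$$               # equivalent to cat /dev/null > file
size=$(wc -c < /tmp/trunc.$$)
rm -f /tmp/trunc.$$
echo "$size"
```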
http://en.wikipedia.org/wiki/Find
http://www.ibm.com/developerworks/aix/library/au-productivitytips.html?S_TACT=105AGX20&S_CMP=EDU
finds all of the text documents in your home directory that contain the words Monthly Report:
find /home/joe -type f -name '*.txt' -print | xargs grep -l "Monthly Report"
Suppressing the message "permission denied"
find / -name "myfile" -type f -print 2>/dev/null
Erase the given type of files from a directory tree:
find /directory/where/to/delete -name '*.ext' -exec rm '{}' +
Make all .txt files in /tmp/ and subdirectories, writable by others
find /tmp/ -name '*.txt' -exec chmod o+w '{}' +
You have two lists. One list is a superset of the other list. You want to identify all of the items that exist *only* in the larger list. Here’s how you do that:
cat small_list >> largelist; sort largelist | uniq -u
comm exists to compare contents of two files, which should be sorted lexically. It has 3 columns available in its output — the lines only in file 1, the lines only in file 2, and the lines in both.
This will show you lines only in test1.txt:
comm -23 test1.txt test2.txt
This will show you lines only in test2.txt:
comm -13 test1.txt test2.txt
This will show you lines common to both files:
comm -12 test1.txt test2.txt
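A small end-to-end demonstration (comm needs lexically sorted input; file names are arbitrary):

```shell
# Two sorted lists: t1 has a,b,c and t2 has b,c,d.
printf 'a\nb\nc\n' > /tmp/t1.$$
printf 'b\nc\nd\n' > /tmp/t2.$$
only1=$(comm -23 /tmp/t1.$$ /tmp/t2.$$ | xargs)   # lines only in file 1
both=$(comm -12 /tmp/t1.$$ /tmp/t2.$$ | xargs)    # lines in both files
rm -f /tmp/t1.$$ /tmp/t2.$$
echo "$only1 / $both"
```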
http://www.ibm.com/developerworks/aix/library/au-speakingunix5.html?S_TACT=105AGX20&S_CMP=EDU
http://www.ibm.com/developerworks/aix/library/au-speakingunix3.html?S_TACT=105AGX20&S_CMP=EDU
ssh is the secure version of rsh, while scp and sftp are secure replacements for rcp and FTP, respectively.
There are 8 common manual page sections.
1. User commands (env, ls, echo, mkdir, tty)
2. System calls or kernel functions (link, sethostname, mkdir)
3. Library routines (acosh, asctime, btree, locale, XML::Parser)
4. Device related information (isdn_audio, mouse, tty, zero)
5. File format descriptions (keymaps, motd, wvdial.conf)
6. Games
7. Miscellaneous (arp, boot, regex, unix, utf8)
8. System administration (debugfs, fdisk, fsck, mount, renice, rpm)
Other sections that you might find include
9 for Linux kernel documentation,
n for new documentation,
o for old documentation,
l for local documentation.
Some entries appear in multiple sections:
mkdir in sections 1 and 2, tty in sections 1 and 4.
You can specify a particular section, for example, man 4 tty or man 2 mkdir
or you can specify the -a option to list all applicable manual sections.
The whatis command searches man pages for the name you give and displays the name
information from the appropriate manual pages.
The apropos command does a keyword search of manual pages and lists ones containing
your keyword
$ man -k cron
cron (8) - daemon to execute scheduled commands (Vixie Cron)
crontab (1) - maintain crontab files for individual users (V3)
crontab (5) - tables for driving cron
dh_installcron (1) - install cron scripts into etc/cron.*
The curl command-line utility can get and put data, so it's ideal for transferring local files to remote servers. Better yet, the underpinning of curl -- the libcurl library -- has a rich application programming interface (API) that allows you to interrogate all the features of curl directly into your own applications. The C, C++, PHP, and Perl programming languages are just four of the many languages that can leverage libcurl. If your system lacks curl and libcurl, you can download the source code from the libcurl home page.
Because curl can copy local files to remote servers, it's ideal for small backups. For example, Listing 2 shows a shell script that copies a directory full of database dumps to a remote FTP server for safekeeping.
Example: using curl to store database dumps remotely (the loop below is csh foreach syntax)
foreach db (mydns mysql cms tv radio)
/usr/bin/mysqldump -ppassword --add-drop-table -Q --complete-insert $db > $db.sql
end
find dbs -mtime -1 -type f -name '*.sql' -print | foreach file (`xargs`)
curl -n -T $file ftp://ftp1.archive.example.com
The curl -n command forces curl to read your .netrc file. The -T option tells curl to upload the named file(s) to the given URL. If you omit the target file name, curl simply reuses the name of the file being uploaded.
Use the && control operator to combine two commands so that the second is run only if the first command returns a zero exit status. In other words, if the first command runs successfully, the second command runs. If the first command fails, the second command does not run at all. For example:
~ $ cd tmp/a/b/c && tar xvf ~/archive.tar
In this example, the contents of the archive are extracted into the ~/tmp/a/b/c directory unless that directory does not exist. If the directory does not exist, the tar command does not run, so nothing is extracted.
Similarly, the || control operator separates two commands and runs the second command only if the first command returns a non-zero exit status. In other words, if the first command is successful, the second command does not run. If the first command fails, the second command does run. This operator is often used when testing for whether a given directory exists and, if not, it creates one:
~ $ cd tmp/a/b/c || mkdir -p tmp/a/b/c
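A sketch of both operators in one place (the /tmp paths are arbitrary):

```shell
# && runs the second command only on success; || only on failure.
dir=/tmp/demo.$$
mkdir -p $dir/x
cd $dir/x && marker=entered                      # cd succeeded, so marker is set
cd $dir/missing 2>/dev/null || marker2=fallback  # cd failed, so marker2 is set
cd /; rm -rf $dir
echo "$marker $marker2"
```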
The type perl command reveals how the perl command is interpreted on the command line. Here, /usr/local/bin/perl is the expansion. The type -a command reveals all instances of Perl that the shell is aware of, which depends largely on the PATH variable.
$ ps > state.`date '+%F'`
$ w >> state.`date '+%F'`
Typing the back tick operation each time is a hassle, though. You could replace the sequence with this:
$ file=state.`date '+%F'`
$ ps > $file
$ w >> $file
But that's only a little more efficient and still error prone, because it's rather easy to use > instead of >> in the second or subsequent command. The easiest way to capture the output of a series of commands is to combine them within braces ({ }).
$ { ps; w; } > state.`date '+%F'`
$ find / -name 'program.c' 2>/dev/null
Case-insensitive search:
$ find /home/david -iname 'index*'
Need to know where a variable or constant is defined in a large body of code?
$ find /path/to/src -type f | xargs grep -H -I -i -n string
The output of the command is a list of file names that contain string , including the line number and the specific text that matched. The -H and -n options preface each match with the file name and line number of each match, respectively. The -i option ignores case. -I (capital "I") skips binary files.
xargs runs the command you specify -- here, grep with all the listed options -- once for each argument provided through standard input. Assuming that the /path/to/src directory contains files a, b, and c, using find in combination with xargs is the equivalent of grep below
grep -H -I -i -n string a
grep -H -I -i -n string b
grep -H -I -i -n string c
grep -lr "foobar" /somefolder
In fact, searching a collection of files is so common that grep has its own option to recurse a file system hierarchy. Use -d recurse or its synonyms -R or -r. For example, use:
$ grep -H -I -i -n -R string /path/to/src
Exclude lines starting with #, then select the 3rd column and show the unique values in that column (sort first, since uniq only collapses adjacent duplicates):
grep -v '^#' filename | cut -d ' ' -f3 | sort | uniq
Show the line number (-n), quit after the first occurrence (-m 1), and print only the line number:
$ grep -n -m 1 find_me_pattern filename | cut -d":" -f1
8
Select all lines after line 8 (inclusive)
$ tail -n +8 filename
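The two steps above combine naturally: find the line number of the first match, then print everything from that line onward (file name and pattern are arbitrary; -m is a GNU grep option):

```shell
# Print from the first occurrence of a pattern to the end of the file.
printf 'a\nb\nfind_me\nc\n' > /tmp/f.$$
line=$(grep -n -m 1 find_me /tmp/f.$$ | cut -d: -f1)  # line number of first match
out=$(tail -n +$line /tmp/f.$$ | xargs)               # everything from there on
rm -f /tmp/f.$$
echo "$out"
```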
grep '#seq_group_name[[:space:]]Hs'
[[:space:]] above matches any white space including tabs
In general, if you need to type a character that the shell uses as a command (such as <TAB> or <ESC> in some shells), you can type
^V(Control-V) before the character to "escape" it like this:
$ grep "^V<TAB>" file ...
In bash, as you are typing the quoted regular expression, hit Ctrl-V then the TAB key to get the TAB in there.
In a shell script, getting a literal TAB into the quoted regular expression depends on the editor (Ctrl-Q in emacs).
As a last resort, use grep -f and use a pattern file.
cut -d" " -f2 foo.txt | sort | uniq -c
The UNIX kernel spawns the first process during the boot sequence.
The first process is called, appropriately enough, init, and the genealogy of all other system processes can be traced back to init. In fact, init's process number is 1. You can find the status of init by typing ps -l 1:
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 1 0 0 68 0 - 373 select ? 0:02 init [2]
As you can see, the owner (UID) of init is 0 (root). Unlike every other process in the system, init doesn't have a parent process -- the Parent Process ID (PPID) is 0
http://www.ibm.com/developerworks/linux/library/l-linux-process-management/index.html
http://www.ibm.com/developerworks/aix/library/au-Uwininstall.html?S_TACT=105AGX20&S_CMP=EDU
http://gcc.gnu.org/onlinedocs/gcc-4.2.4/gcc
http://www.ibm.com/developerworks/linux/library/l-gcc-hacks/index.html
http://www.ibm.com/developerworks/linux/library/l-gcc4/index.html
Take advantage of -O -Wall -W -Wshadow -pedantic or the equivalent for your compiler, that is, the compiler directive that reports most usefully about incorrect and questionable syntax
cc -c -O -Wall example1.c
Find all dirs scanned by gcc for compilation:
gcc -print-search-dirs
How to change the order of OSes in the menu:
Log in as root and edit /boot/grub/grub.conf (or menu.lst). You'll see a line saying something like "default 0". Change the 0 to 1 or 2 or whatever is needed; the number follows the order in which your Linux and Windows options are listed, so if Windows is the second option, use "default 1". Save the file and reboot.
http://www.linux-on-laptops.com/hosted/compaq-presario-c500-ubuntu.html
http://www.intellinuxgraphics.org/man.html
http://blogs.warwick.ac.uk/atrivedi/entry/using_915resolution_reply/
In UNIX-style operating systems, the package manager depends on your operating system. Common package managers include:
APT, which is found on systems that are based on Debian Linux. The many easy-to-use graphical interfaces include the popular Aptitude and Synaptic.
RPM, which is the Red Hat package manager. You can also install the automatic updater and package installer Yum, which enhances the ease of operation of RPM.
Ports, which is commonly found in BSD-type systems.
Portage, which is used by Gentoo Linux.
Installing Wi-Fi is also often quite difficult. The drivers were installed through ndiswrapper (copy the Windows driver bcmwl5.ini into Linux and point ndiswrapper at it as the driver).
It is also possible to install the drivers without ndiswrapper:
2.1 install bcm43xx-fwcutter
2.2 sudo bcm43xx-fwcutter -w /lib/firmware bcmwl5.sys (this Windows file also ships with the driver; copy it into Linux, since it is what the bcm43xx-fwcutter utility works with)
2.3 bring up the card:
sudo modprobe bcm43xx
sudo iwconfig ethX rate 11M (in my case the card came up on eth1)
sudo iwlist ethX scan
sudo iwconfig eth1 ap any
http://www.sysint.no/nedlasting/mbrfix.htm
MbrFix /drive <num> savembr <file> Save MBR and partitions to file
MbrFix /drive <num> restorembr <file> Restore MBR and partitions from file
MbrFix /drive <num> fixmbr {/vista} Update MBR code to W2K/XP/2003 or Vista
Drive numbering <num> starts on 0.
Partition numbering <part> starts on 1.
There is no concept of uninstalling a boot loader: if you could uninstall one, you would simply be left with an unbootable machine. So all you need to do is overwrite the disk with another boot loader you like, that is, install the new boot loader without uninstalling GRUB.
For example, if you want to install the boot loader for Windows, just run FDISK /MBR on Windows XP (not Vista)!.
On XP NTLoader can be repaired:
cd c:\
FIXBOOT C:
FIXMBR
BOOTCFG /rebuild
GRUB uses /boot/grub/menu.lst for the boot menu
NTLoader uses boot.ini http://support.microsoft.com/kb/289022/en-us
There is no Boot.ini file in Windows Vista.
Ways to modify the boot menu in Vista are:
bcdedit.exe, located in the Windows\system32\
http://www.sysint.no/en/Download.aspx
restoring Windows boot loader: http://support.microsoft.com/kb/927392
bootrec has the following options:
/FixMbr
The /FixMbr option writes a Windows Vista-compatible MBR to the system partition. This option does not overwrite the existing partition table. Use this option when you must resolve MBR corruption issues, or when you have to remove non-standard code from the MBR.
/FixBoot
The /FixBoot option writes a new boot sector to the system partition by using a boot sector that is compatible with Windows Vista. Use this option if one of the following conditions is true:
• The boot sector has been replaced with a non-standard Windows Vista boot sector.
• The boot sector is damaged.
• An earlier Windows operating system was installed after Windows Vista. In this scenario, the computer starts by using Windows NT Loader (NTLDR) instead of Windows Boot Manager (Bootmgr.exe).
/ScanOs
The /ScanOs option scans all disks for installations that are compatible with Windows Vista. Additionally, this option displays the entries that are currently not in the BCD store. Use this option when there are Windows Vista installations that the Boot Manager menu does not list.
/RebuildBcd
The /RebuildBcd option scans all disks for installations that are compatible with Windows Vista. Additionally, this option lets you select the installations that you want to add to the BCD store. Use this option when you must completely rebuild the BCD.
The MBR is in the first sector of our hard disks. One sector of a hard disk is 512 bytes in size.
The MBR contains three important parts:
the hard disk's 64-byte partition table,
the 2-byte 55aa signature that tells the BIOS this is a bootable device,
and the bootloader code itself, for which only 446 bytes of room remain.
No bootloader can actually fit into such a small space. When we say we are installing a bootloader to the Master Boot Record, we don't really mean that exactly. The bootloader only puts a small piece of code there, just enough to point the BIOS somewhere else on the disk where there is more room. The part that fits in the MBR is called the 'IPL' or 'stage one' of the bootloader. That's the only thing that gets changed, and it's the only thing that needs to be changed again now.
Normally the simplest place to put 'stage2', the main (functional) part of a bootloader is in an operating system partition. Your Windows NTLDR second stage lives in your Windows partition and Grub stage2 lives in your Ubuntu partition.
The main part of the bootloader, be it Grub, LiLo or NTLDR, is the part that actually does the real work of booting the operating system's kernel.
When we are dual or multi-booting, Stage2 also gives us a Menu which allows us to choose which operating system you want during boot-up. When you were using Grub, if you choose Windows, GRUB redirects the BIOS back to Windows boot sector and the NTLDR bootloader, to 'chainload' Windows. It works like a relay system.
The problem is when you delete the Ubuntu partition, that second, vital part of Grub will suddenly be gone. When you try to boot up, your MBR will be pointing to an empty space, and there will be nothing there to offer you a menu to choose Windows anymore either.
You'll get a black monitor background with white text on it: 'GRUB error 22'
Meaning, 'No such partition. This error is returned if a partition is requested in the device part of a device- or full file name which isn't on the selected disk.' Grub won't be there anymore and you won't be able to boot Windows or any other operating system you might have installed. You'll just have one of those black screens with the white typing on it and a blinking cursor.
You can avoid that situation by moving GRUB out of Ubuntu and installing it in its own partition: keep GRUB and make a dedicated GRUB partition.
Or you can prepare your MBR for deleting GRUB. This can be easily done by overwriting the 446 bytes of bootloader code in the MBR with code for another bootloader before you delete the Ubuntu partition, and GRUB along with it.
Well, don't worry if you have deleted Ubuntu already, you can still overwrite your Master Boot Record's IPL code later at any time.
You can replace Grub's MBR code with the equivalent code for NTLDR and make the MBR point directly to Windows like it used to by simply using the same software you used when Windows was installed in the first place. This will overwrite GRUB's version of the 'IPL' in your MBR (or 'boot sector'), and replace it with the Windows version again.
The way to do that is very simple, exactly the same way you did it the first time. (Remember?)
You just re-install Windows....Well, that is one way to do it, but it will take you a while. There are a couple of faster ways...
Windows XP 'Recovery Console'
Just put in your Windows XP install CD, and boot into the recovery console and use the so-called 'FIXMBR' command.
PostgreSQL backup command that works from the command line:
/usr/local/postgres/bin/pg_dumpall -U pgadmin | gzip -9 > \
/usr/local/db/backups/pgsql/pgdump.`/bin/date +"%Y%m%d-%H%M%S"`.gz
But when I put this into my crontab, I started getting error messages about an EOF being reached before finding a closing ‘`’.
The answer is not to get rid of backticks and call an external script. The answer is to simply escape the percent signs! From `man 5 crontab`:
Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
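So the pg_dumpall line above becomes valid in a crontab once every % is escaped; a sketch of the resulting entry (the 2 a.m. schedule is illustrative, paths are the ones from the example):

```shell
# crontab entry: each % must be written as \% or cron treats it as a newline
0 2 * * * /usr/local/postgres/bin/pg_dumpall -U pgadmin | gzip -9 > /usr/local/db/backups/pgsql/pgdump.`/bin/date +\%Y\%m\%d-\%H\%M\%S`.gz
```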
daemon(3) xinetd
Whatever must run at system startup gets a symlink to its startup script in /etc/rcN.d/; the scripts themselves live in /etc/init.d/
http://www.unixguide.net/unix/programming/1.7.shtml
http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
http://www.cyberciti.biz/tips/linux-write-sys-v-init-script-to-start-stop-service.html