Linux Basic Course Notes by Altnix
Index
1. Switching Terminals
2. Fdisk
3. Format
4. Runlevels
5. Symlinks & Hardlinks
6. Archiving & Compression
7. Daemons & Processes
8. File Permissions
9. Umask
10. Administrative commands and Low-level commands
11. Understanding the UNIX / Linux file system
12. FSTAB
13. Bash
14. Shell Scripting
15. RPM
16. User Administration
17. PAM
18. LVM
19. The Linux Schedulers
20. QUOTA
21. Kernel Compilation
22. Kernel Tuning
23. Networking
24. FTP
25. NFS
26. Network Info Service (NIS)
27. Installation of autofs
28. DHCP: Dynamic Host Configuration Protocol
29. TCP Wrappers
30. Xinetd
31. SAMBA
32. FIREWALL / IPTABLES
33. DNS
34. APACHE
35. SendMail
36. SQUID
37. Vi/Vim Examples
Linux Practicals
Switching Terminals
Linux has 6 ttys by default, which we call virtual consoles (VCs); the driver assigned to them is tty (/dev/tty*).
To switch out of the GUI use Ctrl+Alt+F1 (through F6); to switch between the text terminals use Alt+F2, Alt+F3, and so on.
To check the current terminal use the $tty command (or look at the TTY column of $ps).
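For example, on the second virtual console the output would typically look like this (the terminal number will vary):
$tty
/dev/tty2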
Basic Commands
$df -h -> same as my computer in windows
$fdisk -l -> list partitions
$man <cmd> -> manual
$clear -> clear the screen
$^l -> clear the screen
$ls -> list content
$ls -l -> list content in long listing format
$ls -al -> list all subcontent in long listing format
$ll -> an alias for the above
$ls -R -> list content recursively
$l. -> list hidden files
$ls -F -> list content and classify them
$alias -> display all aliases for current user
$alias <statement> -> make alias eg alias c='clear'
$unalias <alias> -> remove alias eg unalias c
$exit -> log out from the system
$logout -> log out from the system
$^d -> log out from the system
$tree -> list content in a tree (hierarchial) diagram
$tree -d -> list subdirectories only - no files
$tree -p -> list content with their permissions
$cd <directory> -> change directory to...
$cd .. -> change to parent directory
$cd - -> change to previous directory
$cd -> change to home directory
$cd ~ -> change to home directory
$pushd <dir> -> change directory and push it onto the directory stack (return with popd)
$cat -> display a content of a file
$pwd -> print work (current) directory
$pwd -P -> print the physical working directory, resolving symlinks
$mkdir <directory> -> make directory
$mkdir -p <directory> -> make parent directories also if it does not exist
$touch -> make a 0 byte file if it does not exist
$cp -> copy (for files)
$cp -a -> copy (for directories)
$cp -p -> copy and preserve date and time
$mv -> move OR rename
$rmdir -> remove empty directory
$rm -> remove (for files)
$rm -f -> remove forcefully ("")
$rm -r -> remove recursively (for directories)
$rm -rf -> remove recursively and forcefully ( " " )
$cat -> display content of the file
$cat -n -> display content of the file and number the lines
$cal -> display calendar for current month
$date -> display system date and time
$date -s '<value>' -> change system date and time in mm/dd/yy
$hwclock -> display the hardware clock
$hwclock --hctosys -> set the system time from the hardware clock
$ln -s -> make a soft/sym/symbolic link
$ln -> make a hard link
$history -> display the list of the last 1000 commands
$!100 -> run command 100 from history
$vi -> text editor
$vimtutor -> vi tutorial with exercises
$pico -> pico text editor
$mcedit -> mcedit text editor
$joe -> joe text editor
$aspell -c <filename> -> check the spelling in the file
$elinks -> text-based web browser
$file -> display the type of file
$which -> display the path of the binary
$whereis -> display the binary, source and man page paths of a command
$hostname -> display system name with domain
$id -> display id info of current user
$id -u -> display user id of current user
$id -un -> display username of current user
$id -g -> display group id of current user
$id -gn -> display groupname of current user
$uptime -> display for how long the system has been running
$tty -> display current terminal number
$users -> display the usernames of users currently logged in
$whoami -> display username of current user
$who -> display users logged in the system with their
respective terminals and time since logged in
$who am i -> display current user, terminal and login time
$w -> display who is logged in and what they are doing on each
terminal
http://www.oraclehome.co.uk/linux-commands.htm
$mkdir -p /opt/funny/test
$cd /opt/funny/test -- absolute path
$cd /opt/funny
$pwd
/opt/funny
$cd test -- relative path
Fdisk
Partitioning with fdisk
This section shows you how to actually partition your hard drive with the fdisk utility. Linux allows
only 4 primary partitions. You can have a much larger number of logical partitions by sub-dividing
one of the primary partitions. Only one of the primary partitions can be sub-divided.
Examples:
fdisk is started by typing (as root) fdisk device at the command prompt. device might be something
like /dev/hda or /dev/sda. The basic fdisk commands you need are:
p print the partition table
n create a new partition
d delete a partition
t change a partition's type
q quit without saving changes
w write the new partition table and exit
Changes you make to the partition table do not take effect until you issue the write (w) command.
Here is a sample partition table:
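The sample table itself did not survive in these notes. For a drive with the geometry described below, the header of the fdisk listing would look roughly like this (a reconstruction, not the original output):
Disk /dev/hda: 64 heads, 63 sectors, 621 cylinders
Units = cylinders of 4032 * 512 bytes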
The first line shows the geometry of your hard drive. It may not be physically accurate, but you can
accept it as though it were. The hard drive in this example is made of 32 double-sided platters with
one head on each side (probably not true). Each platter has 621 concentric tracks. A 3-dimensional
track (the same track on all disks) is called a cylinder. Each track is divided into 63 sectors. Each
sector contains 512 bytes of data. Therefore the block size in the partition table is 64 heads * 63
sectors * 512 bytes, divided by 1024, i.e. 2016 one-kilobyte blocks per cylinder.
The start and end values are cylinders.
$fdisk /dev/hdxx
n create a new partition
press Enter to accept the default first cylinder
define the size as +100M at the last cylinder prompt
w write and quit
$sync
$partprobe -s /dev/hdxx
partprobe rereads the partition table and updates the kernel's table;
-s shows the output.
Format
For ext2 format use $mke2fs; for ext3 use $mke2fs -j
$ mke2fs -j /dev/hdxx
-j stands for journaling, as ext3 is a journaling filesystem.
$mkdir /newdir
$mount /dev/hdxx /newdir
For a permanent mount use fstab:
$vi /etc/fstab
Append: /dev/hdxx /newdir ext3 defaults 0 0
Ext2 vs Ext3
At some point in your install, you'll probably want to switch filesystem types. In the base install,
you're only given a choice of ext2 (short for ext2fs, or "second extended filesystem", the
"standard" UNIX filesystem). Ext3fs is the same as ext2, but provides journaling. For those as
sketchy on filesystem types as I am, it seems to be pretty basic. In the README on the original ext3
download page, the author answers the journaling question:
Q: What is journaling?
A: It means you don't have to fsck after a crash. Basically.
This is useful, because it means that every time your screen whites out
and crashes while you are choosing the right video card, you
don't have to sit through an entire filesystem check of every inode. The
filesystem still fscks itself every X mounts or Y days, but doesn't put
you through the entire wait every time you crash it. To convert
partitions to the ext3 filesystem, you need to cleanly unmount them,
boot something else (like the CD you installed from), and then, on a
console, do:
tune2fs -j /dev/hdaX
That's it. When you reboot, your partitions should come up as ext3.
Runlevels
0 Halt
1 Single-user mode
2 Multi-user mode, without networking (NFS)
3 Full multi-user mode (text console)
4 Not used/User-definable
5 Full multi-user mode with X11 (graphical login)
6 Reboot
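To see the current (and previous) runlevel and to switch between runlevels on a SysV-style system, something like the following works; the default runlevel is set in /etc/inittab:
$runlevel
N 5
$init 3
$grep initdefault /etc/inittab
id:5:initdefault: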
Symlinks & Hardlinks
Files are arranged in directories (or folders if you prefer that term), and each file can be reached
through a series of directories and sub-directories from the root - correct? Yes ... BUT ... there are
some times that the same file can be reached through several names, and on Unix and Linux systems
this is known as a "link".
There are two ways a link can be set up.
Hard Link
A Hard Link is where a file has two names which are both on
an equal weighting, and both of the file names in the "inode
table" point directly to the blocks on the disc that contain the
data. See diagram to the left.
Symbolic Link
A Symbolic Link is where a file has one main name, but there's an extra entry in the file name table
that refers any accesses back to the main name. This is slightly slower at runtime than a hard link, but
it's more flexible and much more often used in day-to-day admin work.
Symbolic links are set up using the ln command with the -s option - so for example
ln -s ab.txt cd.txt
will set up a new name cd.txt that points to the (existing) file ab.txt. If you do a long listing (ls -l) of a
directory that contains a symbolic link, you'll be told that it's a symbolic link by an "l" in the first
column, and you'll be told where the file links to in the file name column. Very easy to spot!
Soft Links :
1. The link and the original file have different inode numbers.
2. The ls -l command shows the link with a link count (second column) of 1, and the listing shows which original file the link points to.
3. The link stores the path of the original file, not its contents.
4. Removing a soft link doesn't affect anything, but if the original file is removed the link becomes a dangling link that points to a nonexistent file.
In a softlink the inode is different, and the linked file is effectively a shortcut to the first file.
Hard Links :
1. All links have the same inode number.
2. The ls -l command shows the number of links in the link count (second) column.
3. Each link holds the actual file contents.
4. Removing any one link just reduces the link count but doesn't affect the other links.
In a hardlink the inode is the same and the names are independent of each other.
A soft link can point to a directory, but a hard link can't. Hard links must be created within the same file
system, while soft links can cross file systems.
Linking across file systems with hard links would lead to confusing references for UNIX or
Linux. For example, consider the following scenario:
Output:
$ ls -i file1
782263
$ ls -i file2
782263
As you can see, the inode number is the same for the hard link file called file2 in the inode
table under the /home file system. Now if you could create a hard link on the /tmp
file system it would lead to confusing references for the UNIX or Linux file system:
is that link number 782263 in the /home or the /tmp file system? To avoid this
problem UNIX and Linux do not allow creating hard links across file system
boundaries.
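If you actually try it (assuming /tmp is on a separate file system from /home), ln refuses; the exact wording varies by version, but the failure looks roughly like this:
$ ln file1 /tmp/file1-hard
ln: creating hard link `/tmp/file1-hard' => `file1': Invalid cross-device link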
Practical
$mkdir /opt/new-file
$mkdir /usr/local/link-file
$vi /opt/new-file/abc
Append some content & save the above file
Now create a softlink for abc as xyz under /usr/local/link-file
$pushd /usr/local/link-file
$pwd
$ln -s /opt/new-file/abc xyz
Or
If you want to create the symlink from /home instead, then:
$pushd /home
$ln -s /opt/new-file/abc /usr/local/link-file/xyz
Now check with the following; also note that symlink files always show 777 permissions:
$ll | grep ^l
Also check the size of both files; the difference is self-explanatory.
Now append some data to the xyz file and you will see the same data under abc.
Now try removing the parent file, in our case "abc":
$rm -rf /opt/new-file/abc
Now verify the symlink:
$ll /usr/local/link-file/
Your file has a broken symlink, so it is called orphaned.
So whenever you delete a parent file the symlink is affected, but deleting the softlink has no effect on the parent.
Softlink files have a different inode from the parent.
Softlinks can also cross partitions.
Now, what if you want to run a binary from a different path and with a different name?
$which mount
$ln -s /sbin/mount /opt/mapping
$pushd /opt/
$./mapping
$mkdir /opt/hard-link
$pushd /opt/hard-link
Create a new file named file1 and append data:
$echo "This is a new file" > file1
$cat file1
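The step that actually creates the hard link seems to have been dropped from these notes; a minimal continuation would be:
$ln file1 file2
$ls -il file1 file2
Both names now show the same inode number and a link count of 2.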
Also try creating 2 or 3 links for a single parent file, both as softlinks and as hardlinks.
More
Today we're going to test your virtual imagination ability! You're probably familiar with shortcuts in
Microsoft Windows or aliases on the Mac. Linux has something, or actually some things similar,
called hard links and symbolic links.
Symbolic links (also called symlinks or softlinks) most resemble Windows shortcuts. They contain a
pathname to a target file. Hard links are a bit different. They are listings that contain information
about the file. Linux files don't actually live in directories. They are assigned an inode number,
which Linux uses to locate files. So a file can have multiple hardlinks, appearing in multiple
directories, but isn't deleted until there are no remaining hardlinks to it. Here are some other
differences between hardlinks and symlinks:
Hardlinks
Let's do a little experiment to demonstrate the case. Make a new directory called Test and then move
into it. to do that, type:
$ mkdir Test
$ cd Test
Then make a file called FileA:
$ vi FileA
Press the I key to enter Insert mode:
i
Then type in some funny lines of text (like "Why did the chicken cross the road?") and save the file
by typing:
Esc
ZZ
So, you made a file called FileA in a new directory called "Test" in your /home. It contains an old
and maybe not so funny joke. Now, let's make a hardlink to FileA. We'll call the hardlink FileB.
$ ln FileA FileB
Then use the "i" argument to list the inodes for both FileA and its hardlink. Type:
$ ls -il FileA FileB
This is what you get:
1482256 -rw-r--r-- 2 sadhiq sadhiq 21 May 5 15:55 FileA
1482256 -rw-r--r-- 2 sadhiq sadhiq 21 May 5 15:55 FileB
You can see that both FileA and FileB have the same inode number (1482256). Also both files have
the same file permissions and the same size. Because that size is reported for the same inode, it does
not consume any extra space on your HD!
Next, remove the original FileA:
$ rm FileA
And have a look at the content of the "link" FileB:
$ cat FileB
You will still be able to read the funny line of text you typed. Hardlinks are cool.
Symlinks
Staying in the same test directory as above, let's make a symlink to FileB. Call the symlink FileC:
$ ln -s FileB FileC
Then use the i argument again to list the inodes.
$ ls -il FileB FileC
This is what you'll get:
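The listing itself is missing here; reconstructed from the inode numbers and sizes quoted elsewhere in this section, it would look roughly like:
1482256 -rw-r--r-- 1 sadhiq sadhiq 21 May 5 15:55 FileB
1482226 lrwxrwxrwx 1 sadhiq sadhiq 5 May 5 16:22 FileC -> FileB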
You'll notice the inodes are different and the symlink got an "l" before the rwxrwxrwx. The link has
different permissions than the original file because it is just a symbolic link. Its real content is just a
string pointing to the original file. The size of the symlink (5) is the size of that string. (The "-> FileB"
at the end shows you where the link points to.)
$ cat FileB
$ cat FileC
$ rm FileB
$ ls
You'll see the symlink FileC is still there, but if you try to list the contents:
$ cat FileC
It will tell you that there is no such file or directory. You can still list the inode. Typing:
$ ls -il FileC
will still give you:
1482226 lrwxrwxrwx 1 sadhiq sadhiq 5 May 5 16:22 FileC -> FileB
But the symlink is obsolete because the original file was removed, as were all the hard links. So the
file was deleted even though the symlink remains. (Hope you're still following.)
OK. The test is over, so you can delete the Test directory:
$ cd ..
$ rm -rf Test (r stands for recursive and f is for force)
Note: Be cautious using "rm -rf"; it's very powerful. If someone tells you to do "rm -rf /" as root, you
might lose all your files and directories on your / partition! Not good advice.
Now you know how to create (and remove) hardlinks and symlinks to make it easier to access files
and run programs. See you on the links!
Archiving & Compression
Archiving means that you take 10 files and combine them into one file, with no difference in size. If
you start with 10 100KB files and archive them, the resulting single file is 1000KB. On the other
hand, if you compress those 10 files, you might find that the resulting files range from only a few
kilobytes to close to the original size of 100KB, depending upon the original file type.
All of the archive and compression formats in this chapter (zip, gzip, bzip2, and tar) are popular.
Zip
zip is probably the world's most widely used format. That's because of its almost universal use on
Windows, but zip and unzip are well supported among all major (and most minor) operating
systems.
Gzip
gzip was designed as an open-source replacement for an older Unix program, compress. It's found on
virtually every Unix-based system in the world, including Linux and Mac OS X, but it is much less
common on Windows. If you're sending files back and forth to users of Unix-based machines, gzip is
a safe choice.
Bzip2
The bzip2 command is the new kid on the block. Designed to supersede gzip, bzip2 creates smaller
files, but at the cost of speed. That said, computers are so fast nowadays that most users won't notice
much of a difference between the times it takes gzip or bzip2 to compress a group of files.
Practical
zip both archives and compresses files, thus making it great for sending multiple files as email
attachments, backing up items, or for saving disk space.
Create
$mkdir -p /opt/test/zip_dir;cd /opt/test/zip_dir
Append man pages to a file
$man ls > file-ls;cat /etc/fstab > file-fstab;cat /root/anaconda.cfg > file-anaconda
$ls -lh
$ls -al
Zip the files to man-file.zip
$zip man-file.zip *
$ls -lF
$ man ls > file-ls.txt;cat /etc/fstab > file.txt;cat /root/anaconda.cfg > file-anaconda.txt;man fdisk >
file1.cfg;man fstab > fstab.cfg;man man > man.cfg
Try compressing the files created using zip and verify the size of moby.zip files
$ zip -0 moby.zip1 *.txt
$ ls -l
$ zip -1 moby.zip2 *.cfg
$ ls -l
$ zip -9 moby.zip3 *.cfg
$ ls -l
List the archive contents:
$unzip -l man-file.zip
$unzip -ql man-file.zip
Verbose listing:
$unzip -v moby.zip2
Test the archive:
$unzip -t moby.zip2
Now try the following, in the same way as with zip, under any directory:
gzip paradise_lost.txt
$ ls -l
$ ls -l
$ gzip -c paradise_lost.txt > paradise_lost.txt.gz
$ gzip -c -1 moby-dick.txt > moby-dick.txt.gz
$ ls -l
$ gzip -c -9 moby-dick.txt > moby-dick.txt.gz
$ ls -l
$ gzip -t paradise_lost.txt.gz
$ gunzip -c paradise_lost.txt.gz > paradise_lost.txt
$ bzip2 moby-dick.txt
$ ls -l
$ bzip2 -c moby-dick.txt > moby-dick.txt.bz2
$ ls -l
$ bzip2 -c -1 moby-dick.txt > moby-dick.txt.bz2
$ bzip2 -c -9 moby-dick.txt > moby-dick.txt.bz2
$ ls -l
$ bunzip2 moby-dick.txt.bz2
$ bunzip2 -c moby-dick.txt.bz2 > moby-dick.txt
$ bunzip2 -t moby-dick.txt.bz2
Book zip -0 zip -1 zip -9
Moby-Dick 0% 54% 61%
Paradise Lost 0% 50% 56%
Job 0% 58% 65%
Total (in bytes) 1848444 869946 747730
Because gzip is the most common compression program used in concert with tar, we'll focus on that.
You could do it this way:
$mkdir -p /mnt/common/moby-dick
$cd /mnt/common/moby-dick
$ man ls > file-ls.txt;cat /etc/fstab > file.txt;cat /root/anaconda.cfg > file-anaconda.txt;man fdisk >
file1.cfg;man fstab > fstab.cfg;man man > man.cfg
$cd ..
$pwd
/mnt/common/
$ ls -l moby-dick/*
$ tar -cf - moby-dick/ | gzip -c > moby1.tar.gz
$ ls -l
That method works, but it's just too much typing! There's a much easier way that should be your
default. It involves two new options for tar: -z (or --gzip), which invokes gzip from within tar so you
don't have to do so manually, and -v (or --verbose), which isn't required here but is always useful, as
it keeps you notified as to what tar is doing as it runs.
$ ls -l moby-dick/*
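The combined command itself is missing from these notes; with the -z and -v options just described it would be:
$ tar -zcvf moby1.tar.gz moby-dick/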
$ ls -l
The usual extension for a file that has had the tar and then the gzip commands used on it is .tar.gz;
however, you could use .tgz and .tar.gzip if you like.
Note - It's entirely possible to use bzip2 with tar instead of gzip. Your command would look like this
(note the -j option, which is where bzip2 comes in):
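The command did not survive here; following the same pattern as the gzip example, it would be:
$ tar -jcvf moby1.tar.bz2 moby-dick/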
In that case, the extension should be .tar.bz2, although you may also use .tar.bzip2, .tbz2, or .tbz.
Yes, it's very confusing that using gzip or bzip2 might both result in a file ending with .tbz. This is a
strong argument for using anything but that particular extension to keep confusion to a minimum.
Test Files That Will Be Untarred and Uncompressed
Before you take apart a tarball (whether or not it was also compressed using gzip), it's a really good
idea to test it. First, you'll know if the tarball is corrupted, saving yourself hair pulling when files
don't seem to work. Second, you'll know if the person who created the tarball thoughtfully tarred up
a directory containing 100 files, or instead thoughtlessly tarred up 100 individual files, which you're
just about to spew all over your desktop.
To test your tarball (once again assuming it was also zipped using gzip), use the -t (or --list) option.
This tells you the permissions, ownership, file size, and time for each file. In addition, because every
line begins with moby-dick/, you can see that you're going to end up with a directory that contains
within it all the files and subdirectories that accompany the tarball, which is a relief.
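The test command itself is not shown at this point in the notes; it is the same one used a little further down, with the archive name from this practical:
$ tar -zvtf moby1.tar.gz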
Be sure that the -f is the last option, because after it you specify the name of the .tar.gz
file. If you don't, tar complains.
Now that you've ensured that your .tar.gz file isn't corrupted, it's time to actually open it up, as you'll
see in the following section.
Note - If you're testing a tarball that was compressed using bzip2, just use this command instead:
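The bzip2 variant is missing from the notes; by analogy with the gzip version it would be:
$ tar -jvtf moby.tar.bz2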
To create a .tar.gz file, you used a set of options: -zcvf. To untar and uncompress the resulting file,
you only make one substitution: -x (or --extract) for -c (or --create).
$ ls -l
$ tar -zxvf moby.tar.gz
$ ls -l
Make sure you always test the file before you open it, as covered in the previous section, "Test Files
That Will Be Untarred and Uncompressed." That means the order of commands you should run will
look like this:
$ tar -zvtf moby.tar.gz
$ tar -zxvf moby.tar.gz
Note - If you're opening a tarball that was compressed using bzip2, just use this command instead:
$ tar -jxvf moby.tar.bz2
$ tar -tvf ramu.tar.gz
Now check
untar in a different directory
If you've got a gzipped tarball and you want to untar it in a directory other than the one you're in, do
the following:
$ cd /mnt/backup
$ zcat reports.tar.gz | ( cd ./otherdir; tar zxvf - )
$ ls
Understand the above command; note that "-" (standard input) is used as the archive argument to tar.
Extract individual files from a tarball
If you need a file that you've put into a tarball and you don't want to extract the whole file, you can
do the following.
First, get a list of the files and find the one you want
$ cd /mnt/common/moby-dick
$ tar -ztvf moby1.tar.gz
Then extract the one you want
$ tar -zxvf moby1.tar.gz moby-dick/file-anaconda.txt
Note: before attempting the following you have to create a tar file named cbz.tar containing the files you are
going to extract.
$ tar -xf cbz.tar --wildcards --no-anchored 'pic*'
-x: instructs tar to extract files.
-f: specifies filename / tarball name.
-v: Verbose (show progress while extracting files).
-j : filter archive through bzip2, use to decompress .bz2 files.
-z: filter archive through gzip, use to decompress .gz files.
--wildcards: instructs tar to treat command line arguments as globbing patterns.
--no-anchored: informs it that the patterns apply to member names after any / delimiter.
PATH is an environmental variable in Linux and other Unix-like operating systems that tells the
shell which directories to search for executable files (i.e., ready-to-run programs) in response to
commands issued by a user. It increases both the convenience and the safety of such operating
systems and is widely considered to be the single most important environmental variable.
Environmental variables are a class of variables (i.e., items whose values can be changed) that tell
the shell how to behave as the user works at the command line (i.e., in a text-only mode) or with
shell scripts (i.e., short programs written in a shell programming language). A shell is a program that
provides the traditional, text-only user interface for Unix-like operating systems; its primary function
is to read commands that are typed in at the command line and then execute (i.e., run) them.
Login as root
$id
$echo $PATH
$useradd john
$passwd john
$su - john
$id
Verify john's PATH
$echo $PATH
You can't find :/sbin:/usr/sbin in it, so you can't run commands such as fdisk or shred from this account by name alone:
$fdisk -l
You will get "command not found".
Now try it with the full path:
$ /sbin/fdisk -l
Note: the command now runs, but the fdisk binary will only do real work for uid 0 (root), because it is
programmed that way.
So search for the command in /sbin and /usr/sbin; commands there can be run by other uids if called with the full path.
Then add the /opt directory to the user's PATH as mentioned above, or copy the script into a directory
that is already in the PATH. For example:
$PATH=$PATH:/opt
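These notes assume a small script called /opt/testscript was created earlier; that step is missing, so here is a minimal stand-in using the same name:
$ cat > /opt/testscript << 'EOF'
#!/bin/bash
echo "testscript found on the PATH"
EOF
$ chmod +x /opt/testscript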
$cd /
$testscript
or
$cp /opt/testscript /bin or /usr/local/bin etc...
Now try running the script
$cd /
$testscript
Daemons & Processes
Application daemons are those which can be killed with no
effect on the system.
System daemons are those which, if killed, will affect the
system.
$ kill -9 <sysd-pid>
OBJECTIVES
Defining a process
Process states
Process management
Job control
System Information
Performance Related Information
What is a Process?
A process has many components and properties
exec thread
PID
priority
memory context
environment
file descriptors
security credentials
Process Ancestry
init is the first process started at boot time - always has PID 1.
Except init, every process has a parent.
Processes can be both a parent and a child at the same time.
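You can see this ancestry with pstree; the exact tree depends on what is running, and the PIDs below are just placeholders, but the top of the output looks roughly like:
$ pstree -p | head -3
init(1)-+-crond(2115)
        |-sshd(2201)---sshd(2455)---bash(2457)
        `-syslogd(1890)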
Understand the Multiuser Environment.
One of the goals of UNIX was to enable a number of users to use the
system simultaneously (multiuser capability). Because several users
might also want to use several different programs simultaneously,
mechanisms must be available to allow these programs to run
simultaneously (multitasking capability).
On a single-processor system a multiuser and multitasking system only appears
to run everything simultaneously; truly simultaneous execution is only
possible on a multiprocessor system. Because processes must share the
processor, a badly behaved process can hog it, leaving other
processes without processing time and blocking the system. The
operating system coordinates access to the resources available in
the system (hard drives, tapes, interfaces). If there is
competition among processes, e.g., for access to a tape device,
only one process can be granted access. The others must be
rejected. This coordination task is very complex and no operating
system is able to implement an ideal solution. The classic problem
involves a situation in which two or more processes exclusively
need the same resources, as illustrated in the following resource
conflict:
Multithreading is an extension of multitasking and helps solve
this problem: within one process, a number of mutually independent
execution paths (threads) can be created.
New child processes, by contrast, are created with the fork system call.
Each child starts with an identical but independent copy of the parent's RAM
image, and, being a separate process, it has its own unique PID.
The child then calls the system call exec using the command name
and arguments inherited from the parent. From this point on, the
child and parent can go their separate ways. However, since they
both have access to the same open files and pipes, there is a
potential for communication between them (interprocess
communication).
The shell is the parent of most of your processes. The shell itself is a process:
the principal process you interact with is the shell. The shell can run some
commands (builtins) itself, but for most commands it forks a separate process.
It usually waits for the command process to finish and then gives you a new
shell prompt.
What if you could tell the shell not to wait? You could then
instruct the shell to do something else while the first command was
running in the background. Voila! Multiprocessing in action!
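A quick illustration (the job number and PID shown are just placeholders; any long-running command will do):
$ sleep 60 &
[1] 2345
$ jobs
[1]+  Running                 sleep 60 &
The shell prints the job number and PID and immediately gives you a new prompt while sleep runs in the background.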
$ uptime
18:18:16 up 3 days, 7:37, 5 users, load average: 0.00, 0.00, 0.00
This tells you exactly how long your system has been running. The three
load average figures are the averages over the last 1, 5 and 15 minutes:
load average: 0.00, 0.00, 0.00
$ cat /proc/meminfo
/proc is a virtual filesystem that exposes kernel and process information.
$ top
Process states shown by top include:
Runnable
Stopped
Page Wait
Non-interruptible wait
OPTIONS
-q Renice top to -20 so that it will run faster. This can be used
when the system is being very sluggish to improve the possibility
of discovering the problem.
-d count Show only count displays, then exit. A display is
considered to be one update of the screen. This option allows the
user to select the number of displays he wants to see before top
automatically exits. For intelligent terminals, no upper limit is
set. The default is 1 for dumb terminals.
-s time Set the delay between screen updates to time seconds. The
default delay between updates is 5 seconds.
INTERACTIVE MODE
h or ? Display a summary of the commands (help screen). Version
information is included in this display.
Q Quit top.
K Send a signal ("kill" by default) to a list of processes
R Change the priority (the "nice") of a list of processes.
S Change the number of seconds to delay between displays (prompt
for new number).
O Change the order in which the display is sorted. This command
is not available on all systems. The sort key names vary from
system to system but usually include: "cpu", "res", "size", "time".
The default is cpu.
THE DISPLAY
PID The process ID of each running process.
USER The owner of the process.
PRI Current priority of the process.
NICE Nice amount in the range -20 to 20, as established by the use
of the command nice.
RES Resident memory: current amount of process memory that resides
in physical memory, given in kilobytes.
STATE Current state (typically one of "sleep", "run", "idl",
"zomb", or "stop").
TIME Number of system and user cpu seconds that the process has
used.
SIZE Amount of memory the process needs
CPU Percentage of available cpu time used by this process.
COMMAND Name of the command that the process is currently running
For BSD formats and when the stat keyword is used, additional
characters may
be displayed:
< high-priority (not nice to other users)
Free
$ free -m
$ free -c 5 -s 3
$ free -m
total used free shared buffers cached
Mem: 1003 981 22 0 91 688
-/+ buffers/cache: 201 802
Swap: 1058 0 1058
As you can see, my system has 1 GB of ram and 981 MB are in use
leaving 22MB free. If you look at the cached column, it shows 688
MB free. This is a good thing as cached memory is basically free
memory. This is where programs a user may have used earlier and
then quit are stored, just on the off chance that the user might
start up the program again. On the other hand, if the user starts
up a new program, this cache can be replaced for the new program
that is running. It should be mentioned that the caching works not
just for recently loaded programs but also for data, i.e. recently
used files and directories. Program loading is just a special case
of loading a file.
The -/+ buffers/cache section shows you what is really going
on. In my example, it shows that only 201 MB are in use and that
802 MB are free. The rest is just cached. What a user really needs
to worry about is that last line. If you start seeing the swap file
go into use that means that you are out of free ram and you are now
using space on your hard disk to help out. If this starts
happening, the best thing to do is run the top command and see what
is taking up all the memory. Then, if it is an unneeded program,
shut it down.
Signals
The signal system call is much more diverse. When a signal occurs,
the kernel checks to see if the user had executed a signal system
call and was therefore expecting a signal. If the call was to ignore
the signal, the kernel simply returns.
Otherwise, it checks to see whether it was a trap or kill signal; if not,
it processes the signal. If it was a trap or kill signal, the kernel
checks to see whether a core dump should be produced and then calls the exit
routine to terminate the user process.
$kill -l
SIGHUP Hang-up
SIGINT Interrupt
SIGQUIT Quit
SIGILL Illegal Instruction
SIGTRAP Trace Trap
SIGKILL Kill
SIGSYS Bad argument to system call
SIGPIPE Write on pipe with no one to read it
SIGTERM Software termination signal from kill
SIGSTOP Stop signal
See /usr/include/sys/signal.h
Signal Acceptance
Ignore it
Process it
Terminate
Process Termination
Process Cleanup
This happens when the parent issues the wait system call, which
returns the
terminated child's PID
$ kill -9 <PID>
$ kill -l -> lists all available signals
$ killall <processname>
$ pidof <processname>
$ pgrep <processname>
$ pkill <processname>
Job Control
Option Explanation:
l Shows the job number, its PID, its status, and its name
p Shows just the PID of running jobs
Issuing the jobs command without any options will show a list of
all running, stopped and
suspended background jobs.
An example of using the job command is illustrated below:
$ jobs -l
[1]- 1229 Running tail -n5 -f /var/log/secure
[2]+ 1230 Stopped joe fred
In the above example there are two jobs in the background, one
running and one stopped.
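To move those jobs between foreground and background you would use the job numbers shown above (a quick sketch; %1 and %2 refer to jobs 1 and 2):
$ fg %2        # bring the stopped joe session back to the foreground
$ bg %2        # or resume it in the background instead
$ kill %1      # terminate the running tail job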
File Permissions
1. The permissions that apply to the owner of the file. The owner
of a file is by default the user that created the file.
2. The permissions that apply to all members of the group that is
associated with the file.
3. The permissions that apply to all other users on the system.
4. Permissions can only be changed by the owner, and root of
course.
read allows the user to read the contents of the file, for instance
with cat or less.
write allows the user to modify the contents of the file.
For a directory:
read allows the user to view the contents of the directory, for
instance with ls.
write allows the user to create and remove files in the directory.
execute allows the user to use this directory as its current
working directory. In other words: it allows the user to cd into it.
r - read
w - write
x – execute
• u for the owner (user) of the file
• g for the group assigned to the file
• o for all other users
• a for all (owner+group+others)
Symbolic way
$ useradd sachin
$ passwd sachin
$ useradd dhoni
$ passwd dhoni
$ groupadd market;usermod -G market dhoni
$ useradd shewag
$ passwd shewag
$ usermod -G market shewag (the market group already exists)
$ mkdir /opt/perm/;touch /opt/perm/file{1..6}
$ mkdir /opt/perm/{data1,data2}
$ cd /opt/perm
$ ll -d data1
drwxr-xr-x 2 root root 4096 Jul 29 20:15 data1
$ chown sachin data1
$ ll -d data1
$ chgrp market data1
$ ll -d data1
$ chmod u-w data1
$ ll -d data1
$ chmod g+w data1
$ ll -d data1
$ chmod o+w,o-rx data1
$ ll -d data1
$ ll -d data2
drwxr-xr-x 2 root root 4096 Jul 29 20:15 data2
$ chown -Rv sachin.market data2
$ ll -d data2
$ chmod u-rwx data2
$ ll -d data2
$ chmod g+w,g-x data2
$ ll -d data2
$ chmod -Rv o+w,o-r data2
$ ll -d data2
Octal way
$ ll file1
-rw-r--r-- 1 root root 0 Jul 29 20:15 file1
$ chmod 777 file1
$ ll file1
$ chmod 666 file2
$ ll file2
$ chmod 467 file3
$ ll file3
$ chmod 541 file4
$ ll file4
$ chmod 724 file5
$ ll file5
$ chmod 000 file6
$ chmod 0 file6
$ ll file6
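Each octal digit is just the sum of read (4), write (2) and execute (1) for user, group and others, so you can decode the modes used above; for example:
chmod 724 file5  ->  7 = 4+2+1 = rwx,  2 = -w-,  4 = r--,  i.e. rwx-w-r--
chmod 467 file3  ->  4 = r--,  6 = rw-,  7 = rwx,  i.e. r--rw-rwx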
Umask
User Mask
New files should not be created with 666! To avoid this problem a
permission mask exists. It is obviously important to know with what
permissions new files and directories are created. Under Linux,
it’s not really easy to tell, since the default permissions can be
modified by setting a umask (with the umask command).
So, with a umask of 022, the default permissions for a file
become 644 (rw-r--r--, 666-022) and the default permissions for a
directory become 755 (rwxr-xr-x, 777-022).
• Red Hat assigns a umask of 002 to regular users, and 022 to root.
• SUSE assigns a umask of 022 to all users, including root.
- What is your current default permission (umask)
- How do you set your default permission?
- Umask defines what permissions, in octal, cannot be set
- Umask stands for user file creation mode mask
- In essence, system sets the default permission on the file and
directory
- If there were no umask, the default permissions
would be "777"
- Usually set in a login script
- it is the inverse of the normal octal permissions
- "umask -S" shows your umask in symbolic form
- linux removes the "x" permissions (or the 1) so 777 is the same
as 666
Normally, you can subtract the umask from 666 for files, but be very careful, as for
directories the base is 777. In Fedora Linux the base for files is 666, but let's test it out...
--> View the current umask setting
$umask
0022
For a directory the default permissions work out as:
  777
- 022
------
  755
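A quick demonstration of the effect (file sizes and timestamps are omitted; the exact listing will differ on your system):
$ umask
0022
$ touch newfile; mkdir newdir
$ ls -ld newfile newdir
-rw-r--r-- 1 root root ... newfile
drwxr-xr-x 2 root root ... newdir
$ umask 027
$ touch file2; ls -l file2
-rw-r----- 1 root root ... file2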
I. Symbolic way
1. Give 704 to the abc file
2. Give 417 to the abc file
3. Give 006 to the abc file
4. Give 707 to the abc file
II. Octal way
1. Change r-xrw-r-x to octal mode on abc: chmod 565
2. Change --xr-xr-- to octal mode on abc: chmod 154
3. Change rw----rwx to octal mode on abc: chmod 607
4. Change ---r-x--- to octal mode on abc: chmod 050
III. Symbolic way
1. Change r-xrw-r-x to rw--wxrwx on abc: chmod u+w,u-x,g-r,g+x,o+w
2. Change --xr-xr-- to rwxrwxrw- on abc: chmod u+rw,g+w,o+w
3. Change rw----rwx to --x----wx on abc: chmod u-rw,u+x,o-r
4. Change ---r-x--- to rwx-w-rwx on abc: chmod u+rwx,g-rx,g+w,o+rwx
Administrative commands and Low-level commands
Low-level
/bin This directory contains executable programs which are needed in
single user mode and to bring the system up or repair it.
Administrative
/sbin Like /bin, this directory holds commands needed to boot the
system, but which are usually not executed by normal users.
Low-level
/usr/bin This is the primary directory for executable programs. Most
programs executed by normal users which are not needed for booting
or for repairing the system and which are not installed locally
should be placed in this directory.
Administrative
/usr/sbin This directory contains program binaries for system
administration which are not essential for the boot process, for
mounting /usr, or for system repair.
Understanding the UNIX / Linux file system
A conceptual understanding of the file system, especially its data structures and related terms, will help you
become a successful system administrator. I have seen many new Linux system administrators without
any clue about the file system. This conceptual knowledge can be applied to restore a file system in an
emergency situation.
What is a File?
File are collection of data items stored on disk. Or it’s device which can store the information, data,
music (mp3), picture, movie, sound, book etc. In fact what ever you store in computer it must be
inform of file. Files are always associated with devices like hard disk ,floppy disk etc. File is the last
object in your file system tree. See Linux/UNIX – rules for naming file and directory names.
What is a directory?
A directory is a group of files. Directories are divided into two types. Root directory: strictly speaking,
there is only one root directory in your system, which is denoted by / (forward slash). It is the root of
your entire file system and cannot be renamed or deleted.
Sub directory: a directory under the root (/) directory is a subdirectory, which can be created and
renamed by the user.
Directories are used to organize your data files, programs more efficiently.
3. Ext2: This is like the classic UNIX file system. It has the concepts of blocks, inodes and directories.
4. Ext3: It is the ext2 filesystem enhanced with journalling capabilities. Journalling allows fast file
system recovery. It supports POSIX ACLs (Access Control Lists).
5. Isofs (iso9660): Used by the CDROM file system.
6. Sysfs: A ram-based filesystem initially based on ramfs. It is used to export kernel
objects so that end users can use them easily.
7. Procfs: The proc file system acts as an interface to internal data structures in the kernel. It can
be used to obtain information about the system and to change certain kernel parameters at runtime
using the sysctl command. For example, you can find out CPU information with the following command:
$ cat /proc/cpuinfo
A UNIX file system is a collection of files and directories stored. Each file system is stored in a
separate whole disk partition. The following are a few of the file system:
/ - Special file system that incorporates the files under several directories including /dev,
/sbin, /tmp etc
/usr - Stores application programs
/var - Stores log files, mails and other data
/tmp - Stores temporary files
=> /bin : This directory contains executable programs which are needed in single user mode and to
bring the system up or repair it.
=> /boot : Contains static files for the boot loader. This directory only holds the files which are
needed during the boot process.
=> /dev : Special or device files, which refer to physical devices such as hard disk, keyboard,
monitor, mouse and modem etc
=> /etc : Contains configuration files which are local to the machine. Some larger software packages,
like Apache, can have their own subdirectories below /etc i.e. /etc/httpd. Some important
subdirectories in /etc:
=> /home : Your sweet home to store data and other files. However, in
large installations the structure of the /home directory depends on local
administration decisions.
=> /lib : This directory should hold those shared libraries that are
necessary to boot the system and to run the commands in the root
filesystem.
=> /lib64 : 64 bit shared libraries that are necessary to boot the
system and to run the commands in the root filesystem.
=> /mnt : This directory contains mount points for temporarily mounted
filesystems
=> /proc : This is a mount point for the proc filesystem, which
provides information about running processes and the kernel.
=> /root : This directory is usually the home directory for the root user.
=> /sbin : Like /bin, this directory holds commands needed to boot the
system, but which are usually not executed by normal users, root /
admin user specific commands goes here.
=> /tmp : This directory contains temporary files which may be deleted
with no notice, such as by a regular job or at system boot up.
=> /var : This directory contains files which may change in size, such
as spool and log files.
/etc/skel : When a new user account is created, files from this directory are usually copied
into the user’s home directory.
* /etc/sysconfig : Important configuration files used by the SysV scripts stored in the /etc/init.d and /etc/rcX.d
directories
/etc/cron.* : cron daemon configuration files which is used to execute scheduled commands
Go to the /var/log directory:
$cd /var/log
View the common log file /var/log/messages using any one of the
following commands:
$ tail -f /var/log/messages
$ less /var/log/messages
$ more -f /var/log/messages
$ vi /var/log/messages
Ref -
http://www.securityfocus.com/infocus/1872
http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/index.html
http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
http://www.comptechdoc.org/os/linux/howlinuxworks/linux_hlfilesystems.html
FSTAB
fstab is 9th out of the 10 most critical and important configuration files. It
is stored in the /etc directory, where all the configuration files are stored.
fstab stands for "File System TABle" and this file contains information about the hard
disk partitions and removable devices in the system. It records where the
partitions and removable devices are mounted, which device drivers are
used for mounting them, which filesystem they are using and what mount options
(permissions) are assigned to them.
The file fstab contains descriptive information about the various file systems.
fstab is only read by programs, and not written; it is the duty of the system
administrator to properly create and maintain this file. Each filesystem is described on
a separate line; fields on each line are separated by tabs or spaces. Lines starting with
'#' are comments. The order of records in fstab is important because fsck, mount, and
umount sequentially iterate through fstab doing their thing.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
LABEL=/home /home ext3 defaults 1 2
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
LABEL=/tmp /tmp ext3 defaults 1 2
LABEL=/u01 /u01 ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/var /var ext3 defaults 1 2
/dev/hda6 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom udf,iso9660 noauto,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/sda1 /mnt/usb_hdd vfat noauto 0 0
\________/ \___________/ \_________/ \____________/ \_/ \_/
| | | | | |
1st 2nd 3rd 4th 5th 6th
There are total six columns in the fstab file separated by spaces or tabs. Each column holds
different information about the device. For adding any new device add a fresh row. Each row stands
for a partition or removeable device in the system.
1st Column :
~~~~~~~~~~
The first column contains the partition's label, e.g. "LABEL=/boot", or the device's path, e.g.
"/dev/cdrom". The device path tells the system which device node to use when mounting.
2nd Column :
~~~~~~~~~~
The second field (fs_file) describes the mount point for the filesystem.
For swap partitions, this field should be specified as `none'. If the name of
the mount point contains spaces these can be escaped as `\040'.
The second column shows the mount point specified for a device in the fstab file. The mount
point is the directory where that particular device (mentioned in the first column) will be
mounted and through which we can view and modify the contents of that partition. You can change
the default mount point listed in the column if you are not satisfied with the one your system has
given you.
3rd Column :
~~~~~~~~~~
The third column in the file specifies the file system type of the device or partition. Many different
file systems are supported by Linux; the most common ones are:
1) autofs
2) devpts
3) ext2
4) ext3
5) iso9660
6) nfs
7) ntfs
8) proc
9) swap
10) tmpfs
11) udf
12) ufs
13) vfat
14) xfs
If you are not sure of the file system type of the device then set the value to "auto" and the
system will itself determine the file system type and will mount the device with that file system.
4th Column :
~~~~~~~~~~
The fourth column lists the mount options (permissions) applied to the partition at mount time. There are
many options which can appear in the fourth column. They are as follows:
1) ro - Read Only
2) rw - Read Write
5) user - Any user can mount, but only unmount device mounted by him
7) users - Every user can mount and also unmount the device mounted by others
10) nodev - User cannot use device driver to mount the device
13) async - Asynchronous, whenever a file is saved it will be first saved in the RAM and
after 30 seconds all the queued files will be written on the hard disk
14) sync - Synchronous, whenever a file is saved it will be directly written to the hard
disk
15) suid - Allow set-user-identifier (and set-group-identifier) bits to take effect, so that
users can run such binaries with the privileges of the file's owner even though they would not
otherwise have those rights; this is used to let users perform certain privileged tasks
17) defaults - auto, rw, dev, async, suid, exec & nouser
5th Column :
~~~~~~~~~~
The 5th column is the dump (backup) option. This column contains either 0 or 1, where "0" stands for
"NO" and "1" stands for "YES". The system checks it at boot time: if it is "0", dump will
ignore that filesystem, but if it is "1" the backup option is enabled. Backup with dump makes sense
only for the ext2/ext3 file systems, so it is enabled for those and disabled for the rest.
6th Column :
~~~~~~~~~~
The 6th column is the "fsck" option. fsck stands for file system check. This column defines
the order in which the system should scan the partitions at start up. The / partition is assigned top
priority, i.e. 1, and the rest of the partitions are assigned second priority, i.e. 2. If the value is set to 0,
no scanning will be done at boot time. If the same number is given to different partitions,
those partitions are scanned together with equal priority. This ordering minimizes errors: if a link
on a partition with higher priority pointed to a source file on a partition checked with a priority
lower than the link, it would give an error.
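After editing /etc/fstab you can check a new entry without rebooting; for example, for the /dev/hdxx line added in the Format section (the names are just the ones used earlier in these notes):
$ mount -a
$ df -h
mount -a mounts everything listed in fstab that is not already mounted, so a typo in the new line shows up immediately.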
The dmesg command is used to write the kernel messages in Linux and other Unix-like operating
systems to standard output (which by default is the display screen).
A kernel is the core of an operating system. It is the first part of the operating system that is loaded
into memory when a computer boots up (i.e., starts up), and it controls virtually everything on a
system. The numerous messages generated by the kernel that appear on the display screen as a
computer boots up show the hardware devices that the kernel detects and indicate whether it is able
to configure them.
dmesg obtains its data by reading the kernel ring buffer. A buffer is a portion of a computer's
memory that is set aside as a temporary holding place for data that is being sent to or received from
an external device, such as a hard disk drive (HDD), printer or keyboard. A ring buffer is a buffer of
fixed size for which any new data added to it overwrites the oldest data in it.
dmesg can be very useful when troubleshooting or just trying to obtain information about the
hardware on a system. Its basic syntax is dmesg [options]
Invoking dmesg without any of its options (which are rarely used) causes it to write all the kernel
messages to standard output. This usually produces far too many lines to fit into the display screen
all at once, and thus only the final messages are visible. However, the output can be redirected to the
less command through the use of a pipe (designated by the vertical bar character), thereby allowing
the startup messages to be viewed one screenful at a time:
dmesg | less
less allows the output to be moved forward one screenful at a time by pressing the SPACE bar,
backward by pressing the b key and removed by pressing the q key. (The more command could have
been used here instead of the less command; however, less is newer than more and has additional
functions, including the ability to return to previous pages of the output.)
When a user encounters a problem with the system, it can be convenient to write the output of dmesg
to a file and then send that file by e-mail to a system administrator or other knowledgeable person for
assistance. For example, the output could be redirected to a file named boot_messages using the
output redirection operator (designated by a rightward facing angle bracket) as follows:
dmesg > boot_messages
Because of the length of the output of dmesg, it can be convenient to pipe its output to grep, a filter
which searches for any lines that contain the string (i.e., sequence of characters) following it. The -i
option can be used to tell grep to ignore the case (i.e., lower case or upper case) of the letters in the
string. For example, the following command lists all references to USB (universal serial bus) devices
in the kernel messages:
dmesg | grep -i usb
And the following tells dmesg to show all serial ports (which are represented by the string tty):
dmesg | grep -i tty
The dmesg and grep combination can also be used to show how much physical memory (i.e., RAM)
is available on the system:
dmesg | grep -i memory
The following command checks to confirm that the HDD(s) is running in DMA (direct memory
access) mode:
dmesg | grep -i dma
The output of dmesg is maintained in the log file /var/log/dmesg, and it can thus also be easily
viewed by reading that file with a text editor, such as vi or gedit, or with a command such as cat, e.g.,
cat /var/log/dmesg | less
Ref - http://linuxgazette.net/issue59/nazario.html
lspci is a command on Unix-like operating systems that prints detailed information about all PCI
buses and devices in the system. It is based on a common portable library libpci which offers access
to the PCI configuration space on a variety of operating systems.
Example output on a Linux system:
# lspci
00:00.0 Host bridge: Intel Corporation 82815 815 Chipset Host Bridge and Memory Controller Hub
(rev 11)
00:02.0 VGA compatible controller: Intel Corporation 82815 CGC [Chipset Graphics Controller]
(rev 11)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 03)
00:1f.0 ISA bridge: Intel Corporation 82801BAM ISA Bridge (LPC) (rev 03)
00:1f.1 IDE interface: Intel Corporation 82801BAM IDE U100 (rev 03)
00:1f.2 USB Controller: Intel Corporation 82801BA/BAM USB (Hub #1) (rev 03)
00:1f.3 SMBus: Intel Corporation 82801BA/BAM SMBus (rev 03)
00:1f.4 USB Controller: Intel Corporation 82801BA/BAM USB (Hub #2) (rev 03)
00:1f.5 Multimedia audio controller: Intel Corporation 82801BA/BAM AC'97 Audio (rev 03)
01:03.0 CardBus bridge: O2 Micro, Inc. OZ6933/711E1 CardBus/SmartCardBus Controller (rev 01)
01:03.1 CardBus bridge: O2 Micro, Inc. OZ6933/711E1 CardBus/SmartCardBus Controller (rev 01)
01:0b.0 PCI bridge: Actiontec Electronics Inc Mini-PCI bridge (rev 11)
02:04.0 Ethernet controller: Intel Corporation 82557/8/9 [Ethernet Pro 100] (rev 08)
02:08.0 Communication controller: Agere Systems WinModem 56k (rev 01)
If many devices are shown as unknown (e.g. "Unknown device 2830 (rev 02)"), issuing the command
'update-pciids' will usually do the trick.
Detail Information
$lspci -vv
To update the pci.ids information in /usr/share/hwdata/pci.ids:
$update-pciids
Bash
Descended from the Bourne Shell, Bash is a GNU product, the "Bourne
Again SHell." It's the standard command line interface on most Linux
machines. It excels at interactivity, supporting command line editing,
completion, and recall. It also supports configurable prompts - most
people realize this, but don't know how much can be done.
Bash reads and interprets text scripts line by line; it does not compile them into binaries.
This chapter is based on Chapters 6 through 8 of the Siever book, Linux in a Nutshell [Siever 2003] .
Figure 1 illustrates some of the shells found on UNIX/Linux systems.
Shell Description
bash Bourne-again shell (GNU)
csh C shell (BSD)
jsh Job control shell (SVR4)
ksh Korn shell (Bell Labs)
rc Plan 9 shell (Bell Labs)
rsh Remote shell (TCP/IP)
sh Bourne shell (UNIX 7th Edition)
tcsh Popular extension of the C shell
zsh Popular extension of the Korn shell
Common Features
Figure 2 illustrates some features that are common to both bash and tcsh.
Symbol Description
> Redirect output
>> Append output to a file
< Redirect input
<< Redirect input ("Here" document)
| Pipe output
& Run process in background
; Separate commands on one line
* Match character(s) in filename
? Match single character in filename
!n Repeat command number n
[...] Match any characters enclosed
(...) Execute commands in a subshell
"..." Quote allowing variable and command expansion
'...' Literal string
`...` Command substitution
\ Quote following character
$var Variable expansion
$$ Process ID
$0 Command name
$n nth argument (0...9)
$* All arguments
$? Exit status
# Begin comment
Reference
http://en.wikipedia.org/wiki/Bash
Practical
BASH
Login: root
Password: *****
When you log in you get a virtual console (vcs) via the tty driver (/dev/tty*) and a shell
(/bin/bash).
In Linux the default shell is bash (/bin/bash).
To check the shells supported by your OS
$cat /etc/shells
To switch to another shell
$sh
$ksh etc....
Check your current bash:
$ps
To start another (child) bash, which inherits from the current one, run:
$bash
Now check again with:
$ps
You will have two bash processes (parent and child). If you kill the child bash it will not affect the
parent, but try it the other way around and see what happens.
List the bash shell PIDs:
$ps -el | grep bash
Now try starting a bash and killing it with:
$kill -9 <pid of bash>
Bash
Here's a neat Bash-prompt trick. At a basic Bash prompt, press the up-arrow key and you'll see the
last command you typed in. Press again and again to rotate through all the commands you typed
previously, stored for you in Bash history.
You will only see the commands you typed in for your login, whether that's for a specific user or for
root.
Here are some additional Bash tips, all of which are commands that you type at the Bash prompt:
To display a full numbered list of all stored commands, type:
history
To run the command numbered 8 in the history list, type:
!8
To run the most recent command that started with the letter v, type:
!v
Bash history isn't lost when you reboot or shut down either. Clever, isn't it?
Bash Shortcuts
To go along with Bash basics above, here are some basic shorthand commands:
To change to the home directory of a specific user when you have more than one, type cd followed
by ~ and the name of the user:
cd ~bruno
cd ~anna
For really fast typing don't forget to use the Tab key for auto-completion.
Typing the following does much the same as the previous example, a lot faster:
cd ~/D {press Tab key} /B {press Tab key}
Bash Script
You probably know that the "rm" command removes (or deletes) a file permanently. Wouldn't it be
nice if we could move it to the recycle bin with a simple command instead? You can. To do that, you
can make your own command called Del with a brief script.
To build the script, open a terminal and type the following lines:
su
{type your root password} (Note: you should see the # prompt)
kedit /usr/bin/del
This opens a new window in the keditor into which you should type the following script:
#!/bin/bash
# Move the given file into a Trash folder on the Desktop instead of deleting it
mv "$1" ~/Desktop/Trash
#End script
The next step is to save the file using kedit's File, Save As menu command. Then, back at the Bash
prompt logged in as root, type this line to make the new script executable:
$chmod 0775 /usr/bin/del
Now whenever you type the del command, it will run your script. For example, if you came across
the "tessst" file and you wanted to move it to the trash, you could just type this at the Bash prompt:
$del tessst
That will perform the same action as:
$mv tessst ~/Desktop/Trash
Did you know you can use the Tab key to auto-complete commands on the command line? Just type
a few characters that start a command and press the Tab key. The command or name of an existing
directory or file will be completed.
Try this. Type the following and then press the Tab key:
$ cd /u
Now add an "s" and press Tab, type "h" and press Tab. The result should be:
$ cd /usr/share/
Now type "f" "o" "n" and press Tab, "t" press Tab, "d" Tab, and press the Enter key. That should put
you in:
/usr/share/fonts/ttf/decoratives
An ls there will show all the fancy ttf fonts on your system.
So next time you have to type a long command like this:
# cp synthesis.hdlist.update_source.cz /var/lib/urpmi/synthesis.hdlist.update_source.cz
... try it this way instead:
# cp sy (Tab key), /v (Tab key), li (Tab key), u (Tab key), sy (Tab key)
And because the full command is on your screen, the light will go on if it hasn't already! (Note: this
works only if the file "synthesis.hdlist.update_source.cz" is in your current directory.)
How about a little more on the Tab key and commands. If you don't remember exactly how a
command was written, type in the first character or two and hit the Tab key. You'll get a list of all the
commands that start with the same character(s).
If you wish to know what a certain command does -- say, mkmanifest -- use the whatis command,
like this:
$ whatis mkmanifest
mkmanifest (1) - Makes list of file names and their DOS 8+3 equivalents.
Introduction to BASH
Bash is the shell, or command language interpreter, that will appear in the GNU operating system. It
is intended to conform to the IEEE POSIX P1003.2/ISO 9945.2 Shell and Tools standard. It offers
functional improvements over sh for both programming and interactive use. In addition, most sh
scripts can be run by Bash without modification.
Among the improvements Bash offers: its syntax is an improved version of the Bourne shell
syntax, and in most cases Bourne shell scripts can be executed by Bash without any problems.
In Linux, a lot of work is done using a command line shell. Linux comes preinstalled with Bash.
Many other shells are available under Linux:
Shell Scripting
* If you do not specify an interpreter line, the default is usually /bin/sh. It is recommended
that you set the #!/bin/bash line explicitly.
/bin/sh
#!/bin/sh
sh is the standard command interpreter for the system. The current version of sh is in the process of
being changed to conform with the POSIX 1003.2 and 1003.2a specifications for the shell.
Did you know?
* It is the shell that lets you run commands without typing their full pathname, even when they
are not in the current directory (it searches your PATH for you).
* It is the shell that expands wildcard characters, such as * or ?, thus saving you laborious typing.
* It is the shell that gives you the ability to run previously run commands without having to type
the full command again by pressing the up arrow, or pulling up a complete list with the history
command.
* It is the shell that does input, output and error redirection.
* Shell scripts can take input from a user or file and output them to the screen.
* Whenever you find yourself doing the same task over and over again you should use shell
scripting, i.e., repetitive task automation.
o Creating your own power tools/utilities.
o Automating command input or entry.
o Customizing administrative tasks.
o Creating simple applications.
o Since scripts are well tested, the chances of errors are reduced while configuring services or
system administration tasks such as adding new users.
* JOB_SPEC &
* (( expression ))
* . filename
* [[ ... ]]
* [ arg... ]
* expression
* alias
* bg
* bind
* builtin
* caller
* case
* command
* compgen
* complete
* continue
* declare
* dirs
* disown
* echo
* enable
* eval
* exec
* exit
* export
* false
* fc
* fg
Syntax:
command1 && command2
OR
First_command && Second_command
command2 is executed if, and only if, command1 returns an exit status of zero (true). In other words,
run command1 and, if it is successful, then run command2.
Example
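A minimal illustration (the file name here is just a placeholder):
$ rm /tmp/demo.txt && echo "File deleted successfully"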
The echo command will only run if the rm command exits successfully with a status of zero. If the file
is deleted successfully, rm sets the exit status to zero and the echo command gets executed.
Lookup a username in /etc/passwd file
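A minimal sketch of such a lookup using an AND/OR list (the username "tony" is just an example):
$ grep -q '^tony:' /etc/passwd && echo "User tony found" || echo "User tony not found"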
test ! -d /tmp/foo && { read -p "Directory /tmp/foo not found. Hit [Enter] to exit..." enter; exit 1; }
Syntax:
command1 || command2
OR
First_command || Second_command
command2 is executed if, and only if, command1 returns a non-zero exit status. In other words,
command2 runs only when command1 fails.
Example
$cat /etc/shadow 2>/dev/null || echo "Failed to open file"
The cat command will try to display the /etc/shadow file and sets its exit status to a non-zero value
if it fails to open the file. Therefore, 'Failed to open file' will be displayed if the cat command
failed to open the file.
Find username else display an error
Try it as follows:
$cat /etc/shadow 2>/dev/null && echo "File successfully opened." || echo "Failed to open file."
$test $(id -u) -eq 0 && echo "You are root" || echo "You are NOT root"
OR
$test $(id -u) -eq 0 && echo "Root user can run this script." || echo "Use sudo or su to become a root
user."
Shell functions
* Function performs a specific task. For example, add or delete a user account.
* Function used like normal command.
* In other high level programming languages function is also known as procedure, method,
subroutine, or routine.
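A minimal hello() function might look like this (the greeting text is just an example):
hello() {
echo "Hello $1"
}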
The hello() function can then be used like a normal command. To execute it, simply type:
hello
You can pass command line arguments to user-defined functions. Call hello with an argument as
follows:
hello champu
Sample output (with the definition above):
Hello champu
* One-line functions inside { ... } must end with a semicolon, otherwise you get an error on
screen. As illustrated below, the first form will not work; the second will (notice the semicolon at
the end).
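For example (using the hello function from above):
hello() { echo "Hello $1" }     # wrong: no semicolon before the closing brace
hello() { echo "Hello $1"; }    # correct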
To display defined function names use the declare command. Type the following command at a shell
prompt:
$declare -f
Sample outputs:
declare -f command_not_found_handle
declare -f genpasswd
declare -f grabmp3
declare -f hello
declare -f mp3
declare -f xrpm
Display Function Source Code
declare -f
OR
declare -f | less
The test command is used to check file types and compare values. Test is used in conditional
execution. It is used for:
test condition
OR
[ condition ]
OR
[[ condition ]]
For example:
$ test 5 > 2 && echo "Yes"
$ [ 5 > 2 ] && echo "Yes"
Sample Output:
Yes
Yes
Rather than testing whether the number is greater than 2, you have used redirection to create an empty
file called 2 (see shell redirection). To test for greater than, use the -gt operator (see numeric operator
syntax):
$ test 5 -gt 2 && echo "Yes"
Yes
You need to use the test command while making decisions. Try the following examples and note down
their output:
Write Scripts
1.
#!/bin/bash
read -p "Enter # 5 : " number
if test "$number" = 5
then
echo "Thanks for entering # 5"
fi
if test "$number" != 5
then
echo "I told you to enter # 5. Please try again."
fi
2.
#!/bin/bash
clear
echo -e "What is your name : \c"
read name
echo hello $name. Welcome to Shell programming
sleep 2
clear
echo -e "Would you like to see a listing of your files ? [y/n]: \c"
read yn
if [ "$yn" = y ]
then
ls
fi
sleep 1
echo -e "Would you like to see who all are logged in ? [y/n]: \c"
read yn
if [ "$yn" = y ]
then
who
fi
sleep 1
echo Would you like to see which dir you are in \?
read yn
if [ "$yn" = y ]
then
pwd
fi
3.
#!/bin/sh
clear
echo Enter file name to copy
read apple
echo Enter file name to copy to
read mango
if cp $apple $mango > /dev/null 2>&1
then
echo Files copied ok Congrats!!
else
echo Error !!!!!!!!!!!!!!! Contact Mr ABC at Ext 101
fi
4.
#!/bin/bash
# -lt, -le, -gt, -ge, -ne, -eq : use these for numerical comparisons
# <, >, =, != : use these for string comparisons
clear
tput cup 10 10
echo -e "Enter a no from 1 to 5 : \c"
read num
if test $num -lt 6
then
tput cup 12 10
echo "Good"
else
tput cup 12 10
echo "Sorry only between 1 to 6"
fi
5.
#!/bin/bash
## see man test
clear
echo Enter file name
read filename
if [ -z "$filename" ]
then
echo You have to enter some file name
echo Exiting....
sleep 2
exit
fi
if [ -f "$filename" ]
then
echo The filename you entered exists !!
echo Deleting $filename .....
sleep 2
rm -f $filename
echo Deleted $filename .....
sleep 1
clear
else
echo The filename you entered does not exist !!!
fi
6.
#!/bin/bash
read -p "Enter a number : " n
if [ $n -gt 0 ]; then
echo "$n is positive."
elif [ $n -lt 0 ]
then
echo "$n is negative."
elif [ $n -eq 0 ]
then
echo "$n is zero."
else
echo "Oops! $n is not a number."
fi
7.
#!/bin/bash
clear
echo -e "Enter a number from 1 to 3 : \c"
read num
case $num in
1) echo You have entered 1
;;
2) echo You have entered 2
;;
3) echo You have entered 3
;;
*) echo Between 1 to 3 only !!
;;
esac
8.
#!/bin/sh
echo Enter dog/cat/parrot
read animal
case $animal in
cat|kat) echo You have entered cat
;;
dog) echo You have entered dog
;;
parrot|crow) echo You have entered parrot or crow
;;
*) echo Invalid entry !!
;;
esac
RPM
RPM is a powerful package manager for Red Hat, SUSE and Fedora
Linux. It can be used to build, install, query, verify, update, and
remove/erase individual software packages. A package consists of an
archive of files plus package information, including name, version,
and description.
RPM FACTS
--------------------------------
1. RPM is free - GPL
The RPM Package Manager or RPM is a tool which was developed by Red
Hat Software, who still maintain it, but released under the GNU
General Public Licence (GPL) and has proven to be so popular, that
a lot of other distribution manufacturers use it as well.
RPM is a very versatile program which solves a lot of problems that
a distributor of software typically faces:
• Management of source files
• Management of the build process
• A distribution method and format for binary files, including pre- and post-install scripts.
RPMs can be created by anyone, not only the manufacturer of your distribution.
Where to get RPMs
http://rpmseek.com
http://rpmfind.net
http://www.redhat.com
http://freshrpms.net
http://rpm.pbone.net
http://dag.wieers.com
http://rpmforge.net
http://filewatcher.com
Task
Download xmms-1.2.10-1.i386.rpm & try to install it:
$ rpm -ivh xmms-1.2.10-1.i386.rpm
It will complain about missing dependencies; download them from the sites listed above &
install them as well.
Download
glib-1.2.10-627.i586.rpm
gtk-1.2.10-926.i586.rpm
gtk-32bit-1.2.10-926.x86_64.rpm
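For reference, a few common rpm invocations (the package names here are just examples):
$ rpm -ivh package-1.0-1.i386.rpm    # install, verbose, with progress hashes
$ rpm -Uvh package-1.1-1.i386.rpm    # upgrade (or install if not present)
$ rpm -e package                     # erase/remove an installed package
$ rpm -qa                            # list all installed packages
$ rpm -qi package                    # show information about an installed package
$ rpm -ql package                    # list the files owned by an installed package
$ rpm -qf /path/to/file              # find which package owns a file
$ rpm -V package                     # verify an installed package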
UserAdministration
Only root (i.e. the system administrator) can use the useradd/adduser command
to create new users. Other users are not allowed to do so.
[root@localhost root]# userdel john ---> the `userdel' command deletes the
user's account entry, but the user's
data under /home remains in place
(use userdel -r to remove the home
directory as well). The binary is
/usr/sbin/userdel
Config. files
Refers
|----/etc/login.defs
|
|----/etc/default/useradd
Updates
|----/etc/passwd
|
|----/etc/group
|
|----/etc/shadow
|
|----/etc/gshadow
/etc/login.defs
<1> /etc/login.defs : It keeps information such as the directory where
mailboxes reside (or the name of the mail file relative to the home directory),
password-aging defaults, and the UID/GID ranges used for new accounts.
The passwd and group files get the default userid &
groupid ranges from this file.
The shadow and gshadow files get the login and
password-aging defaults for users from this file.
UID_MIN 500
UID_MAX 60000
User IDs start from 500 and go up to a maximum of 60000, which is the default
according to Red Hat, but we can customise it.
GID_MIN 500
GID_MAX 60000
/etc/default/useradd
1. GROUP=100 ----> The default group ID for new users according to Red Hat,
which can be customised.
4. EXPIRE= ----> The date on which a newly created user account will
expire (empty by default).
/etc/passwd
<3> /etc/passwd : It keeps a record of each user created by the
superuser. Each line is the entry for one user. It is a
text file & has details of all system users.
champu:x:500:500::/home/champu:/bin/bash
1. username  2. password field (x means the password is stored in /etc/shadow)
3. UID  4. GID  5. comment (GECOS)  6. home directory  7. login shell
In the password field:
(*) --- user can't login.
( ) --- an empty field means the user can login without a password.
/etc/group
<4> /etc/group : This file keeps the information about groups. Each line has
four fields for one group, so it is called the `system
group database'.
Accounts:x:500:
1. group name  2. group password field  3. GID  4. comma-separated list of members
2. field (x) : It is a placeholder for the group password, which, if one
exists, is kept elsewhere (in /etc/gshadow).
Root can add a user to additional groups with the `usermod
-G' command.
Common useradd options:
Option          Description
-c comment      Comment for the user
-d home-dir     Home directory to be used instead of the default /home/username/
-e date         Date on which the account will be disabled, in the format YYYY-MM-DD
-f days         Number of days after the password expires until the account is disabled
                (0 disables the account immediately after the password expires;
                -1 means the account is not disabled after the password expires)
-g group-name   Group name or group number for the user's default group
                (the group must exist prior to being specified here)
-G group-list   List of additional (other than default) group names or group numbers,
                separated by commas, of which the user is a member
                (the groups must exist prior to being specified here)
-m              Create the home directory if it does not exist
-M              Do not create the home directory
-n              Do not create a user private group for the user
-r              Create a system account with a UID less than 500 and without a home directory
-p password     The password encrypted with crypt
-s shell        User's login shell, which defaults to /bin/bash
-u uid          User ID for the user, which must be unique and greater than 499
groupadd <group-name>
Option   Description
-g gid   Group ID for the group, which must be unique and greater than 499
-r       Create a system group with a GID less than 500
-f       Exit with an error if the group already exists (the group is not altered);
         if -g and -f are specified but the group already exists, the -g option is ignored
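As an illustration (the group, user, GID and UID here are invented for the example):
$ groupadd -g 600 developers
$ useradd -u 600 -g developers -G wheel -c "Demo User" -s /bin/bash -m demo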
Password aging
$chage -l root
$chage -d 0 username
Change shell
$chsh <username>
Finger Information
$chfn <username>
$finger
PAM
PAM is the Pluggable Authentication Module, invented by Sun. It's a beautiful concept, but it can be
confusing and even intimidating at first. We're going to look at it on a RedHat system, but other
Linuxes will be similar - some details may vary, but the basic ideas will be the same.
The first thing to understand is that PAM is NOT something like tcpd (tcp wrappers) or xinetd that
encloses and restricts access to some service. An application needs to be "PAM aware"; it needs to
have been written and compiled specifically to use PAM. There are tremendous advantages in doing
so, and most applications with any interest in security will be PAM aware.
PAM is about security - checking to see that a service should be used or not. Most of us first learned
about PAM when we were told that login was using it, but PAM can do much more than just validate
passwords. A lot of applications now use PAM - even things like SAMBA can call on PAM for
authentication.
The big advantage here is that security is no longer the application's concern: if PAM says it's OK, it's
OK. That makes things easier for the application, and it makes things easier for the system
administrator. PAM consults text configuration files to see what security actions to take for an
application, and the administrator can add and subtract new rules at any time. PAM is also
extensible: should someone invent a device that can read your brain waves and determine ill intent,
all we need is a PAM module that can use that device. Change a few files, and login now reads your
mind and grants or denies access appropriately. We're a bit away from that feature, but there are a
tremendous number of available PAM modules that administrators can use.
Configuration Files
On modern RedHat systems, the configuration files are found in /etc/pam.d, one file for each PAM
aware application (plus a special "other" file we'll get to later). One word of warning: changes to
these files take effect instantly. You aren't going to get logged out if you make a mistake here, but if
you DO screw up and blithely log out, you may not be able to log back in. So test changes before
you exit.
We're going to use a very simple example to get started here. In a number of articles here, we've
talked about SSH Security. Most of those articles have been about changes to ssh's configuration
files, but here we'll use PAM to add some additional restriction: the time of day you are allowed to
use ssh. To do this, we need a PAM module called pam_time.so - it's probably in your /lib/security/
directory already. It uses a configuration file "/etc/security/time.conf". That file is pretty well
commented, so I'm not going to go into detail about it and will just say that I added the line
sshd;*;*;!Al2200-0400
which says that sshd cannot be used between 10:00 PM and 4:00 AM. I'm usually rather soundly
asleep between those times, so why let ssh be used? I could still login at the console if I woke up
with an urgent need to see an ls of my /tmp directory, but I couldn't ssh in, period. Configuring the
time.conf file by itself doesn't affect ssh; we need to add the pam module to /etc/pam.d/sshd. My file
ends up looking like this:
#%PAM-1.0
account required pam_time.so
auth required pam_stack.so service=system-auth
auth required pam_nologin.so
account required pam_stack.so service=system-auth
password required pam_stack.so service=system-auth
session required pam_stack.so service=system-auth
session required pam_limits.so
session optional pam_console.so
I put the time.so module first so that it is the very first thing that is checked. If that module doesn't
give sshd a green light, that's the end of it: no access. That's the meaning of "required": the module
HAS to say that it is happy. The "account" type is specified here. That's a bit of a confusing thing:
we have "account", "auth", "password" and "session". The man page isn't all that helpful:
account - provide account verification types of service: has the user's
password expired? Is this user permitted access to the requested service?
password - this group's responsibility is the task of updating authentication mechanisms.
Typically, such services are strongly coupled to those of the auth group. Some authentication
mechanisms lend themselves well to being updated with such a function. Standard UN*X
password-based access is the obvious example: please enter a replacement password.
session - this group of tasks covers things that should be done prior to a service being given and
after it is withdrawn. Such tasks include the maintenance of audit trails and the mounting of the
user's home directory. The session management group is important as it provides both an opening
and closing hook for modules to affect the services available to a user.
I think that the distinction between account and session in that man page is a little confusing. I think
it would be quite reasonable to think you should use "session" for this module. Now, sometimes you
have a man page for the module that shows you what to use, but pam_time doesn't help us there.
Technically, it's not up to the library: the application is the one that is checking with account or
session, but keep this in mind: session happens AFTER authentication. I liked the older PAM manual
better, which said:
auth modules provide the actual authentication, perhaps asking
for and checking a password, and they set "credentials" such
as group membership or kerberos "tickets."
session modules are used once a user has been authenticated to allow them to use their
account, perhaps mounting the user's home directory or making their mailbox available.
Stacking
In this case, I only wanted to apply this restriction to ssh. If I'm physically at the box, I want no time
restrictions. If I DID want these same restrictions, I'd make the same change to /etc/pam.d/login. But
what if there are a whole bunch of things I want to apply the same rules to? RedHat has a special
module "pam_stack". It functions much like an "include" statement in any programming language.
We saw it in my /etc/pam.d/sshd file:
auth required pam_stack.so service=system-auth
That says to look in /etc/pam.d/system-auth for other modules to use. Both login and sshd have this
line (as does just about every other file in /etc/pam.d/), so we can look in system-auth to see what
gets called by them:
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
Therefore, if we really wanted our time restrictions to apply to just about everything, we could add it
to system-auth. Note the warning about authconfig though, and also consider that you will be making
sudden sweeping changes to a LOT of applications and services.
Other
What if a PAM aware app doesn't have a file in /etc/pam.d? In that case, it uses the "other" file,
which looks like this by default:
#%PAM-1.0
auth required /lib/security/$ISA/pam_deny.so
account required /lib/security/$ISA/pam_deny.so
password required /lib/security/$ISA/pam_deny.so
session required /lib/security/$ISA/pam_deny.so
That "deny" module is a flat-out no access, red light, stop you dead right here module that is always
going to say no. That's excellent from a security point of view, but can be a bit harsh should you
accidentally delete something like "login". Login would now use the "other" file, and you couldn't
login. That could be unpleasant.
There are many, many useful and clever PAM modules. While our brain wave interpreter doesn't
exist yet, many other possibilities are available to you. There are modules to automatically black list
hosts that have many failed logins, and much more. See
http://www.kernel.org/pub/linux/libs/pam/modules.html.
Use of pam_listfile.so module
This PAM module authenticates users based on the contents of a specified file. For example, if a
username exists in the file /etc/sshd/sshd.allow, sshd will grant login access.
You want to ALLOW a user to use ssh if the username exists in the /etc/sshd/sshd.allow file.
Open /etc/pam.d/ssh (or /etc/pam.d/sshd for RedHat and friends)
# vi /etc/pam.d/ssh
Append following line:
auth required pam_listfile.so item=user sense=allow file=/etc/sshd/sshd.allow onerr=fail
Save and close the file.
Now add all usernames to /etc/sshd/sshd.allow file. Now a user is allowed to login via sshd if they
are listed in this file.
# vi /etc/sshd/sshd.allow
Append username per line:
tony
om
rocky
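The same module can also be used the other way around. A sketch of a deny list (the file path here is
just an example) would be:
auth required pam_listfile.so item=user sense=deny file=/etc/sshd/sshd.deny onerr=succeed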
http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/
http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/Linux-PAM_MWG.html
LVM
Create Partitions
For this Linux lvm example you need an unpartitioned hard disk
/dev/sdb. First you need to create physical volumes. To do this you
need partitions or a whole disk. It is possible to run pvcreate
command on /dev/sdb, but I prefer to use partitions and from
partitions I later create physical volumes.
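A minimal sketch of that preparation, assuming a single partition /dev/sdb1 of type 8e (Linux LVM)
and the volume group name mynew_vg that is used later in this example:
$ fdisk /dev/sdb                  # create /dev/sdb1 and set its partition type to 8e
$ pvcreate /dev/sdb1              # initialise the partition as a physical volume
$ vgcreate mynew_vg /dev/sdb1     # create a volume group on top of it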
Partitions are ready to use.
$ pvdisplay
Alternatively the following command should be used:
$ pvdisplay /dev/sdb1
Feel free to add new physical volumes to a volume group by using
the vgextend command.
From your big cake (volume group) you can cut pieces (logical
volumes) which will be treated as partitions by your Linux
system. To create a logical volume, named "vol01", with a size of
400 MB from the volume group "mynew_vg", use the following command:
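A typical invocation (assuming the standard lvcreate syntax) is:
$ lvcreate -L 400M -n vol01 mynew_vg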
$ lvremove /dev/mynew_vg/vol02
More Workaround
1. Add another HDD, connected to the motherboard as Primary
Slave.
That disk should then be visible to Linux as /dev/hdb.
2. $ fdisk /dev/hdb
3. Create a single primary partition the size of the entire disk and
change the ID of that partition to 8e.
4. Create a PV on the new drive:
$ pvcreate /dev/hdb1
5. Add this new PV to the existing VG (myvol):
$ vgextend myvol /dev/hdb1
6. Extend your existing Logical Volume:
$ lvextend -L +2000M /dev/myvol/lv1
7. Grow the filesystem to fill the LV:
$ resize2fs /dev/myvol/lv1
8. $ mount -a
9. $ df -h
(a) To reduce an LV (here shrinking lv1, mounted on /home, to 1000 MB), unmount it, check and
shrink the filesystem first, then shrink the LV:
$ umount /home
$ e2fsck -yc /dev/myvol/lv1
$ resize2fs /dev/myvol/lv1 1000M
$ lvreduce -L 1000M /dev/myvol/lv1
$ mount -a
$ df -h
Syntax of crontab
Your cron job line looks as follows:
1 2 3 4 5 /path/to/command arg1 arg2
Where,
1: Minute (0-59)
2: Hours (0-23)
3: Day of the month (1-31)
4: Month (1-12 [12 == December])
5: Day of the week (0-7 [7 or 0 == Sunday])
/path/to/command - Script or command name to schedule
The same five-field structure can be easily remembered with the following diagram:
* * * * * command to be executed
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)
Example(s)
If you wished to have a script named /root/backup.sh run every day at 3am, the crontab entry would
look as follows:
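0 3 * * * /root/backup.sh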
Note:
A crontab entry is considered to match the current time when the min and hr fields match the
curr time and the mth field matches the current month
An entry is considered to match the current date when the day of month field [3rd] matches
the current day of the mth OR the day of week [5th] field matches the current day of the week:
IT IS NOT NECESSARY THAT BOTH THE DAY OF THE MTH AND DAY OF THE WEEK
MATCH!
If both the time and date match the current time and date the cmd is executed!
Never put a '*' in the first field unless you want the cmd to run every minute
You MAY hand-edit this file but it is never necessary since run-parts does everything. Simply
put a shell script in the appropriate /etc/cron.*/ dirs
Also the crond daemon need not be restarted. It re-reads the tables every minute anyway
Example: Users often forget to shut down their machines and go home. Hence, the machine should
auto-shutdown at 11 pm.
Append the following to /etc/crontab (note the extra "root" field naming the user the job runs as;
that field is used only in /etc/crontab, not in per-user crontabs):
00 23 * * * root /sbin/shutdown -h now
To install a per-user cron job instead, run crontab -e and append an entry such as:
0 3 * * * /root/backup.sh
Run at 10 pm on weekdays:
0 22 * * 1-5 /path/to/command
If you run many sites, you can use this tip to make managing your cron jobs easier. To minimize the
clutter, create a /etc/cron.5min directory and have crontab read this directory every five minutes.
*/5 * * * * root run-parts /etc/cron.5min
45 * * * * /usr/bin/lynx -source http://example.com/cron.php
45 * * * * /usr/bin/wget -O - -q -t 1 http://www.example.com/cron.php
00 11,16 * * * /home/sadhiq/bin/incremental-backup
00 - 0th minute
11,16 - 11 AM and 4 PM
* - Every day of the month
* - Every month
* - Every day of the week
00 09-18 * * * /home/ramesh/bin/check-db-status
00 - 0th minute
09-18 - 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* - Every day of the month
* - Every month
* - Every day of the week
Run ntpdate every hour:
@hourly /path/to/ntpdate
Directory Description
/etc/cron.d/ Put all scripts here and call them from /etc/crontab file.
/etc/cron.daily/ Run all scripts once a day
/etc/cron.hourly/ Run all scripts once an hour
/etc/cron.monthly/ Run all scripts once a month
/etc/cron.weekly/ Run all scripts once a week
#!/bin/bash
# Remove files older than $DAYS days from the lighttpd cache directory
CROOT="/tmp/cachelighttpd/"
DAYS=10
LUSER="lighttpd"
LGROUP="lighttpd"
# start cleaning
/usr/bin/find ${CROOT} -type f -mtime +${DAYS} | xargs -r /bin/rm
AT
'at' executes a command once on a particular day, at a particular time; it adds the command to a
queue to be executed.
Examples:
$ at 21:30
You then type the commands you want executed then press the end-of-file character (normally
CTRL-D ). Also try:
$ at now + time
This will run the job at the current time + the hours/mins/seconds you specify (use at now + 1 hour to
have command(s) run 1 hour from now...)
You can also use the -f option to have at execute a particular file (a shell script).
$ at -f shell_script now + 1 hour
This would run the shell script 1 hour from now.
atq will list jobs currently in the queue for the user who executed it; if root executes atq it will list all
jobs in the queue for the at daemon. It doesn't need or take any options.
atrm Will remove a job from the 'at' queue.
Command syntax:
$ atrm job_no
Will delete the job "job_no" (use atq to find out the number of the job)
$ at -f myjobs.txt now + 1 hour
$ at -f myjob now + 1 min
$ at 10 am tomorrow
$ at 11:00 next month
$ at 22:00 today
$ at now + 1 week
$ at noon
Anacron
anacron is another tool designed for systems which are not always on, such as home computers.
While cron will not run if the computer is off, anacron will simply run the command when the
computer is next on (it catches up with things).
Quota
Important Note:
1. Quotas can only be created for partitions.
2. Quota is of two types, user and group.
3. If a 1 MB quota is set on the /home partition, then every user on the system (each directory
under /home represents a user) can use a maximum of 1 MB.
Enabling Quotas
1. Go to /etc/fstab and, in the mount options field, add "usrquota" (separated from the other options
by a ",") for the partition where you want to enable quota, in our case /home.
Note: If you want to enable group quota then add "grpquota" instead of (or in addition to) "usrquota"
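A sketch of what the resulting line might look like (the device and filesystem type here are assumptions):
/dev/hda7   /home   ext3   defaults,usrquota   1 2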
Reboot and directly jump to step 5! else...
2. Unmount and mount /home for the changes to take effect
$ umount /home
$ mount /home
or
$ mount -o remount /home
Note: If the system is rebooted after step 2, skip steps 3 & 4 and jump to step 5.
3. To scan /home and enable quota
$ quotacheck -vcu /home
4. To turn on quota on /home
$ quotaon -v /home
5. To check if quota is on or not
$ repquota -a
Implementing Quotas
6. To edit quota for a user
$ edquota -u <username>
Note: -u stands for user; for a group, use -g and give the group name
7. To edit grace period
$ edquota -t
8. To copy a quota setting of one user to another user
$ edquota -p <source_user> <user>
Set user level quota on user neo restricting the size below 70k
$ edquota -u neo
This opens up a temp file under /tmp using vi as the editor
Disk quotas for user neo (uid 529):
                <----- file size quota ----->   <--- no. of files quota --->
Filesystem      blocks    soft    hard          inodes    soft    hard
/dev/hda7       11        50      69            11        0       0
As the root user you may want to view quota statistics on a per-user basis.
# repquota -a
To apply the quota settings meant for user neo onto user champu:
# edquota -p neo champu
Commands
quota - display disk usage and limits
rquota - implement quotas on remote machines
fstab - static information about the filesystems
edquota - edit user quotas
setquota - set disk quotas (Command line editor)
quotacheck - scan a filesystem for disk usage, create, check and repair quota files
quotaon - turn filesystem quotas on
quotaoff - turn filesystem quotas off
Kernel Compilation
If you want to update the kernel from new source code you have downloaded, or you have applied a
patch to add new functionality or hardware support, you will need to compile and install a new kernel
to actually use that new functionality. Compiling the kernel involves translating the kernel's contents
from human-readable code to binary form. Installing the kernel involves putting all the compiled
files where they belong in /boot and /lib and making changes to the bootloader.
The process of compiling the kernel is almost completely automated by the make utility as is the
process of installing. By providing the necessary arguments and following the steps covered next,
you can recompile and install a custom kernel for your use.
Structure of monolithic and microkernel-based operating systems, respectively
Compilation
Steps to compile kernel - Redhat 9
Install dep
kernel-source-2.4.20-8.i386.rpm
binutils-2.13.90.0.18-9.i386.rpm
glibc-kernheaders-2.4-8.10.i386.rpm
cpp-3.2.2-5.i386.rpm
gcc-3.2.2-5.i386.rpm
glibc-2.3.2-11.9.i386.rpm
libgcc-3.2.2-5.i386.rpm
glibc-common-2.3.2-11.9.i386.rpm
ncurses-5.3-4.i386.rpm
glibc-devel-2.3.2-11.9.i386.rpm
ncurses-devel-5.3-4.i386.rpm
1) Go to /usr/src/linux-2.4/
cp -p configs/kernel-2.4.18-i686.config .config
6) make config / make menuconfig (text) / make xconfig - make the necessary changes - enable
ntfs - disable sound & bluetooth
8) make clean - removes unwanted files left over from previous builds.
12) cp /usr/src/linux-2.4.20-8/arch/i386/boot/bzImage
/boot/vmlinuz-2.4.20-8champu
13) cp /usr/src/linux-2.4.20-8/System.map
/boot/System.map-2.4.20-8champu
17) reboot
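For reference, the classic 2.4-era sequence looks roughly like this (the "champu" suffix and paths
follow the example above; treat this as a sketch rather than a verified recipe):
# cd /usr/src/linux-2.4
# cp -p configs/kernel-2.4.18-i686.config .config
# make menuconfig                   # adjust options
# make dep                          # build dependencies (2.4 kernels only)
# make clean
# make bzImage                      # build the compressed kernel image
# make modules                      # build the modules
# make modules_install              # install modules under /lib/modules
# cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.20-8champu
# cp System.map /boot/System.map-2.4.20-8champu
# mkinitrd /boot/initrd-2.4.20-8champu.img 2.4.20-8champu
# then add an entry for the new kernel to /boot/grub/grub.conf (or lilo.conf) and reboot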
Kernel Definition
http://www.linfo.org/kernel.html
Kernel compilation
http://www.cyberciti.biz/tips/compiling-linux-kernel-26.html
http://book.opensourceproject.org.cn/distrib/ubuntu/unleashed/opensource/0672329093/ch35lev1sec
7.html
http://wiki.centos.org/HowTos/Custom_Kernel
This is an essential and important task. Many times we upgrade our kernel and some precompiled
drivers won't work with Linux, especially if you have unusual hardware; the vendor may then send you
driver code (i.e. C files) to compile. Or you can even write your own Linux kernel driver. Compiling a
kernel driver is easy, and kernel 2.6.x makes it even easier. The following steps are required to
compile a driver as a module:
1) You need running kernel source code; if you don't have a source code download it from
kernel.org. Untar kernel source code (tar ball) in /usr/src using tar command:
$ tar -zxvf kernel* -C /usr/src
To be frank kernel headers are more than sufficient to compile kernel modules / drivers. See how to
install kernel headers under Debian / Ubuntu Linux or RHEL / CentOS / Fedora Linux.
Next go to your kernel module source code directory and simply create the Makefile file as follows
(assuming your kernel module name is foo):
$ vi Makefile
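A minimal kbuild Makefile for an out-of-tree module named foo typically looks roughly like this
(a sketch, assuming the module source is foo.c and that kernel headers are installed):
obj-m += foo.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean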
4) Compile module using make command (module build can be done by any user) :
$ make
This finally creates the foo.ko module in the current directory. You can see the actual compile
commands stored in the .foo* files in the same directory.
5) Once the module has compiled successfully, load it using the insmod or modprobe command. You
need to be the root user or a privileged user to run insmod:
# insmod foo.ko
module_init(hello_start);
module_exit(hello_end);
10) Load the module when the Linux system comes up. The file /etc/modules (on Debian) is used to
load modules at boot time; it should contain the names of kernel modules that are to be loaded at boot
time, one per line. First copy your module to /lib/modules/$(uname -r)/kernel/drivers/. The following
steps are suggested:
(a) Create directory for hello module:
$ mkdir -p /lib/modules/$(uname -r)/kernel/drivers/hello
(b) Copy module:
$ cp hello.ko /lib/modules/$(uname -r)/kernel/drivers/hello/
(c) Edit /etc/modules file under Debian Linux:
$ vi /etc/modules
(d) Add following line to it:
hello
(e) Reboot to see the changes. Use the lsmod or dmesg command to verify whether the module loaded or not.
$ cat /proc/modules
OR
$ lsmod | less
Most people have a fairly recent kernel. But since the kernel is constantly being updated, people on
modems (such as myself) don't like downloading the whole source every time a new version of the
kernel comes out... It is a pain to download 14+ megs of stuff when 95% of it is the same stuff that
you already have in your kernel source directory.
For this reason, kernel patches are released. Kernel patches contain only the files that have changed
since the last kernel, hence making it less of a pain to upgrade.
It is a good idea to back up your old kernel tree before you do anything to it, just in case something
messes up. To do this, do the following:
Become root and then go into your kernel source directory (for me it was /usr/src/linux-2.2.10) and
do a 'make clean' to clean it up so you don't compress a lot of crap you don't need as follows
# cd /usr/src/linux-2.2.10
# make clean
Now you need to go to backup the tree, I did this by doing the following:
# cd /usr/src/
# tar zcvf linux-2.2.10-tree.tar.gz linux-2.2.10
Now with that backed up, you can go ahead and change the stuff with less worrying...
If you have kernel 2.2.10, like I did, and 2.2.12 is the current stable release (or at least it is as I am
writing this) you need all of the patch files after 2.2.10. So in my case, I needed to get patch-
2.2.11.gz and patch-2.2.12.gz
http://www.kernelnotes.org is where I got mine from, but I'm sure there are mirrors where you can
get the patches from, more on this is on www.kernelnotes.org.
Note: When I downloaded this file using netscape, it un-gzipped it for me as it was downloading... so
I didn't have to do the following step that you would have to do if you were using a program such as
'ftp'
un-gzip the file by doing the following:
# gzip -d patch-2.2.11.gz
# gzip -d patch-2.2.12.gz
This will leave you with patch-2.2.11 and patch-2.2.12 (unless you downloaded the file with
netscape, and this step would already have been done for you)
Now move the files to your kernel source directory using the mv command:
mv patch-2.2.* /usr/src/linux-2.2.10
Now change into your kernel source directory (/usr/src/linux-2.2.10 in my case)
Now you need to apply the patches to the source... Order is important here. Start with the lowest and
go to the highest, like the following:
# patch -p1 < patch-2.2.11
# patch -p1 < patch-2.2.12
Both of these commands will give you lots of output telling you what files are being patched, etc.
After I applied the patches, I went ahead and renamed my source directory to reflect the patches
applied (mv /usr/src/linux-2.2.10 /usr/src/linux-2.2.12) and then I removed the old /usr/src/linux link
and replaced it with the new location (rm /usr/src/linux and then ln -s /usr/src/linux-2.2.12
/usr/src/linux)
Now just compile your kernel
Patch
http://www.cyberciti.biz/tips/how-to-patch-running-linux-kernel.html
Patch O Matic
http://www.fifi.org/doc/iptables-dev/html/netfilter-extensions-HOWTO-2.html
Faq
http://kernelnewbies.org/FAQ
Kernel Tuning
Kernel tuning with sysctl
The Linux kernel is flexible, and you can even modify the way it works on the fly by dynamically
changing some of its parameters, thanks to the sysctl command. Sysctl provides an interface that
allows you to examine and change several hundred kernel parameters in Linux or BSD. Changes
take effect immediately, and there's even a way to make them persist after a reboot. By using sysctl
judiciously, you can optimize your box without having to recompile your kernel, and get the results
immediately.
To start getting a taste of what sysctl can modify, run sysctl -a and you will see all the possible
parameters. The list can be quite long: in my current box there are 712 possible settings.
$ sysctl -a
kernel.panic = 0
kernel.core_uses_pid = 0
kernel.core_pattern = core
kernel.tainted = 129
If you want to get the value of just a single variable, use something like sysctl vm.swappiness, or just
sysctl vm to list all variables that start with "vm." Add the -n option to output just the variable
values, without the names; -N has the opposite effect, and produces the names but not the values.
You can change any variable by using the -w option with the syntax sysctl -w variable=value. For
example, sysctl -w net.ipv6.conf.all.forwarding=1 sets the corresponding variable to true (0 equals
"no" or "false"; 1 means "yes" or "true") thus allowing IP6 forwarding. You may not even need the
-w option -- it seems to be deprecated. Do some experimenting on your own to confirm that.
sysctl values are loaded at boot time from the /etc/sysctl.conf file. This file can have blank lines,
comments (lines starting either with a "#" character or a semicolon), and lines in the
"variable=value" format. For example, my own sysctl.conf file is listed below. If you want to apply
it at any time, you can do so with the command sysctl -p.
# Disable response to broadcasts.
net.ipv4.icmp_echo_ignore_broadcasts = 1
# enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1
Increasing the TCP send and receive buffers will increase the performance a lot if (and only if) you
have a lot of large files to send.
* net.ipv4.tcp_wmem = 4096 65536 524288
* net.core.wmem_max = 1048576
If you have a lot of large file uploads, increasing the receive buffers will help.
* net.ipv4.tcp_rmem = 4096 87380 524288
* net.core.rmem_max = 1048576
# These ensure that TIME_WAIT ports either get reused or closed fast.
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_tw_recycle = 1
# TCP memory
net.core.rmem_max = 16777216
net.core.rmem_default = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
# you shouldn’t be using conntrack on a heavily loaded server anyway, but these are
# suitably high for our uses, insuring that if conntrack gets turned on, the box doesn’t die
net.ipv4.ip_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
A third set of examples:
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
Reference
http://shebangme.blogspot.com/2010/07/kernel-sysctl-configuration-file-for.html
swappiness
http://www.linux.com/archive/feature/146599
Device driver
http://en.wikipedia.org/wiki/Device_driver
Lsmod
http://en.wikipedia.org/wiki/Lsmod
Modprobe
http://en.wikipedia.org/wiki/Modprobe
Oracle + sysctl
http://www.puschitz.com/TuningLinuxForOracle.shtml
http://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSHMMAXParameter
http://www.puschitz.com/TuningLinuxForOracle.shtml#TheSEMMSLParameter
Reference
http://www.linux.com/archive/feature/126718
http://www.fcicq.net/wp/?p=197
http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html
http://en.opensuse.org/Kernel_module_configuration
http://www.cyberciti.biz/tips/blade-server-disable-floppy-driver-module.html
Blacklist
Just open your /etc/modprobe.conf file and turn off auto-loading using the
following syntax:
alias driver-name off
If you are using Debian / Ubuntu Linux...
The kernel offers you something that allows you to recover your system from a crash, or at least lets
you perform a proper shutdown, using the Magic SysRq keys. The magic SysRq key is a select
key combination in the Linux kernel which allows the user to perform various low level commands
regardless of the system’s state using the SysRq key. It is often used to recover from freezes, or to
reboot a computer without corrupting the filesystem.
The magic Sysrq key basically has a key combination of <ALT> + <SysRq or Prnt Scrn> +
<Command key>.
The command key can be one of the following providing a specific functionality
‘b’ – Will immediately reboot the system without syncing or unmounting your disks.
‘h’ – Will display help (actually any key other than those listed here will display help, but ‘h’
is easy to remember)
‘i’ – Send a SIGKILL to all processes, except for init.
‘k’ – Secure Access Key (SAK) Kills all programs on the current virtual console. NOTE: See
important comments below in SAK section.
‘o’ – Will shut your system off (if configured and supported).
‘p’ – Will dump the current registers and flags to your console.
‘t’ – Will dump a list of current tasks and their information to your console.
‘0’-‘9’ – Sets the console log level, controlling which kernel messages will be printed to your
console. (‘0’, for example, would make it so that only emergency messages like PANICs or
OOPSes would make it to your console.)
Ref -
http://www.susegeek.com/general/linux-kernel-magic-sysrq-keys-in-opensuse-for-crash-recovery/
http://www.cyberciti.biz/tips/reboot-linux-box-after-a-kernel-panic.html
http://www.cyberciti.biz/tips/reboot-or-halt-linux-system-in-emergency.html
A driver typically communicates with the device through the computer bus or communications
subsystem to which the hardware connects. When a calling program invokes a routine in the driver,
the driver issues commands to the device. Once the device sends data back to the driver, the driver
may invoke routines in the original calling program. Drivers are hardware-dependent and operating-
system-specific. They usually provide the interrupt handling required for any necessary
asynchronous time-dependent hardware interface.
Operating systems
# mknod /dev/ttyS0 c 4 64
# chown root.dialout /dev/ttyS0
# chmod 0644 /dev/ttyS0
# ls -l /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Oct 23 18:23 /dev/ttyS0
As you can see, many more steps are required to create the file. In this example you can see the
process required however. It is unlikely in the extreme that the ttyS0 file would not be provided by
the MAKEDEV script, but it suffices to illustrate the point.
$mknod /opt/champu b 3 10
$mount /opt/champu /home
1. lsmod
2. insmod
3. rmmod
4. modprobe
5. modinfo
6. depmod
lsmod is a command on Linux systems which prints the contents of the /proc/modules file. It shows
which loadable kernel modules are currently loaded.
Abridged example output:
# lsmod
Module Size Used by
af_packet 27392 2
8139too 30592 0
snd_cs46xx 96872 3
snd_pcm_oss 55808 1
snd_mixer_oss 21760 2 snd_pcm_oss
ip6table_filter 7424 1
ip6_tables 19728 1 ip6table_filter
ipv6 290404 22
xfs 568384 4
sis900 18052 5
libata 169920 1 pata_sis
scsi_mod 158316 3 usb_storage,sd_mod,libata
usbcore 155312 6 ohci_hcd, usb_storage, usbhid
lsmod
The first column is the module name and the second column is the size of the module, i.e.
the output format is: module name, size, use count, list of referring
modules.
modprobe is a Linux program originally written by Rusty Russell used to add a loadable kernel
module (LKM) to the Linux kernel or remove an LKM from the kernel. It is commonly used
indirectly as udev relies upon modprobe to load drivers for automatically detected hardware.
Networking
Tools
$ifconfig
$neat-tui
$/etc/sysconfig/network-scripts/ifcfg-eth0
$netconfig
$ethtool
$ip r l
$telnet
$nmap
$netstat
$ping
$route
$traceroute
$tcpdump - network traffic capture tool
$iptraf - monitor network traffic; curses-based tool - self-explanatory
$ethereal - network analyzer which does data capture and filtering (now known as Wireshark)
$tethereal - text-mode version; captures and displays only the high-level protocols
eth1 Link encap:Ethernet HWaddr 00:60:CC:AA:2C:9C
inet addr:192.168.0.20 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 b) TX bytes:240 (240.0 b)
Interrupt:11 Base address:0xc000
$ ifconfig eth0:x IP --> Set an additional (aliased/multiplexed) IP on eth0
$ ifconfig eth0 down --> Bring eth0 down
$ ifdown eth0 --> ditto
$ ifconfig eth0 up --> Bring eth0 up
$ ifup eth0 --> ditto
$ ifconfig eth0 -arp --> Disable use of arp protocol on this interface
$ man ethtool
There are two types of ping -
The std Unix ping, which sends an ICMP ECHO REQUEST and receives an ICMP ECHO REPLY from
the remote host if it is UP and running.
The other is to send a UDP or TCP pkt to port 7 [echo] of the remote host and check that whatever
you type is echoed back; if so, the host is UP.
When a pkt bound for some other host arrives, the path is found by matching the dest IP addr against
the Kernel Routing Table [KRT]. If it matches a route in the KRT, the pkt is fwd'ed to the 'next-hop
gateway' IP addr associated with the route.
Two special cases are possible here:
Case I: The pkt may be destined for some host on a directly connected NW. In this case the 'next-
hop gateway' IP addr in the KRT will be one of the localhost's own interfaces and the pkt is sent
directly to its dest. This type of route is what gets added when you run the ifconfig cmd to
configure an interface.
Case II: No route in the KRT matches the dest addr that the pkt wishes to reach. The default
route [Gateway] is invoked. Or an error. Most NWs have only one way out and that is the default
route. On the Internet backbone, the routers do not have default routes. The buck stops here. If they
do not have a routing entry for a dest, the dest cannot be reached and a "network unreachable" ICMP
error is sent to the sender
The KRT contains info like "To get to NW X from m/c Y, send the pkt to m/c Z with a cost of 1
[metric]", along with TTL and reliability values for that route.
Routing Policy:
Static routes : For small unconnected NWs
Dynamic routes : Many subnets, large NWs, connected to the Internet
Static/Dyn :
$ route
$ route -n
Kernel IP routing table
Destination        Gateway           Genmask           Flags  Metric  Ref  Use  Iface
1. 132.236.227.0   132.236.227.93    255.255.255.0     U      0       0    0    eth0
2. 132.236.212.0   132.236.212.1     255.255.255.192   U      0       0    0    eth1
3. 127.0.0.1       0.0.0.0           255.255.255.0     U      0       0    0    lo
4. default         132.236.227.1     0.0.0.0           UG     0       0    0    eth0
5. 132.236.220.64  132.236.212.6     255.255.255.192   UG     0       0    0    eth1
5. 132.236.220.64 132.236.212.6 255.255.255.192 UG 0 0 0 eth1
Routes 1 and 2 were added by ifconfig when the eth0 and eth1 interfaces were configured at bootup
This means to reach machine 132.236.227.93 on the NW 132.236.227.0 the GW is machine
132.236.227.93 - the machine itself is its GW which implies it can be reached directly on this NW
and one has to go to no other m/c to consult.
Ditto for the next one.
Route 3 is the loopback interface, a pseudo-device that prevents pkts sent from the host to itself from
going out on the NW; instead, they are transferred directly.
Route 4 is the default route, added with: route add default gw 132.236.227.1 eth0
It says :
Pkts not explicitly addressed to any of the 3 NWs listed [or to the m/c itself] will be sent to the
default GW host, 132.236.227.1
Route 5 says :
To reach NW 132.236.220.64/26, pkts must be sent to GW host 132.236.212.6 thru eth1.
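A route like number 5 could have been added manually with something along these lines (the addresses
are taken from the table above):
$ route add -net 132.236.220.64 netmask 255.255.255.192 gw 132.236.212.6 dev eth1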
SYN_SENT
The socket is actively attempting to establish a connection.
Debug Note :
If you find a connection that stays in this state, then a local process is trying very hard to
contact a non-existent or inaccessible NW server.
SYN_RECV
A connection request has been received from a remote host and is being initialized
FIN_WAIT1
The socket is closed, and the connection is shutting down.
FIN_WAIT2
Connection is closed, and the socket is waiting for a shutdown from the remote end.
TIME_WAIT
The socket is waiting after close to handle packets still in the network.
CLOSED The socket is not being used.
CLOSE_WAIT
The remote host end has shut down its connection, and the localhost is waiting for the
socket to close.
LAST_ACK
The remote end has shut down, and the socket is closed. Waiting for acknowledgement.
LISTEN The socket is listening for incoming connections.Specify -l option to see this.
CLOSING
Both sockets are shut down but we still don’t have all our data sent.
UNKNOWN
The state of the socket is unknown.
USER The login ID of the user who owns the socket
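To see these states and columns in practice, a couple of typical invocations (standard netstat flags) are:
$ netstat -tanp     # all TCP sockets, numeric addresses, with owning PID/program
$ netstat -tlnp     # only listening TCP sockets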
FTP
Active FTP
Passive FTP
Users
Regular FTP
Anonymous FTP
Vsftpd.conf
# Allow anonymous FTP?
anonymous_enable=YES
# The directory which vsftpd will try to change into after an anonymous login. (Default = /var/ftp)
anon_root=/data/directory
# Uncomment this to allow local users to log in.
local_enable=YES
# Uncomment this to enable any form of FTP write command.
# (Needed even if you want local users to be able to upload files)
write_enable=YES
# Uncomment to allow the anonymous FTP user to upload files. This only
# has an effect if global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
# anon_upload_enable=YES
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
# anon_mkdir_write_enable=YES
# Activate logging of uploads/downloads.
xferlog_enable=YES
# You may override where the log file goes if you like.
# The default is shown below.
xferlog_file=/var/log/vsftpd.log
Descriptions on this and more can be found in the vsftpd.conf man pages.
Anonymous upload
mkdir /var/ftp/pub/upload
chmod 722 /var/ftp/pub/upload
To change the login banner, set in vsftpd.conf:
ftpd_banner=New Banner Here
To disable all FTP write commands:
write_enable=NO
Check files under the following
$ cd /etc/vsftpd/
$ ls
ftpusers users_list vsftpd.conf vsftpd.conf_migrate.sh
Types of FTP
From a networking perspective, the two main types of FTP are active and passive. In active FTP, the
FTP server initiates a data transfer connection back to the client. For passive FTP, the connection is
initiated from the FTP client. These are illustrated in Figure 15-1.
Figure 15-1 Active and Passive FTP Illustrated
From a user management perspective there are also two types of FTP: regular FTP in which files are
transferred using the username and password of a regular user FTP server, and anonymous FTP in
which general access is provided to the FTP server using a well known universal login method.
Take a closer look at each type.
Active FTP
The sequence of events for active FTP is:
1. Your client connects to the FTP server by establishing an FTP control connection to port 21
of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2. Whenever the client requests data over the control connection, the server initiates data
transfer connections back to the client. The source port of these data transfer connections is always
port 20 on the server, and the destination port is a high port (greater than 1024) on the client.
3. Thus the ls listing that you asked for comes back over the port 20 to high port connection, not
the port 21 control connection.
FTP active mode therefore transfers data in a way that is counter-intuitive to the TCP standard, as it selects
port 20 as its source port (not a random high port greater than 1024) and connects back to the
client on a random high port that has been pre-negotiated on the port 21 control connection.
Active FTP may fail in cases where the client is protected from the Internet via many-to-one NAT
(masquerading). This is because the firewall will not know which of the many clients behind it
should receive the return connection.
Passive FTP
Passive FTP works differently:
1. Your client connects to the FTP server by establishing an FTP control connection to port 21
of the server. Your commands such as ls and get are sent over that connection.
2. Whenever the client requests data over the control connection, the client initiates the data
transfer connections to the server. The source port of these data transfer connections is always a high
port on the client with a destination port of a high port on the server.
Passive FTP should be viewed as the server never making an active attempt to connect to the client
for FTP data transfers. Because the client always initiates the required connections, passive FTP works
better for clients protected by a firewall.
As Windows defaults to active FTP, and Linux defaults to passive, you'll probably have to
accommodate both forms when deciding upon a security policy for your FTP server.
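For quick testing of the two modes from a Linux client, the standard command-line clients can switch between them; a minimal sketch (the server name is a placeholder):
$ ftp ftp.example.com      # type 'passive' at the ftp> prompt to toggle passive mode
$ lftp -e 'set ftp:passive-mode off; ls; bye' ftp.example.com   # force active mode in lftp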
Regular FTP
By default, the VSFTPD package allows regular Linux users to copy files to and from their home
directories with an FTP client using their Linux usernames and passwords as their login credentials.
VSFTPD also has the option of allowing this type of access to only a group of Linux users, enabling
you to restrict the addition of new files to your system to authorized personnel.
The disadvantage of regular FTP is that it isn't suitable for general download distribution of software
as everyone either has to get a unique Linux user account or has to use a shared username and
password. Anonymous FTP allows you to avoid this difficulty.
Anonymous FTP
Anonymous FTP is the choice of Web sites that need to exchange files with numerous unknown
remote users. Common uses include downloading software updates and MP3s and uploading
diagnostic information for a technical support engineer's attention. Unlike regular FTP where you
login with a preconfigured Linux username and password, anonymous FTP requires only a username
of anonymous and your email address for the password. Once logged in to a VSFTPD server, you
automatically have access to only the default anonymous FTP directory (/var/ftp in the case of
VSFTPD) and all its subdirectories.
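A quick anonymous login from a client might look like this (the server address is an assumption):
$ ftp 192.168.1.40
Name (192.168.1.40:user): anonymous
Password: user@example.com     # any email-style string is accepted
ftp> cd pub
ftp> ls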
Client Protected By A Firewall Problem
Solution
The table below shows the general rules you'll need to allow FTP clients through a firewall:
Method          Source Address      Source Port  Destination Address  Destination Port  Connection Type
Allow outgoing control connections to server:
Control channel FTP client/network  High(1)      FTP server(2)        21                New
                FTP server(2)       21           FTP client/network   High              Established(3)
Allow the client to establish data channels to remote server:
Active FTP      FTP server(2)       20           FTP client/network   High              New
                FTP client/network  High         FTP server(2)        20                Established(3)
Passive FTP     FTP client/network  High         FTP server(2)        High              New
                FTP server(2)       High         FTP client/network   High              Established(3)
(1) Greater than 1024.
(2) In some cases, you may want to allow all Internet users to have access, not just a specific client
or network.
(3) Many home-based firewall/routers automatically allow traffic for already established connections.
This rule may not be necessary in all cases.
Server Protected By A Firewall Problem
Typically firewalls don't let any connections come in at all. When an incorrectly configured
firewall protects an FTP server, the FTP connection from the client doesn't appear to work at all, for
both active and passive FTP.
Solution
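A minimal iptables sketch for an FTP server behind the local firewall (port ranges and the choice of rules are assumptions; the ip_conntrack_ftp module lets the stateful rules track the related data connection):
$ modprobe ip_conntrack_ftp
$ iptables -A INPUT  -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
$ iptables -A OUTPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
$ iptables -A INPUT  -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT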
chrooted ftp users
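A sketch of the vsftpd.conf directives commonly used to chroot local FTP users to their home directories (the list file path is an assumption):
chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list   # users listed here are exempted from the chroot
Then restart the service:
$ service vsftpd restart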
NFS
NFSv2 uses the User Datagram Protocol (UDP) to provide a stateless network connection between
the client and server. NFSv3 can use either UDP or Transmission Control Protocol (TCP) running
over an IP network
/lib/modules/2.4.20-8/kernel/fs/nfsd/nfsd.
NFS port – 2049
NFS relies on Remote Procedure Calls (RPC) to route requests between clients and servers. RPC
services under Linux are controlled by the portmap service. To share or mount NFS file systems, the
following services work together:
nfs - Starts the appropriate RPC processes to service requests for shared NFS file systems.
nfslock - An optional service that starts the appropriate RPC processes to allow NFS clients to
lock files on the server.
portmap - The RPC service for Linux; it responds to requests for RPC services and sets up
connections to the requested RPC service.
The following RPC processes work together behind the scenes to facilitate NFS services:
rpc.mountd - This process receives mount requests from NFS clients and verifies the
requested file system is currently exported. This process is started automatically by the nfs service
and does not require user configuration.
rpc.nfsd - This process is the NFS server. It works with the Linux kernel to meet the dynamic
demands of NFS clients, such as providing server threads each time an NFS client connects. This
process corresponds to the nfs service.
rpc.lockd - An optional process that allows NFS clients to lock files on the server. This
process corresponds to the nfslock service.
rpc.statd - This process implements the Network Status Monitor (NSM) RPC protocol which
notifies NFS clients when an NFS server is restarted without being gracefully brought down. This
process is started automatically by the nfslock service and does not require user configuration.
rpc.rquotad - This process provides user quota information for remote users. This process is
started automatically by the nfs service and does not require user configuration.
Because RPC-based services rely on portmap to make all connections with incoming client requests,
portmap must be available before any of these services start.
The portmap service uses TCP wrappers for access control, and access control rules for portmap
affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of
the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding
the precise syntax for these rules.
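A sketch of bringing these services up in the right order on a RHEL-style system, and verifying that they registered with portmap:
$ service portmap start
$ service nfs start
$ service nfslock start
$ rpcinfo -p localhost    # should list portmapper, mountd, nfs, nlockmgr, status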
/etc/hosts
1. /var/lib/nfs/xtab
2. /var/lib/nfs/rmtab
rpc.mountd
The rpc.mountd program implements the NFS mount protocol. When receiving a MOUNT request
from an NFS client, it checks the request against the list of currently exported file systems. If the
client is permitted to mount the file system, rpc.mountd obtains a file handle for requested
directory and returns it to the client.
Alternatively, you can export individual directories temporarily using exportfs host:/directory syntax.
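For example (the host and path are assumptions):
$ exportfs -o rw,sync 192.168.1.65:/opt/data    # temporary export
$ exportfs -v                                   # list what is currently exported
$ exportfs -u 192.168.1.65:/opt/data            # un-export it again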
uid matching and read-only filesystem
Portmap is a server that converts RPC program numbers into DARPA protocol port numbers. It must
be running in order to make RPC calls.
When an RPC server is started, it will tell portmap what port number it is listening to, and what RPC
program numbers it is prepared to serve. When a client wishes to make an RPC call to a given
program number, it will first contact portmap on the server machine to determine the port number
where RPC packets should be sent.
$ cat /etc/exports
/data/files *(ro,sync)
/home 192.168.1.0/24(rw,sync)
/data/test *.my-site.com(rw,sync)
/data/database 192.168.1.203/32(rw,sync)
/etc/exports option
$cat /proc/fs/nfs/exports (runtime check)
Try the following
Edit the exports file:
/opt/data 192.168.1.65(rw,wdelay,root_squash,no_subtree_check,anonuid=65534,anongid=65534) 192.168.1.45(rw,no_root_squash)
Try mounting from the client (in my case the client IP is 192.168.1.65) and check the permissions as
root and as a normal user; also try from the client 192.168.1.45 and check the permissions.
Client
$ mount 192.168.1.75:/opt/data /data
More options
/opt/data station1(rw,wdelay,all_squash,anonuid=150,anongid=100)
Now try mounting from 192.168.1.65 (point station1 --> 192.168.1.65 in the /etc/hosts file) and check
the permissions as a login user and as root.
More options
We can also restrict by domain.
/opt/deploy/stic *.example.com(rw) *.workgroup.com(ro,sync)
all_squash
Map all uids and gids to the anonymous user. Useful for NFS-exported public FTP directories,
news spool directories, etc. The opposite option is no_all_squash, which is the default setting.
anonuid and anongid
These options explicitly set the uid and gid of the anonymous account. This option is primarily
useful for PC/NFS clients, where you might want all requests to appear to be from one user. As an
example, consider the export entry for /home/joe in the example section below, which maps all
requests to uid 150 (which is supposedly that of user joe).
General Options
exportfs understands the following export options:
secure
This option requires that requests originate on an internet port less than IPPORT_RESERVED
(1024). This option is on by default. To turn it off, specify insecure.
rw
Allow both read and write requests on this NFS volume. The default is to disallow any request
which changes the filesystem. This can also be made explicit by using the ro option.
async
This option allows the NFS server to violate the NFS protocol and reply to requests before any
changes made by that request have been committed to stable storage (e.g. a disc drive). Using this
option usually improves performance, but at the cost that an unclean server restart (i.e. a crash) can
cause data to be lost or corrupted.
sync
Reply to requests only after the changes have been committed to stable storage (see async
above). In releases of nfs-utils up to and including 1.0.0, this option was the default. In this and future
releases, sync is the default, and async must be explicitly requested if needed. To help make system
administrators aware of this change, 'exportfs' will issue a warning if neither sync nor async is
specified.
no_wdelay
This option has no effect if async is also set. The NFS server will normally delay
committing a write request to disc slightly if it suspects that another related write request may be
in progress or may arrive soon. This allows multiple write requests to be committed to disc
with one operation, which can improve performance. If an NFS server receives mainly small
unrelated requests, this behaviour could actually reduce performance, so no_wdelay is available to
turn it off. The default can be explicitly requested with the wdelay option.
subtree_check
This option enables subtree checking, which does add another level of security, but can be
unreliable in some circumstances. If a subdirectory of a filesystem is exported, but the whole
filesystem isn't, then whenever an NFS request arrives, the server must check not only that the
accessed file is in the appropriate filesystem (which is easy) but also that it is in the exported tree
(which is harder). This check is called the subtree_check.
In order to perform this check, the server must include some information about the location of
the file in the "filehandle" that is given to the client. This can cause problems with accessing
files that are renamed while a client has them open (though in many simple cases it will still work).
Try Mount Options in fstab
192.168.1.33:/opt/deploy/stic /mnt nfs rw,hard,intr,rsize=8192,wsize=8192 0 0
bg
Retry mounting in the background if mounting initially fails
fg
Mount in the foreground
soft Use soft mounting
If an NFS file operation has a major timeout then report an I/O error to the calling program. The
default is to continue retrying NFS file operations indefinitely.
hard Use hard mounting
If an NFS file operation has a major timeout then report "server not responding" on the console and
continue retrying indefinitely. This is the default.
rsize=n
The amount of data NFS will attempt to access per read operation. The default is dependent on the
kernel. For NFS version 2, set it to 8192 to assure maximum throughput.
wsize=n
The amount of data NFS will attempt to access per write operation. The default is dependent on the
kernel. For NFS version 2, set it to 8192 to assure maximum throughput.
nfsvers=n
The version of NFS the mount command should attempt to use.
tcp
Attempt to mount the filesystem using TCP packets; the default is UDP.
intr
If the filesystem is hard mounted and the mount times out, allow for the process to be aborted using
the usual methods such as CTRL-C and the kill command.
nolock
Disable NFS locking. Do not start lockd. This has to be used with some old NFS servers that don't
support locking.
noexec - Prevents execution of binaries on mounted file systems. This is useful if the system is
mounting a non-Linux file system via NFS containing incompatible binaries.
intr - Allows NFS requests to be interrupted if the server goes down or cannot be reached.
nfsvers=2 or nfsvers=3 - Specifies which version of the NFS protocol to use.
nosuid - Disables set-user-identifier or set-group-identifier bits. This prevents remote users from
gaining higher privileges by running a setuid program.
There are many other options. But the above ones are very important.
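A sketch combining several of these options in a single mount command (the server address and paths are taken from the practicals below):
$ mount -t nfs -o rw,hard,intr,nfsvers=3,rsize=8192,wsize=8192 192.168.1.40:/mnt/test /mnt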
Practicals:
On server pc (server ip – 192.168.1.40)
First check whether nfs is installed or not
$ rpm -qa | grep -i nfs
nfs-utils-1.0.9-44.el5
nfs-utils-lib-1.0.8-7.6.el5
If you have the installation DVD, then run the following command:
$ rpm -ivh nfs-utils-1.0.9-44.el5
Add the shares to /etc/exports:
/mnt/test *
/opt/funny *
$ nfsstat -> NFS statistics
From client side (client ip – 192.168.1.50)
$ service nfs start (as root)
$ showmount -e 192.168.1.40
/mnt/test *
/opt/funny *
Mount the shared partition as follows:
$ mount 192.168.1.40:/mnt/test /mnt/
$ cd /mnt/
$ ls
Create any file on the client side and check on the server PC: you will see the same file.
To mount the directory permanently, add the following entries to the fstab file:
$ vi /etc/fstab
192.168.1.40:/mnt/test /mnt nfs _netdev,defaults 0 0
192.168.1.40:/opt/funny /opt nfs _netdev,defaults 0 0
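A quick check that the fstab entries work (sketch):
$ mount -a     # mounts everything listed in fstab, including the new NFS entries
$ df -hT       # the Type column should show 'nfs' for the new mounts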
Network Info Service (NIS)
What does it do ?
NIS (the Network Information Service) lets a group of machines share common configuration maps (passwd, group, hosts, and so on) served from a central master server, so users and hosts are administered in one place.
Advantages of NIS :
Alert : Disable Firewalls or NIS will not work since, by default, RPC services are blocked by FWs
[in RHL]
Exam Alert :
1. Make sure you put FWs off before doing anything in NIS
2. Make sure you start portmap BEFORE you start NFS
3. Make sure you use 'showmount -e' before you do any 'mount'
4. All errors are mostly due to incorrect syntax in /etc/exports and especially watch out for the
space between the options ()
5. If you use 'authconfig' for client config, make sure you disable it in 'ntsysv' asap
6. Check /etc/securenets if you are being blocked or wish to block others
7. Check /etc/ypserv.conf if you cannot query maps or wish to restrict map queries
rpcinfo -p localhost
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100009 1 udp 681 yppasswdd
100004 2 udp 698 ypserv
100004 1 udp 698 ypserv
100004 2 tcp 701 ypserv
100004 1 tcp 701 ypserv
1. Configure /etc/exports
$ vi /etc/exports
/home *(rw,sync)
$ exportfs -a
2. $ service portmap restart
3. $ service nfs restart
Testing : Check if all is ok with
$ rpcinfo -p
$ showmount -e localhost
1. service portmap restart
2. showmount -e NFSServer
3. mount NFSserver:/home /home
4. Configure yp.conf
Do exactly the same as for server
5. service ypbind restart
or
use authconfig* which will do 4,5,6 automatically
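A minimal sketch of the client-side files that this step amounts to (the domain name and server IP are assumptions):
# /etc/yp.conf
domain altnix server 192.168.1.40
# /etc/nsswitch.conf (relevant lines)
passwd: files nis
shadow: files nis
group:  files nis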
You have to have the 'service yppasswdd' running on the server to do the following:
$ ypchfn foo <-- Change finger info of foo in NIS pwd db map
$ yppasswd foo <-- Change foo's pwd in NIS pwd db map
i.e. /var/yp/altnix/passwd.byname passwd.byuid
Note : The user's local system login pwd changes accordingly because yppasswd* actually changes the passwd
of foo in /etc/passwd and then pushes the changes to two other files :
/var/yp/altnix/passwd.byname
/var/yp/altnix/passwd.byuid
$ ypmatch nisuser passwd
$ getent passwd nisuser
* on server
* This file (/etc/ypserv.conf) is used to control hosts/users who can use your NIS server
syntax : host:domain:map:security
files: 30
This option specifies how many database files should be cached by ypserv. If 0 is
specified, caching is disabled. Decreasing this number is only possible if ypserv is restarted.
xfr_check_port: [<yes>|no]
With this option enabled, the NIS master server has to run on a port < 1024. The default is
"yes" (enabled).
The field descriptions for the access rule lines are:
domain - specifies the domain for which this rule should be applied. An asterisk as a wildcard is allowed.
port - allow access only if the request comes from a port < 1024; otherwise do not allow access.
eg 192.168.0.10:*:*:deny
eg *:*:passwd.byname:deny <-- All hosts/users will be denied access from any domain to the
following map
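The securenets file mentioned in the exam alert above (commonly /var/yp/securenets) limits which networks may query the NIS server at all; a sketch with assumed networks:
255.0.0.0       127.0.0.0
255.255.255.0   192.168.1.0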
****************
yppasswd -p nisuser
Changing NIS account information for nisuser on bigboy.my-site.com.
Please enter root password:
Changing NIS password for nisuser on bigboy.my-site.com.
Please enter new password:
Please retype new password:
NIS Troubleshooting
Troubleshooting is always required as part of your daily routine; NIS is no exception. Here are
some simple steps to follow to get it working again.
1. The rpcinfo command provides a list of TCP ports that your NIS client or server is using. Make sure you can
TELNET to these ports from the client to the server and vice versa. If this fails, make sure all the
correct NIS daemons are running and that there are no firewalls blocking traffic on the network or on
the servers themselves. These ports change from time to time, so memorizing them won't help much.
100004 1 tcp 614 ypserv
100007 2 udp 855 ypbind
100007 1 udp 855 ypbind
100007 2 tcp 858 ypbind
100007 1 tcp 858 ypbind
600100069 1 udp 874 fypxfrd
600100069 1 tcp 876 fypxfrd
[root@bigboy tmp]#
2. Always use the ypmatch, getent, and ypwhich commands to check your NIS connectivity. If there
is any failure, check your steps over again and you should be able to find the source of your problem.
3. Do not fail to create a user's home directory, set its permissions, and copy the /etc/skel files
correctly. If you forget, which is a common error, your users may have incorrect login prompts and
no ability to create files in their home directories.
It can never be overemphasized that one of the best places to start troubleshooting is by looking in
your error log files in the /var/log directory. You'll save a lot of time and effort if you always refer to
them whenever the problem doesn't appear to be obvious.
Installation of autofs.
Install autofs using rpm package.
[root@tenouk ~]# mount /dev/cdrom
[root@tenouk ~]# cd /mnt/cdrom/RedHat/RPMS
[root@tenouk ~]# rpm -Uhv autofs-3.1.7-28.i386.rpm
[root@tenouk ~]# cd /
[root@tenouk ~]# umount /dev/cdrom
Start, stop and restart autofs.
[root@tenouk ~]# /sbin/service autofs start
[root@tenouk ~]# /sbin/service autofs stop
[root@tenouk ~]# /sbin/service autofs restart
Setting of autofs automatic start.
[root@tenouk ~]# /sbin/chkconfig --level 35 autofs on
Confirmation of autofs automatic start
[root@tenouk ~]# /sbin/chkconfig --list autofs
Settings for using autofs with NIS
The configuration on the NIS server
[root@tenouk ~]# vi /etc/auto.master
/nfs /etc/auto.home --timeout 60
[root@tenouk ~]# vi /etc/auto.home
home -rw,hard,intr,nolock compaq:/home
[root@tenouk ~]# vi /var/yp/Makefile
all: passwd group hosts rpc services netid protocols mail \
shadow auto.home \
# netgrp shadow publickey networks ethers bootparams printcap \
# amd.home auto.master auto.home auto.local passwd.adjunct \
# timezone locale netmasks
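After editing the Makefile, the maps would be rebuilt and pushed to the clients; a one-line sketch:
[root@tenouk ~]# cd /var/yp && make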
Allow normal users to mount Linux partitions and USB stick / pen devices
by nixcraft
You need to use autofs. It is used to mount file systems on demand. Usually autofs is invoked at
system boot time with the start parameter and at shutdown time with the stop parameter. The autofs
script can also manually be invoked by the system administrator to shut down, restart or reload the
automounters.
autofs will consult a configuration file /etc/auto.master to find mount points on the system.
i) Install autofs if not installed. If you are using Debian / Ubuntu Linux, enter:
# apt-get install autofs
ii) Create the desktop group and add user jimmy to this group:
# groupadd desktop
# usermod -G video,desktop jimmy
# chmod -R a+rx /var/autofs/misc
iii) Configure autofs so that usb stick can be accessed:
# vi /etc/auto.misc
iv) Append following text to auto.misc:
usb -fstype=auto,user,sync,nodev,nosuid,gid=desktop,umask=002 :/dev/sda1
d -fstype=vfat,user,sync,nodev,nosuid,gid=desktop,umask=002 :/dev/hda2
Where,
usb : is the directory name, which can be accessed via the /var/autofs/misc/usb directory. Users in the
desktop group just need to type the cd command (cd /var/autofs/misc/usb) to change to the directory.
-fstype=auto,user,sync,nodev,nosuid,gid=desktop,umask=002 :- All these are options
used to mount the file system by the automounter.
auto: File system is automatically determined by kernel.
user: Normal users are allowed to mount devices
nodev: Do not interpret character or block special devices on the file system.
nosuid: Do not allow set-user-identifier or set-group-identifier bits to take effect. This is a
security feature.
gid=desktop: This causes the file system to be mounted as group desktop. We have already added user
jimmy to this group.
umask=002: Setup umask so that users in group desktop can write data to device.
Please note that without the gid and umask options, normal users cannot write data to the device.
v) Restart autofs:
#/etc/init.d/autofs restart
vi) Test it as user jimmy (make sure usb stick/pen is inserted into usb port):
$ ls /var/autofs/misc/usb
$ cd /var/autofs/misc/usb
$ mkdir testdir
$ ls -l
DHCP: Dynamic Host Configuration Protocol
The DHCP server, client and relay agent are provided both as reference implementations of the
protocol and as working, fully-featured sample implementations.
Both the client and the server provide functionality that, while not strictly required by the protocol, is
very useful in practice. The DHCP server also makes allowances for non-compliant clients which one
might still like to support.
This tutorial describes how to set up a DHCP server (ISC-DHCP) for your local network.
DHCP is short for "Dynamic Host Configuration Protocol". It is a protocol that handles the assignment
of IP addresses, subnet masks, default routers, and other IP parameters to client PCs that don't have a
static IP address. Such computers try to find a DHCP server on their local network, which in turn
assigns them an IP address, gateway, etc. so that they can connect to the internet or to other computers
on the local network.
current situation:
* network 192.168.10.0, subnetmask 255.255.255.0, broadcast address 192.168.10.255.
* gateway to the internet is 192.168.10.10; on the gateway there's no DHCP server.
* DNS servers I can use are 202.88.130.15 and 202.88.130.67
* I have a pool of 30 IP addresses (192.168.10.200 - 192.168.10.229) that can be dynamically
assigned to client PCs and that are not already in use.
* The machine with IP address 192.168.10.10 will act as the DHCP server.
# You can also assign specific IP addresses based on the clients' ethernet MAC address
# as follows (the host's name is "win2k3box.altnix.com"):
host win2k3box.altnix.com {
next-server win2k3box.altnix.com;   # you could assign your own hostname
hardware ethernet 12:34:56:78:AB:CD;
fixed-address 192.168.10.100;
}
}
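The host block above would normally sit inside a subnet declaration. A minimal sketch of one, built from the "current situation" values listed earlier (addresses, range, gateway and DNS servers come from that list; the lease times are assumptions):
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.200 192.168.10.229;
  option routers 192.168.10.10;
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.10.255;
  option domain-name-servers 202.88.130.15, 202.88.130.67;
  default-lease-time 86400;
  max-lease-time 172800;
}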
- If you want to dig into more options, check the dhcp-options man page
NOTE:-
DHCP uses the BOOTP protocol for its communication between the client and server.
Make sure there are no firewalls blocking this traffic. DHCP servers expect requests on UDP port 67
and the DHCP clients expect responses on UDP port 68.
The DHCP server writes all current IP address "leases" to the file
/var/lib/dhcp3/dhcpd.leases so you should also find the lease there:
$cat /var/lib/dhcp3/dhcpd.leases
lease 192.168.10.229 {
starts 2 2006/09/19 14:01:31;
ends 3 2006/09/20 14:01:31;
binding state active;
next binding state free;
hardware ethernet 00:0c:76:8b:c4:16;
uid "\001\000\014v\213\304\026";
client-hostname "trinity";
}
What can a DHCP server provide to clients?
1. IP - range
2. netmask - 'option subnet-mask'
3. BC (broadcast address) - 'option broadcast-address'
4. nameserver - 'option domain-name-servers'
5. domain - 'option domain-name'
6. NIS domain - 'option nis-domain'
7. MAC addr-based IP - 'hardware ethernet' and 'fixed-address'
8. default lease time - 'default-lease-time'
9. max lease time - 'max-lease-time'
10. gateway - 'option routers'
For netbios/Samba: option netbios-node-type 2
option netbios-name-server
Advantages of DHCP :
1. Easy configuration when there are many clients
2. Saves IPs
3. Fixed IPs for certain clients
4. Automatic configuration of all of the above points
Disadvantages :
Not even one
ddns-update-style interim;
ignore client-updates;
option netbios-node-type 2 selects the point-to-point node type (the default is hybrid). Don't change this unless you understand NetBIOS very
well
DHCP Clients Obtaining 169.254.0.0 Addresses
Whenever Microsoft DHCP clients are unable to contact their DHCP server they default to selecting
their own IP address from the 169.254.0.0 network until the DHCP server becomes available again.
This is frequently referred to as Automatic Private IP Addressing (APIPA). Here are some steps you
can go through to resolve the problem:
Ensure that your DHCP server is configured correctly and use the pgrep command discussed
earlier to make sure the DHCP process is running. Pay special attention to your 255.255.255.255
route, especially if your DHCP server has multiple interfaces.
Give your DHCP client a static IP address from the same range that the DHCP server is
supposed to provide. See whether you can ping the DHCP server. If you cannot, double-check your
cabling and your NIC cards.
DHCP uses the BOOTP protocol for its communication between the client and server. Make
sure there are no firewalls blocking this traffic. DHCP servers expect requests on UDP port 67 and
the DHCP clients expect responses on UDP port 68. Use tcpdump on the server's NIC to verify the
correct traffic flows.
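A one-line sketch of that check (the interface name is an assumption):
$ tcpdump -i eth0 -n port 67 or port 68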
Other DHCP Failures
If the DHCP server fails to start then use your regular troubleshooting techniques outlined in Chapter
4, "Simple Network Troubleshooting", to help rectify your problems. Most problems with an initial
setup are often due to:
Incorrect settings in the /etc/dhcpd.conf file such as not defining the networks for which the
DHCP server is responsible;
Firewall rules that block the DHCP bootp protocol on UDP ports 67 and 68;
Routers failing to forward the bootp packets to the DHCP server when the clients reside on a
separate network.
Always check your /var/log/messages file for dhcpd errors and remember that mandatory keywords
in your configuration file may change when you upgrade your operating system. Always read the
release notes to be sure.
When a client is to be booted, its boot parameters are determined by consulting that client's host
declaration (if any), and then consulting any class declarations matching the client, followed by the
pool, subnet and shared-network declarations for the IP address assigned to the client. Each of these
declarations itself appears within a lexical scope, and all declarations at less specific lexical scopes
are also consulted for client option declarations. Scopes are never considered twice, and if
parameters are declared in more than one scope, the parameter declared in the most specific scope is
the one that is used.
When dhcpd tries to find a host declaration for a client, it first looks for a host declaration which has
a fixed-address declaration that lists an IP address that is valid for the subnet or shared network on
which the client is booting. If it doesn't find any such entry, it tries to find an entry which has no
fixed-address declaration.
EXAMPLES
A typical dhcpd.conf file will look something like this:
global parameters...
subnet 204.254.239.0 netmask 255.255.255.224 {
subnet-specific parameters...
range 204.254.239.10 204.254.239.30;
}
subnet 204.254.239.32 netmask 255.255.255.224 {
subnet-specific parameters...
range 204.254.239.42 204.254.239.62;
}
subnet 204.254.239.64 netmask 255.255.255.224 {
subnet-specific parameters...
range 204.254.239.74 204.254.239.94;
}
group {
group-specific parameters...
host zappo.test.isc.org {
host-specific parameters...
}
host beppo.test.isc.org {
host-specific parameters...
}
host harpo.test.isc.org {
host-specific parameters...
}
}
Notice that at the beginning of the file, there's a place for global parameters. These might be things
like the organization's domain name, the addresses of the name servers (if they are common to the
entire organization), and so on. So, for example:
option domain-name "isc.org";
option domain-name-servers ns1.isc.org, ns2.isc.org;
As you can see in Figure 2, you can specify host addresses in parameters using their domain names
rather than their numeric IP addresses. If a given hostname resolves to more than one IP address (for
example, if that host has two ethernet interfaces), then where possible, both addresses are supplied to
the client.
The most obvious reason for having subnet-specific parameters as shown in Figure 1 is that each
subnet, of necessity, has its own router. So for the first subnet, for example, there should be
something like:
option routers 204.254.239.1;
Note that the address here is specified numerically. This is not required - if you have a different
domain name for each interface on your router, it's perfectly legitimate to use the domain name for
that interface instead of the numeric address. However, in many cases there may be only one domain
name for all of a router's IP addresses, and it would not be appropriate to use that name here.
In Figure 1 there is also a group statement, which provides common parameters for a set of three
hosts - zappo, beppo and harpo. As you can see, these hosts are all in the test.isc.org domain, so it
might make sense for a group-specific parameter to override the domain name supplied to these
hosts:
option domain-name "test.isc.org";
Also, given the domain they're in, these are probably test machines. If we wanted to test the DHCP
leasing mechanism, we might set the lease timeout somewhat shorter than the default:
max-lease-time 120; default-lease-time 120;
TcpWrappers
Almost all BSD / UNIX / Linux like operating systems are compiled with TCP Wrappers
support. For e.g. Solaris 9, various Linux / *BSD distributions, and Mac OS X have TCP
Wrappers configured to run out-of-the-box. It is a library which provides simple access control
and standardized logging for supported applications which accept connections over a network.
TCP Wrapper is a host-based networking ACL system, used to filter network access to the
Internet. TCP Wrappers was originally written to monitor and stop cracking activities on
UNIX workstations in the 90s. It was the best solution in the 90s to protect UNIX workstations over the
Internet. However, it has a few disadvantages:
1. All UNIX apps must be compiled with the libwrap library.
2. The wrappers do not work with RPC services over TCP.
3. The user name lookup feature of TCP Wrappers uses identd to identify the username of the
remote host. By default, this feature is disabled, as identd may appear hung when there are large
number of TCP connections.
However, it has one strong advantage over a firewall: it works at the application layer, so it can filter
requests even when encryption is used. Basically, you need to use both host-based and network-based
security. Common services such as pop3, ftp, sshd, telnet and r-services are supported by TCP
Wrappers.
TCPD Benefits
1. Logging - Connections that are monitored by tcpd are reported through the syslog facility.
2. Access Control - tcpd supports a simple form of access control that is based on pattern
matching. You can even hook the execution of shell commands / scripts when a pattern matches.
3. Host Name Verification - tcpd verifies the client host name that is returned by the
address->name DNS server by looking at the host name and address that are returned by the
name->address DNS server.
4. Spoofing Protection
To determine whether a given executable daemon /path/to/daemon supports TCP Wrapper, check the
man page, or enter:
$ ldd /path/to/daemon | grep libwrap.so
If this command returns any output, then the daemon probably supports TCP Wrapper. In this
example, to find out whether sshd supports TCP Wrappers or not, enter:
$ whereis sshd
Sample Output:
sshd: /usr/sbin/sshd /usr/share/man/man8/sshd.8.gz
$ ldd /usr/sbin/sshd | grep libwrap.so
libwrap.so.0 => /lib64/libwrap.so.0 (0x00002b759b381000)
ldd is used to see if libwrap.so is a dependency or not. An alternative to TCP Wrapper support is
packet filtering using iptables.
Important Files
/etc/hosts.allow and /etc/hosts.deny. Each access control line has the form:
daemon_list : client_list
Where,
daemon_list - a list of one or more daemon process names.
client_list - a list of one or more host names, host addresses, patterns or wildcards that will be
matched against the client host name or address.
WildCards
The access control language supports explicit wildcards (quoting from the man page):
LOCAL Matches any host whose name does not contain a dot character.
UNKNOWN
Matches any user whose name is unknown, and matches any host
whose name or address are unknown. This pattern should be used
with care: host names may be unavailable due to temporary name
server problems. A network address will be unavailable when the
software cannot figure out what type of network it is talking
to.
KNOWN Matches any user whose name is known, and matches any host whose
name and address are known. This pattern should be used with
care: host names may be unavailable due to temporary name server
problems. A network address will be unavailable when the software cannot figure out what type of
network it is talking to.
PARANOID
Matches any host whose name does not match its address. When
tcpd is built with -DPARANOID (default mode), it drops requests
from such clients even before looking at the access control
tables. Build without -DPARANOID when you want more control
over such requests.
An 'ALL : ALL' entry in /etc/hosts.deny denies all service to all hosts, unless they are permitted access by entries in the allow file.
For example, allow access as follows via /etc/hosts.allow:
ALL: LOCAL @devels
ALL: .nixcraft.net.in EXCEPT boobytrap.nixcraft.net.in
Log and deny access (booby traps) - we do not allow connections from crackers.com:
ALL : .crackers.com \
: spawn (/bin/echo %a from %h attempted to access %d >> \
/var/log/connections.log) \
: deny
Reject All Connections
Restrict all connections to non-public services to localhost only. Suppose sshd and ftpd are the
names of the services which must remain accessible remotely. Edit /etc/hosts.allow. Add the following lines:
sshd, ftpd : ALL
ALL: localhost
Save and close the file. Edit /etc/hosts.deny. Add the following line:
ALL: ALL
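To additionally allow ssh from one trusted subnet only (the network below is an assumption), /etc/hosts.allow could instead contain:
sshd : 192.168.1.0/255.255.255.0
ALL: localhost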
How Do I Predict How TCP Wrappers Would Handle a Specific Request For a Service?
Use the tcpdmatch command. To predict how tcpd would handle an sshd request from the local system:
tcpdmatch sshd localhost
The same request, pretending that hostname lookup failed:
tcpdmatch sshd 192.168.1.5
To predict what tcpd would do when the client name does not match the client address:
tcpdmatch sshd paranoid
Replace sshd with in.telnetd, or ftpd and so on. You can use all daemon names specified in
inetd.conf or xinetd.conf file.
You need to use both (firewall and tcpd) to fight against crackers.
TCP Wrappers are most commonly employed to match against IP addresses and to provide host-level
protection.
Never configure TCP Wrappers on firewall host.
Put TCP Wrappers on all UNIX / Linux / BSD workstations.
Do not use NIS (YP) netgroups in TCP Wrappers rules.
Put TCP Wrappers behind a firewall system, as TCP Wrappers is no substitute for a netfilter or
pf firewall.
TCP Wrappers does provide increased security, as a firewall cannot examine encrypted
connections (read: packets).
Xinetd
xinetd, the eXtended InterNET Daemon, is an open-source daemon which runs on many Linux
and Unix systems and manages Internet-based connectivity. It offers a more secure extension
to or version of inetd, the Internet daemon.
xinetd performs the same function as inetd: it starts programs that provide Internet services.
Instead of having such servers started at system initialization time, and be dormant until a
connection request arrives, xinetd is the only daemon process started, and it listens on all service
ports for the services listed in its configuration file. When a request comes in, xinetd starts the
appropriate server. Because of the way it operates, xinetd (as well as inetd) is also referred to
as a super-server.
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST
cps = 25 30
}
includedir /etc/xinetd.d
Where,
instances = 60 : Determines the number of servers that can be simultaneously active for a
service. So 60 is the maximum number of requests xinetd can handle at once.
log_type = SYSLOG authpriv: Determines where the service log output is sent. You can send
it to SYSLOG at the specified facility (authpriv will send log to /var/log/secure file).
log_on_success = HOST PID: Force xinetd to log if the connection is successful. It will log
HOST name and Process ID to /var/log/secure file.
cps = 25 30: Limits the rate of incoming connections. Takes two arguments. The first
argument is the number of connections per second to handle. If the rate of incoming connections is
higher than this, the service will be temporarily disabled. The second argument is the number of
seconds to wait before re-enabling the service after it has been disabled. The default for this setting is
50 incoming connections and the interval is 10 seconds. This is good to avoid a DoS attack against
your service.
includedir /etc/xinetd.d: Read other service-specific configuration files from this directory.
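A sketch of what one of those per-service files typically looks like (the telnet service is just an example; the server path follows the usual RHEL layout):
service telnet
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        log_on_failure  += USERID
}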
SAMBA
Windows
$ net view
$ net view IP
$ net use * \\<ip>\funny
$ net use * /d
$ nbtstat -a IP
$ rpm -ql samba
/etc/logrotate.d/samba - Log Rotation File
--> Log Files: /var/log/samba
/etc/pam.d/samba
Samba relies heavily upon authentication before providing access to resources such as files and
printers. Authentication, accounting and session management are integrated with PAM and
handled via the samba entries in pam.d.
/etc/samba/smbusers
Provides a mapping between Windows users and local Linux users. Although Samba can integrate
into the Windows world, before providing access to local resources such as files on the file
system, users must be authenticated by the local Linux system. The default smbusers file contains a
mapping for the local Linux administrator, root, whose equivalent user in the Windows world is
administrator (some call it admin). If either of these two users tries to connect to a share, they are
equated to the local UNIX user root, provided they know the password. The password
should be the same as that of the administrator and root user; otherwise they are prompted for the password when
they connect to the Samba share.
If Windows users log in as guest, pcguest or smbguest, they are equated with the "nobody"
user on the Linux system.
/etc/sysconfig/samba
Specifies the parameters used to run smbd and nmbd as daemons.
/var/spool/samba
If remote users attempt to spool print jobs to our Samba server, they are generally spooled to this
directory. Samba integrates nicely with CUPS (the default modular printing system).
SMB services are provided by the NetBIOS protocol. NetBIOS makes its own name space available,
which is completely different from the domain name system.
This name space can be accessed with the Universal Naming Convention (UNC) notation: all services
provided by a server are addressed as \\Server\Servicename.
File or print services offered by a server are also called shares.
The server side of Samba consists of 2 parts:
- smbd, the SMB/CIFS server.
This daemon provides file and print services for clients in the network.
- nmbd, the NetBIOS name server.
This daemon handles resource browsing and can act as a WINS server.
represents the files that are included into your system as Samba Server
samba-client-3.0.10-1.4E.2
[profile] A share for storing domain logon information such as
"favorites" and desktop icons. (Not created by default.)
smb passwd file = /etc/samba/smbpasswd
The Samba password database, located on the Samba server itself. It is useful to specify the name
and location of the Samba password file; this helps with Samba version upgrades, where the default
locations may change.
private dir = /etc/samba
Specifies the default directory for some supporting temporary files. As with the password file, it is
good practice to specify this value.
Security Levels
- user (user, server, ADS, domain)
- user
In user mode we can authenticate against local unix /etc/passwd file
- server
We pass authentication off
Server mode simply passes off authentication to the password authentication
server such as NT4 or WIN2K domain controller
- Domain
This is used to join NT4 style domain. You need a computer account in NT4 style domain
- ADS
Same as Domain but just different behaviour
If ADS is in native mode, then you will need to join the domain.
You also need to know the Kerberos realm, so that Kerberos tickets can be accepted.
- share
This is the mode where you tie a password to a share; if a user knows the password, they are granted
read-only or read-write access to the share. This is implemented where everyone knows the password.
Creating User
$ useradd champu
$ passwd champu
Assigning smb password to user champu
$ smbpasswd -a champu
editing configuration file
$ vi /etc/samba/smb.conf
[myshare]
comment = Windoze champu
path = /home/champu
valid users = champu
public = yes
writable = yes
printable = no
create mask = 0765
- public
This parameter is a synonym for guest ok.
$ service smb restart
gathering information:
$ smbclient -L 192.168.10.66 -U champu
The -L option should be used to determine if the Samba server is even
running and listening for network requests.
Accessing Samba Server
$ smbclient //192.168.10.66/myshare -U champu
smb: \> ls
smb: \> mkdir docs
smb: \> ls
smb: \> quit
Accessing Samba Server From Windows
Accessing Samba Server From X Windows
Open Nautilus
in Location section
smb://192.168.10.66
$ df -h
$ cd /bill
Usage: smbmount //server/share /mountpoint -o options
$ smbmount //192.168.10.60/myshare /bill -o username=champu,password=x
$ cd /bill
Samba PDC
We will setup 1 domain "mydomain1" on a linux machine with samba.
1. Create the samba config file in /etc/samba/ and copy-paste the content from the 2nd step.
a. smb.conf
[global]
workgroup = mydomain1
netbios name =server1
time server = Yes
domain logons = Yes
os level = 65
preferred master = Yes
domain master = Yes
encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd
security = user
mangling method = hash
add machine script = /usr/sbin/useradd -d /dev/null -g trust -s /bin/false -M %u
log file = /var/log/samba/log.%m
log level = 3 passdb:5 auth:10 winbind:2
logon path = \\%L\profiles\%U
logon drive = H:
logon home = \\%L\%U\.profile
logon script = logon.cmd
interfaces = 192.168.2.249/24 ##put your samba server IP address
bind interfaces only = yes
lock directory = /var/lib/samba/locks/server1
[homes]
read only = No
browseable = Yes
create mask = 0644
directory mask = 0755
[netlogon]
path=/var/lib/samba/netlogon
guest ok = yes
[profiles]
path=/var/lib/samba/profiles
browseable = yes
read only = No
create mask = 0600
directory mask = 0700
root preexec = PROFILE=/var/lib/samba/profiles/%u; if [ ! -e $PROFILE ]; \
then mkdir -pm700 $PROFILE; chown %u:%g $PROFILE;fi
3. Then create the directory below:
/var/lib/samba/locks/server1
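A sketch of the remaining pieces implied by the config above (the 'trust' group referenced by the add machine script, the share directories, a root entry in the smbpasswd file, and a service restart); the exact steps are assumptions:
$ groupadd trust
$ mkdir -p /var/lib/samba/netlogon /var/lib/samba/profiles /var/lib/samba/locks/server1
$ smbpasswd -a root          # password used later when joining clients to the domain
$ service smb restart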
SAMBA PDC
make dir: /var/lib/samba/locks/server
smb.conf
[global]
workgroup = MYDOMAIN
netbios name =server
# time server = Yes
domain logons = Yes
os level = 33
preferred master = Yes
# local master = Yes
local master = no
domain master = Yes
encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd
security = user
passdb backend = tdbsam
# When you use passdb backend = tdbsam, then when a samba user is created, the user name and
# password are stored in the passdb.tdb file rather than the smbpasswd file
mangling method = hash
add machine script = /usr/sbin/useradd -d /dev/null -g trust -s /bin/false -M %u
[homes]
read only = No
browseable = Yes
create mask = 0644
directory mask = 0755
valid users = %S
valid users = MYDOMAIN\%S
##
[netlogon]
path=/var/lib/samba/netlogon
guest ok = yes
[profiles]
path=/var/lib/samba/profiles
browseable = yes
read only = No
create mask = 0600
directory mask = 0700
root preexec = PROFILE=/var/lib/samba/profiles/%u; if [ ! -e $PROFILE ]; \
then mkdir -pm700 $PROFILE; chown %u:%g $PROFILE; fi
1>.smb.conf
[global]
netbios name = station
workgroup = MYDOMAIN
security = domain
password server = 192.168.1.2
# realm = AVTECH.LOCAL
encrypt passwords = yes
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
#idmap backend = ad
template homedir = /home/%D/%U
template shell = /bin/bash
winbind use default domain = true
2. system-config-authentication
Only check the winbind setting and specify the domain name and other required parameters.
It will make the required winbind changes in the following files:
a) /etc/nsswitch.conf
b) /etc/pam.d/system-auth
/etc/pam.d/system-auth
Insert the following line at the bottom of the system-auth file, just before "session required pam_unix.so":
session required pam_mkhomedir.so skel=/etc/skel umask=0077
3. Try to join the domain using the following:
net rpc join domain-name -U root
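If the join succeeds, the winbind side can be verified with a few standard commands (a sketch):
$ service winbind restart
$ wbinfo -t        # check the trust secret with the domain controller
$ wbinfo -u        # list domain users
$ getent passwd    # domain users should now appear via nsswitch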
FIREWALL / IPTABLES
What's a firewall?
Without getting into technical explanations, a firewall is simply a host whose main purpose is to
protect your network. A firewall restricts certain types of network traffic from the Internet to your
protected network(s) - the reverse is also often true.
Firewalls have traditionally been referred to as Layer 3 security because, for many years, firewalls
were only able to understand information as it passed over the network at layer 3. Now the technology has
moved forward: firewalls have become more flexible and more intelligent, ultimately earning names
such as Layer 7 firewall or application-level firewall.
So the term firewall is not necessarily as precise today as it once was, because there are
various levels of firewalls.
Let's peek into the concepts of firewalls and how they might apply to security practices in a Linux environment.
First we are going to look at IPTABLES, the de facto standard for securing Linux at the firewall level. It
pretty much comes bundled with everything across the board, it is pretty easy to configure, and it
offers a lot of flexibility.
Types of exploits
Local - There is no security without physical security. If someone has physical access to your
box, you've lost. Obviously, a firewall won't help you here.
Local privilege escalation - The trojan horse attack. The attacker already has a local account on
your box (inside the gates) and obtains root by some means (vulnerability or misconfiguration). A
firewall cannot protect against this type of attack.
Remote - Your host is listening on a port that the attacker is able to connect to remotely over a
network and exploit a vulnerability somehow. This is the only type of attack a firewall can
(hopefully) protect you against. There is another important point here that most firewall howtos
neglect. In order for someone to exploit your box remotely, it has to be listening on some ports (i.e.
providing a way for an attacker to connect). Therefore, if your host isn't listening on any ports, you
are safe from remote exploits (unless the attacker manages to attack the network stack itself).
IPTABLES
replaces older IPchains firewall in linux
available since 2.4 kernel
Allows configuration of built-in firewall rules for host-based protection
IPtables can be used for routing, forwarding, filtering
ipfwadm for linux kernel 2.0
ipchains for linux kernel 2.2
iptables for linux kernel 2.4/2.6
What is Netfilter/Iptables?
Netfilter is the framework in Linux kernels that allow for firewalling, NAT, and packet mangling.
Iptables is the userspace tool that works with the Netfilter framework (technically a lie; Iptables is
also a part of the Netfilter framework in the kernel). Think of Netfilter as kernel space, and Iptables as
userspace.
- Iptables is merely user space tool provides the administrator, means of configuring the core netfilter
services that actually part of the kernel
A stateful firewall (any firewall that performs stateful packet inspection (SPI) or stateful inspection) is
a firewall that keeps track of the state of network connections (such as TCP streams, UDP
communication) travelling across it. The firewall is programmed to distinguish legitimate packets for
different types of connections. Only packets matching a known connection state will be allowed by
the firewall; others will be rejected.
Before the advent of stateful firewalls, a stateless firewall, a firewall that treats each network frame
(or packet) in isolation, was normal. Such a firewall has no way of knowing if any given packet is part
of an existing connection, is trying to establish a new connection, or is just a rogue packet. Modern
firewalls are connection-aware (or state-aware), affording network administrators finer-grained
control of network traffic.
- Can filter out malformed packets based upon the TCP flags set in packets.
So if we know that a particular machine will never be seeing a Christmas-tree packet, where pretty much
everything is turned on, we can set our filters to protect against these types of attacks.
Post-routing chain: NATs packets when the source address of the packet needs to be changed
Queue Type | Queue Function | Packet Transformation Chain in Queue | Chain Function
filter | Packet filtering | FORWARD | Filters packets to servers accessible by another NIC on the firewall.
filter | Packet filtering | INPUT | Filters packets destined to the firewall.
You need to specify the table and the chain for each firewall rule you create. There is an exception:
Most rules are related to filtering, so iptables assumes that any chain that's defined without an
associated table will be a part of the filter table. The filter table is therefore the default.
To help make this clearer, take a look at the way packets are handled by iptables. In Figure 14.1 a
TCP packet from the Internet arrives at the firewall's interface on Network A to create a data
connection.
The packet is first examined by your rules in the mangle table's PREROUTING chain,
if any. It is then inspected by the rules in the nat table's PREROUTING chain to see
whether the packet requires DNAT. It is then routed.
If the packet is destined for a protected network, then it is filtered by the rules in the
FORWARD chain of the filter table and, if necessary, the packet undergoes SNAT in
the POSTROUTING chain before arriving at Network B. When the destination server
decides to reply, the packet undergoes the same sequence of steps. Both the
FORWARD and POSTROUTING chains may be configured to implement quality of
service (QoS) features in their mangle tables, but this is not usually done in SOHO
environments.
If the packet is destined for the firewall itself, then it passes through the mangle table
of the INPUT chain, if configured, before being filtered by the rules in the INPUT
chain of the filter table. If it successfully passes these tests then it is processed
by the intended application on the firewall.
At some point, the firewall needs to reply. This reply is routed and inspected by the
rules in the OUTPUT chain of the mangle table, if any. Next, the rules in the
OUTPUT chain of the nat table determine whether DNAT is required and the rules in
the OUTPUT chain of the filter table are then inspected to help restrict unauthorized
packets. Finally, before the packet is sent back to the Internet, SNAT and QoS
mangling is done by the POSTROUTING chain
Targets And Jumps
Each firewall rule inspects each IP packet and then tries to identify it as the target of some sort of
operation. Once a target is identified, the packet needs to jump over to it for further processing.
Table 14.2 lists the built-in targets that iptables uses.
processing. The packet is handed over to the end application or the operating system for processing.
Common options: N/A

DROP
iptables stops further processing. The packet is blocked.
Common options: N/A

LOG
The packet information is sent to the syslog daemon for logging. iptables continues processing with
the next rule in the table. As you can't log and drop at the same time, it is common to have two
similar rules in sequence: the first will log the packet, the second will drop it.
Common options: --log-prefix "string" tells iptables to prefix all log messages with a user-defined
string. Frequently used to tell why the logged packet was dropped.

REJECT
Works like the DROP target, but will also return an error message to the host sending the packet
that the packet was blocked.
Common options: --reject-with qualifier. The qualifier tells what type of reject message is returned.
Qualifiers include: icmp-port-unreachable (default), icmp-net-unreachable, icmp-host-unreachable,
icmp-proto-unreachable, icmp-net-prohibited, icmp-host-prohibited, tcp-reset, echo-reply.

DNAT
Used to do destination network address translation, i.e. rewriting the destination IP address of the
packet.
Common options: --to-destination ipaddress tells iptables what the destination IP address should be.

SNAT
Used to do source network address translation, rewriting the source IP address of the packet. The
source IP address is user defined.
Common options: --to-source <address>[-<address>][:<port>-<port>] specifies the source IP
address and ports to be used by SNAT.

MASQUERADE
Used to do Source Network Address Translation. By default the source IP address is the same as
that used by the firewall's interface.
Common options: [--to-ports <port>[-<port>]] specifies the range of source ports to which the
original source port can be mapped.
$ /sbin/iptables -p icmp -h
$ iptables -t filter -P INPUT DROP
$ iptables -t filter -P OUTPUT ACCEPT
$ iptables -t filter -P FORWARD ACCEPT
$ iptables -t filter -A INPUT -m state --state INVALID -j DROP
$ iptables -t filter -A OUTPUT -m state --state INVALID -j DROP
$ iptables -t filter -A FORWARD -m state --state INVALID -j DROP
bittorrent :
$ iptables -t filter -A INPUT -p tcp -m tcp --dport 6881:6889 -j ACCEPT
Creating user-defined chains
$ IPTABLES -N tcp_packets
$ IPTABLES -N icmp_packets
$ IPTABLES -N udpincoming_packets
IP Masquerade
$ IPTABLES -t nat -A POSTROUTING -o $INET_IFACE -j MASQUERADE
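Masquerading only takes effect if the kernel is also forwarding packets between interfaces; a one-line sketch (not persistent across reboots):
$ echo 1 > /proc/sys/net/ipv4/ip_forward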
Squid transparent proxy
$ IPTABLES -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
$ IPTABLES -A PREROUTING -t nat -i eth2 -p tcp --dport 80 -j REDIRECT --to-port 3128
$ IPTABLES -A PREROUTING -t nat -i ppp0 -p tcp --dport 80 -j REDIRECT --to-port 3128
smtp
$IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 25 -j ACCEPT
www
$IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 80 -j ACCEPT
https
$IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 443 -j ACCEPT
mail
$ IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 465 -j ACCEPT
$ IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 993 -j ACCEPT
$ IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 995 -j ACCEPT
wlan vpn
$ IPTABLES -A wlan_packets -p UDP -s 0/0 --dport 5000 -j ACCEPT
$ IPTABLES -A wlan_packets -p ALL -j DROP
iptables blocking with mac address
Drop all connection coming from mac address 00:0F:EA:91:04:08 (add command to your firewall
script)
$ iptables -A INPUT -m mac --mac-source 00:0F:EA:91:04:08 -j DROP
iptables allowing with mac address
Allow port 22 for mac address 00:0F:EA:91:04:07
$ iptables -A INPUT -p tcp --destination-port 22 -m mac --mac-source
00:0F:EA:91:04:07 -j ACCEPT
General syntax:
$ iptables RULE -m time --timestart TIME --timestop TIME --days DAYS -j ACTION
Where,
--timestart TIME : Time start value, in 24-hour HH:MM format (00:00-23:59)
--timestop TIME : Time stop value, same format.
--days DAYS : Match only if today is one of the given days (format:
Mon,Tue,Wed,Thu,Fri,Sat,Sun; default is every day)
An example
Suppose you would like to allow incoming ssh access only from Monday to Friday between
9 AM and 6 PM. Then you need to use iptables as follows:
Input rule:
$ iptables -A INPUT -p tcp -s 0/0 --sport 513:65535 -d 202.54.1.20 --dport 22 -m state --state
NEW,ESTABLISHED -m time --timestart 09:00 --timestop 18:00 --days Mon,Tue,Wed,Thu,Fri -j
ACCEPT
Output rule:
$ iptables -A OUTPUT -p tcp -s 202.54.1.20 --sport 22 -d 0/0 --dport 513:655 -m state --state
ESTABLISHED -m time --timestart 09:00 --timestop 18:00 --days Mon,Tue,Wed,Thu,Fri -j
ACCEPT
Force SYN packets check
Make sure NEW incoming tcp connections are SYN packets; otherwise we need to drop them:
$ iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
Force fragments packets check
Drop packets that arrive as fragments; crafted fragments can be used in attacks that may crash a
Linux server or cause data loss.
$ iptables -A INPUT -f -j DROP
XMAS packets
Drop incoming malformed XMAS packets:
$ iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
Drop all NULL packets
Drop incoming malformed NULL packets:
$ iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
Protect against SYN floods by rate limiting the number of new connections from any host to 60 per
second. This does *not* do rate limiting overall, because then someone could easily shut us down by
saturating the limit.
$ iptables -A INPUT -m state --state NEW -p tcp -m tcp --syn \
-m recent --name synflood --set
$ iptables -A INPUT -m state --state NEW -p tcp -m tcp --syn \
-m recent --name synflood --update --seconds 1 --hitcount 60 -j DROP
The same can be achieved in ipfw using the dummynet shaper:
Direct SYN
ipfw pipe 500 config bw 64Kbit/s queue 5
ipfw add 500 pipe 500 tcp from any to any in setup
Port scanning
A lot of hosts try to port scan my server these days, looking for open services they can try to exploit.
Since I run very few services on my server, what I like to do is look for port connections to a
commonly scanned port (port 139, for Windows File Sharing), and then block the hosts who attempt
the connection from talking to my server for an entire day. The rule is quite simple using the iptables
recent module:
Anyone who tried to portscan us is locked out for an entire day.
$ iptables -A INPUT -m recent --name portscan --rcheck --seconds 86400 -j DROP
$ iptables -A FORWARD -m recent --name portscan --rcheck --seconds 86400 -j DROP
Once the day has passed, remove them from the portscan list
$ iptables -A INPUT -m recent --name portscan --remove
$ iptables -A FORWARD -m recent --name portscan --remove
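Note that the rules above only check and expire the portscan list; something has to add offenders to it
in the first place. A minimal sketch, assuming port 139 is the bait port mentioned above:
$ iptables -A INPUT -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP
$ iptables -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP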
CLOSE INCOMING TCP
$ IPTABLES -A tcp_packets -m state --state ESTABLISHED,RELATED -j ACCEPT
$ IPTABLES -A tcp_packets -p TCP -s 0/0 -j DROP
CLOSE INCOMING UDP
$ IPTABLES -A udpincoming_packets -m state --state ESTABLISHED,RELATED -j ACCEPT
$ IPTABLES -A udpincoming_packets -p UDP -j DROP
DNS
DNS Basics
Finding a single server out of all of the servers on the Internet is like trying to find a single file on a
drive with thousands of files. In both cases it helps to have some hierarchy built into the directory to
logically group things. The DNS "namespace" is hierarchical in the same type of upside-down tree
structure seen with file systems. Just as you have the root of a partition or drive, the DNS namespace
has a root which is signified by a period.
When specifying the absolute path to a file in a file system you start at the root and go to the file:
/etc/bind/named.conf
When specifying the absolute path to a server in the DNS namespace you start at the server and go to
the root:
www.aboutdebian.com.
Note the period after the 'com'; it's important, as it's how you specify the root of the namespace. An
absolute path in the DNS namespace is called an FQDN (Fully Qualified Domain Name). FQDNs are
prevalent in DNS configuration files, and it's important that you always include that trailing
period.
Internet resources are usually specified by a domain name and a server hostname. The www part of a
URL is often the hostname of the Web server (or it could be an alias to a server with a different host
name). DNS is basically just a database with records for these hostnames. The directory for the entire
telephone system is not stored in one huge phone book. Rather, it is broken up into many pieces with
each city having, and maintaining, its piece of the entire directory in its phone book. By the same
token, pieces of the DNS directory database (the "zones") are stored, and maintained, on many
different DNS servers located around the Internet. If you want to find the telephone number for a
person in Poughkeepsie, you'd have to look in the Poughkeepsie telephone book. If you want to find
the IP address of the www server in the some-domain.com domain, you'd have to query the DNS
server that stores the DNS records for that domain.
The entries in the database map a host/domain name to an IP address. Here is a simplistic logical
view of the type of information that is stored (we'll get to the A, CNAME, and MX designations in a
bit).
A www.their-domain.com 172.29.183.103
MX mail.their-domain.com 172.29.183.217
A debian.your-domain.com 10.177.8.3
MX debian.your-domain.com 10.177.8.3
This is why a real Internet server needs a static (unchanging) IP address. The IP address of the
server's NIC connected to the Internet has to match whatever address is in the DNS database.
Dynamic DNS does provide a way around this for home servers however, which we'll see later.
When you want to browse to www.their-domain.com your DNS server (the one you specify in the
TCP/IP configuration on your desktop computer) most likely won't have a DNS record for the their-
domain.com domain so it has to contact the DNS server that does. When your DNS server contacts
the DNS server that has the DNS records (referred to as "resource records" or "zone records") for
their-domain.com your DNS server gets the IP address of the www server and relays that address
back to your desktop computer. So which DNS server has the DNS records for a particular domain?
When you register a domain name with someone like Network Solutions, one of the things they ask
you for is the server names and addresses of two or three "name servers" (DNS servers). These are
the servers where the DNS records for your domain will be stored (and queried by the DNS servers
of those browsing to your site). So where do you get the "name servers" information for your
domain? Typically, when you host your Web site using a Web hosting service they not only provide
a Web server for your domain's Web site files but they will also provide a DNS server to store your
domain's DNS records. In other words, you'll want to know who your Web hosting provider is going
to be before you register a domain name (so you can enter the provider's DNS server information in
the name servers section of the domain name registration application).
You'll see the term "zone" used in DNS references. Most of the time a zone just equates to a
domain. The only times this wouldn't be true is if you set up subdomains and set up separate
DNS servers to handle just those subdomains. For example, a company would set up the
subdomains us.their-domain.com and europe.their-domain.com and would "delegate" a separate
DNS server to each one of them. In the case of these two DNS servers their zone would be just
the subdomains. The zone of the DNS server for the parent their-domain.com (which would
contain the servers www.their-domain.com and mail.their-domain.com) would only contain
records for those few machines in the parent domain.
Note that in the above example "us" and "europe" are subdomains while "www" and "mail" are
host names of servers in the parent domain.
Once you've got your Web site up and running on your Web hosting provider's servers and someone
surfs to your site, the DNS server they specified in their local TCP/IP configuration will query your
hosting provider's DNS servers to get the IP address for your Web site. The DNS servers that host
the DNS records for your domain, i.e. the DNS servers you specify in your domain name registration
application, are the authoritative DNS servers for your domain. The surfer's DNS server queries one
of your site's authoritative DNS servers to get an address and gets an authoritative response. When
the surfer's DNS server relays the address information back to the surfer's local PC it is a
"non-authoritative" response, because the surfer's DNS server is not an authoritative DNS server for
your domain.
Domains and Delegation
The Domain Name System uses a tree (or hierarchical) name structure. At the top of the tree is the
root node followed by the Top-Level Domains (TLDs), then the Second-Level Domains (SLD) and
any number of lower levels, each separated with a dot.
Note: The root of the tree is represented most of the time as a silent dot ( . ), but there are times
when it is VERY important.
DNS name resolution is nothing but resolving host names, such as www.nixcraft.com, to their
corresponding IP addresses. DNS works as the "phone book" for the Internet by translating hostnames
into IP addresses and vice versa. Most DNS servers store the following information:
a) Hostnames and their IP addresses
b) The list of mail servers and their IP addresses for a given domain name
c) Anti-spam configuration and much more.
Without DNS name resolution, nothing will work on the Internet. Nobody likes to remember IP
addresses, so DNS is the foundation of many Internet services such as web, proxy, email and so on.
When you type www.yahoo.com into a web browser, the application has to find out the IP address
associated with www.yahoo.com. Each part of the network has a DNS server (name server). The
application sends a request called a DNS lookup to its DNS server. Each DNS server has limited
information about host names and IP addresses, so DNS servers constantly query each other to get
information, starting from the root servers.
Each computer is configured to query a specific name server. Usually home computers are configured
to query ISP name servers or free DNS name servers. Here is a typical UNIX / Linux /etc/resolv.conf
file with nameserver IP addresses:
$ cat /etc/resolv.conf
Sample output:
nameserver 208.67.222.222
nameserver 208.67.220.220
Each application can find the www.yahoo.com IP address by sending a request to the 208.67.222.222 or
208.67.220.220 IP address. This procedure is called hostname resolution and the algorithm that
performs this operation is called the resolver. Let us see how to find out the IP address for the
freebsd.nixcraft.in hostname:
1. The web browser will check local cache database to find out answer. If it can get an answer
directly from these, it proceeds no further.
2. Otherwise, a request will be sent to the nameserver at 208.67.222.222 to find the IP address
for the freebsd.nixcraft.in host.
3. 208.67.222.222 will check whether that name has been looked up recently. If it has, there
is no need to ask further, since the result is stored in a local cache.
4. 208.67.222.222 will see if the domain is local, i.e. if it is a computer that it has direct
information about. This would only be true if 208.67.222.222 were the domain's own name server.
5. 208.67.222.222 will strip out the TLD (Top Level Domain) .in and query a root name
server, asking which name server is responsible for .in. Depending upon the answer, 208.67.222.222
will query that authoritative server for the IP address.
6. 208.67.222.222 will return the result to the application.
7. 208.67.222.222 will store each of these results in a local cache with an expiry date, to avoid
having to look them up a second time.
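The walkthrough above can be observed directly with the dig utility (the host name is the example
used in the text; any name will do):
$ dig @208.67.222.222 freebsd.nixcraft.in A     # ask one specific name server
$ dig +trace freebsd.nixcraft.in                # follow the delegation down from the root servers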
Caching nameserver
To run a caching-only name server, the following files are required and must be created or copied to
the appropriate directories on your server.
options {
directory "/var/named";
forwarders { 208.164.186.1; 208.164.186.2; };
forward only;
};
//
// a caching only nameserver config
zone "." in {
type hint;
file "db.cache";
};
zone "0.0.127.in-addr.arpa" in {
type master;
file "db.127.0.0";
};
In the forwarders line, 208.164.186.1 and 208.164.186.2 are the IP addresses of your Primary
Master and Secondary Slave DNS servers. They can also be the IP addresses of your ISP's DNS server
and another DNS server, respectively.
TIP: To improve the security of your BIND/DNS server you can stop it from even trying to contact
an off-site server if their forwarder is down or doesn't respond. With the forward only option set in
your named.conf file, the name server doesn't try to contact other servers to find out information if
the forwarder doesn't give it an answer.
To configure the /var/named/db.127.0.0 file for a simple caching name server, you can use this
configuration for all machines on your network that don't act as a master or slave name server. The
db.127.0.0 file covers the loopback network. Create the file in /var/named/ (touch
/var/named/db.127.0.0) and add the following lines to it:
$TTL 345600
@ IN SOA localhost. root.localhost. (
00 ; Serial
86400 ; Refresh
7200 ; Retry
2592000 ; Expire
345600 ) ; Minimum
IN NS localhost.
1 IN PTR localhost.
Configure the /var/named/db.cache file for a simple caching name server before starting your DNS
server. You must take a copy of the db.cache file and copy it to the /var/named/ directory. The
db.cache file tells your server where the servers for the root zone are.
Use the following commands on another Unix computer in your organization to query a new
db.cache file for your DNS Server or pick one from your Red Hat Linux CD-ROM source
distribution:
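The notes do not reproduce the actual command; one common way to generate a fresh db.cache (root
hints) file, assuming outbound DNS works from that machine, is:
$ dig @a.root-servers.net . ns > db.cache
$ cp db.cache /var/named/db.cache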
options {
directory "/var/named";
fetch-glue no;
recursion no;
allow-query { 208.164.186/24; 127.0.0/8; };
allow-transfer { 208.164.186.2; };
transfer-format many-answers;
};
zone "186.164.208.in-addr.arpa" in {
type master;
file "db.208.164.186";
};
The fetch-glue no option can be used in conjunction with the option recursion no to prevent the
server's cache from growing or becoming corrupted. Also, disabling recursion puts your name
servers into a passive mode, telling it never to send queries on behalf of other name servers or
resolvers. A non-recursive name server is very difficult to spoof, since it doesn't send queries, and
hence doesn't cache any data.
In the allow-query line, 208.164.186/24 and 127.0.0/8 are the IP addresses allowed to ask ordinary
questions to the server.
In the allow-transfer line, 208.164.186.2 is the IP address allowed to receive zone transfers from
the server. You must ensure that only your real slave name servers can transfer zones from your
name server, as the information provided is often used by spammers and IP spoofers.
NOTE: The options recursion no, allow-query, and allow-transfer in the named.conf file above are
security features.
To configure the /var/named/db.127.0.0 file for a master and slave name server, you can use this
configuration file on both a master name server and a slave name server. The db.127.0.0 file covers
the loopback network. Create the following files in /var/named/.
; Name Server (NS) records.
NS deep.altnix.com.
NS mail.altnix.com.
To configure the /var/named/db.208.164.186 file for a master name server, use this configuration for
the server on your network that acts as a master name server. The file db.208.164.186 is the reverse
zone: it maps addresses back to host names. Create the following files in /var/named/.
86400 ; Refresh
7200 ; Retry
2592000 ; Expire
345600 ) ; Minimum
; Name Server (NS) records.
NS deep.altnix.com.
NS mail.altnix.com.
; Addresses Point to Canonical Names (PTR) for Reverse lookups
1 PTR deep.altnix.com.
2 PTR mail.altnix.com.
3 PTR www.altnix.com.
To configure the /var/named/db.altnix file for a master name server, use this configuration for the
server on your network that acts as a master name server. The file db.altnix is the forward zone: it
maps host names to addresses. Create the following file in /var/named/.
Create the db.altnix file (touch /var/named/db.altnix) and add:
; Mail Exchange (MX) records.
MX 0 mail.altnix.com.
; Address (A) records.
localhost A 127.0.0.1
deep A 208.164.186.1
mail A 208.164.186.2
www A 208.164.186.3
; Aliases in Canonical Name (CNAME) records.
;www CNAME deep.altnix.com.
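For completeness, the forward zone also needs a declaration in named.conf pointing at the file created
above; a sketch, assuming the file names used in these notes:
zone "altnix.com" in {
    type master;
    file "db.altnix";
};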
To configure the /var/named/db.cache file for master and slave name servers: before starting your
DNS server you must take a copy of the db.cache file and copy it into the /var/named/ directory. The
db.cache file tells your server where the servers for the root zone are.
Use the dig command shown earlier on another Unix computer in your organization to generate a new
db.cache file for your DNS server, or pick one from your Red Hat Linux CD-ROM source distribution.
Don't forget to copy the db.cache file to the /var/named/ directory on the server where you're
installing the DNS server after retrieving it over the Internet.
Dig
http://www.madboa.com/geek/dig/
NS Trace
http://www.dollardns.net/cgi-bin/nstrace/index.pl?
Dns Crawler
http://www.dollardns.net/cgi-bin/dnscrawler/index.pl?
Whois
http://whois.dollardns.net/domain.pl?
http://www.dollardns.net/index.html?http://www.dollardns.net/compare.html
Apache
What is Apache?
- Apache is an open-source web server (HTTP server)
- Apache serves up web pages (HTML) and most any other content that can be accessed via a web
browser.
- It runs on Linux and Windows
- Apache is the number one web server on the internet today and has been since 1996.
- In fact, about 70% of all Internet websites run on the Apache web server
Let's talk about the history of the Apache web server
The original httpd was developed at the National Center for Supercomputing Applications (NCSA) at the
University of Illinois by Robert McCool in the early 1990s, and it quickly entered the public domain.
Robert McCool left NCSA in 1995 and the httpd web server was no longer maintained officially, so a
loosely organised group of developers around the world came together to exchange patches, updates and
fixes.
- Apache was created around 1995 by a group of webmasters who got together to create patches for
the old httpd server in Unix.
- Today, that group is called the Apache Software Foundation.
- The homepage for Apache is http://httpd.apache.org
The source and compiled versions of Apache are all free
Why Apache
- high Performance
- Open Source
- Free
- Unrestrictive license
- can be modified and redistributed under another name,
i.e. Covalent Apache, IBM HTTP Server
- Runs on linux/unix/windows
Apache HTTP Server
version 1.3
Version 2.0
+ multi-thread support (SMP)
+ support for non-unix platforms using MPM modules
- MPM --> Multi-Processing Modules
+ supports IPV6
+ new API
- meant easier for writing 3rd party modules
This approach has an obvious advantage over Web servers that use a single process to respond to all
requests: If the Internet Information Server (IIS) process dies on a Windows Web server, no further
requests are served until the process is restarted. If a single Apache process dies, only the request
being served by that process is affected
Installing Apache
- RedHat Linux or Fedora Linux installs the Apache web server by default. You can, of course, choose
not to install it at installation time if you wish.
- Apache is, however, disabled by default. In other words, it isn't running.
- RHEL or Fedora does not, however, install the 31 optional related applications that go along with
Apache. You can install these from the Package Manager application.
Administering Apache
- The Apache web server has its own configuration directory, /etc/httpd/
- Inside this directory, there are subdirectories and soft links to other directories.
- The actual configuration file for Apache is at /etc/httpd/conf/ and it is called httpd.conf
- All log files go to /var/log/httpd
- The httpd.conf can be edited manually.
- Apache can be started manually by typing "httpd" at the command line but the recommended way
to start it is using the Services program.
- The "DocumentRoot" directory (/var/www/htm/) is where the default website is located.
- Apache can also deliver user's webpages. This is defined with the "UserDir" directive and users
directories can be acccessed, through
Apache like this:
http://www.altnix.com/~sadhiq
- Apache can also support Virtual hosting with the "VirtualHost" directive
- web server servers many different websites
- Apache cannot be run from a Super Server like XinetD.
- To administer apache, GUI tool is available: system-config-httpd
SPECS
Package: httpd
Version: 2.2
Conf file: /etc/httpd/conf/httpd.conf
DocumentRoot for storing web pages: /var/www/html
Apache Modules: /etc/httpd/modules/
/usr/lib/httpd/modules/
Commands:
1. Syntax Check
$ apachectl configtest
OR
$ httpd -t
2. check compiled in modules
$ httpd -l
Service:
$ service httpd start
OR
$ apachectl start
Four main directories:
- /etc/httpd/conf/
- /etc/httpd/conf.d/
- /etc/httpd/logs/
--> sym link to /var/log/httpd/
- /etc/httpd/modules/
Dedicated webserver configuration: altnix.com
$ cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf_ORIG
$ vi /etc/httpd/conf/httpd.conf
----------------------------------------------
ServerRoot "/etc/httpd" #default
Listen 80 #default
ServerName www.altnix.com:80
ServerAdmin john@altnix.com
User apache #default
Group apache #default
# set folder for the webpages
DocumentRoot "/var/www/html"
# set the name of the file that is first read
DirectoryIndex index.html
$ apachectl configtest
OR
$ httpd -t
Start the http service
$ service httpd restart
OR
$ apachectl restart
Confirm the httpd daemon running on port 80
$ netstat -antp | grep :80
Test from the client machine
linux> links http://www.altnix.com
make httpd start on bootup
$ chkconfig --level 35 httpd on
What is Virtual Hosting
Virtual Hosting is the ability to host multiple separate web sites with one Apache server.
Each site is separate from the others, with its own DocumentRoot, log files, permissions, etc.
Two types of Virtual Hosting
1. IP based virtual Hosts
- Each IP corresponds to its own individual website
- IP based VH is where each virtual host has its own IP address
+ single server
--> One Apache Daemon, handling multiple websites
+ multiple server
--> Two or more independent Apache daemons, each one handling a specific website
2. Name based virtual hosts
- Name-based virtual hosting is used for hosting multiple websites on the same webserver IP
address.
Name Based Virtual Hosting
Scenario:
+ champu.local
+ funny.local
- both the websites running on the same ip address
- in my case: 192.168.10.111
Note: Make sure the DNS 'A' records of champu.local and funny.local resolve to
192.168.10.111
$ mkdir /var/www/html/champu.local
$ cd /var/www/html/champu.local
$ vi index.html
---------------------------------------------------------------
<html>
<head>
<title>
MY FIRST HTML PAGE: CHAMPU: NAME BASED
</title>
<body>
MY FIRST HTML PAGE: CHAMPU: NAME BASED
</body>
</html>
$ mkdir /var/www/html/funny.local
$ cd /var/www/html/funny.local
$ vi index.html
---------------------------------------------------------------
<html>
<head>
<title>
MY FIRST HTML PAGE: FUNNY: NAME BASED
</title>
<body>
MY FIRST HTML PAGE: FUNNY: NAME BASED
</body>
</html>
----------------------------------------------
$ vi /etc/httpd/conf/httpd.conf
----------------------------------------------
NameVirtualHost 192.168.10.111:80
<VirtualHost 192.168.10.111:80>
ServerAdmin webmaster@champu.local
DocumentRoot /var/www/html/champu.local/
ServerName champu.local
ErrorLog logs/champu.local_error_log
CustomLog logs/champu.local_access_log common
</VirtualHost>
<VirtualHost 192.168.10.111:80>
ServerAdmin webmaster@funny.local
DocumentRoot /var/www/html/funny.local
ServerName funny.local
ErrorLog logs/funny.local_error_log
CustomLog logs/funny.local_access_log common
</VirtualHost>
----------------------------------------------
$ httpd -t
$ service httpd restart
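To confirm that both virtual hosts were parsed, Apache can dump its virtual-host configuration, which
lists each VirtualHost and the ServerName/port it is bound to:
$ httpd -S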
IP Based Virtual Hosting: Single Server Configuration
Note: Make sure the DNS 'A' record of champu.local resolves to 192.168.10.111
and funny.local to 192.168.10.222
Scenario:
+ champu.local
Resolves to 192.168.10.111
+ funny.local
Resolves to 192.168.10.222
1. Create IP alias
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-eth0:0
# vi ifcfg-eth0:0
DEVICE=eth0:0
BOOTPROTO=static
BROADCAST=192.168.10.255
IPADDR=192.168.10.222
NETMASK=255.255.255.0
NETWORK=192.168.10.0
ONBOOT=yes
2. Edit /etc/httpd/conf/httpd.conf and add the virtual hosts:
NameVirtualHost 192.168.10.111:80
<VirtualHost 192.168.10.111:80>
ServerAdmin webmaster@champu.local
DocumentRoot /var/www/html/champu.local/
ServerName champu.local
ErrorLog logs/champu.local_error_log
CustomLog logs/champu.local_access_log common
</VirtualHost>
NameVirtualHost 192.168.10.222:80
<VirtualHost 192.168.10.222:80>
ServerAdmin webmaster@funny.local
DocumentRoot /var/www/html/funny.local
ServerName funny.local
ErrorLog logs/funny.local_error_log
CustomLog logs/funny.local_access_log common
</VirtualHost>
$ cd /var/www/html/champu.local
$ vi index.html
------------------------------------------------------------
<html>
<head>
<title>
MY FIRST HTML PAGE: CHAMPU: IP BASED: 192.168.10.111
</title>
<body>
MY FIRST HTML PAGE: CHAMPU: IP BASED: 192.168.10.111
</body>
</html>
$ cd /var/www/html/funny.local
$ vi index.html
---------------------------------------------------------------
<html>
<head>
<title>
MY FIRST HTML PAGE: FUNNY: IP BASED: 192.168.10.222
</title>
<body>
MY FIRST HTML PAGE: FUNNY: IP BASED: 192.168.10.222
</body>
</html>
$ service httpd restart
================================================
IP Based Virtual Hosting: Different Ports
1. vi /etc/httpd/conf/httpd.conf
----------------------------------------------
Listen 192.168.10.111:80
Listen 192.168.10.222:8080
NameVirtualHost 192.168.10.111:80
<VirtualHost 192.168.10.111:80>
ServerAdmin webmaster@champu.local
DocumentRoot /var/www/html/champu.local/
ServerName champu.local
ErrorLog logs/champu.local_error_log
CustomLog logs/champu.local_access_log common
</VirtualHost>
NameVirtualHost 192.168.10.222:8080
<VirtualHost 192.168.10.222:8080>
ServerAdmin webmaster@funny.local
DocumentRoot /var/www/html/funny.local
ServerName funny.local
ErrorLog logs/funny.local_error_log
CustomLog logs/funny.local_access_log common
</VirtualHost>
$ cd /var/www/html/champu.local
$ vi index.html
---------------------------------------------------------------
<html>
<head>
<title>
MY FIRST HTML PAGE: CHAMPU: IP BASED: 192.168.10.111, PORT: 80
</title>
<body>
MY FIRST HTML PAGE: CHAMPU: IP BASED: 192.168.10.111, PORT: 80
</body>
</html>
4. # cd /var/www/html/funny.local
5. # vi index.html
---------------------------------------------------------------
<html>
<head>
<title>
MY FIRST HTML PAGE: FUNNY: IP BASED: 192.168.10.222, PORT: 8080
</title>
<body>
MY FIRST HTML PAGE: FUNNY: IP BASED: 192.168.10.222, PORT: 8080
</body>
</html>
Restricted access page (basic authentication)
$ vi /var/www/html/champu.local/noaccess/index.html
---------------------------------------------------------------
<html>
<head>
<title>
CHAMPU: RESTRICTED ACCESS PAGE
</title>
<body>
CHAMPU: RESTRICTED ACCESS PAGE
</body>
</html>
$ vi /etc/httpd/conf/httpd.conf
-------------------------------------------------
<VirtualHost 192.168.10.111:80>
ServerAdmin webmaster@champu.local
DocumentRoot /var/www/html/champu.local/
ServerName champu.local
<Directory /var/www/html/champu.local/noaccess>
AuthName "Restricted Site"
AuthType Basic
AuthUserFile /var/www/html/champu.local/.passwords
require valid-user
</Directory>
ErrorLog logs/champu.local_error_log
CustomLog logs/champu.local_access_log common
</VirtualHost>
-------------------------------------------------
Notes:
Basic – Standard username/password combination.
Digest – MD5 encrypted username/password combinations.
$ htpasswd -c /var/www/html/champu.local/.passwords champu
--> Give access to user john also
--> adds the user "john" to the password file
/var/www/html/champu.local/.passwords
$ htpasswd -m /var/www/html/champu.local/.passwords john
Test
$ links http://www.champu.local/noaccess
--> Prompts for username/passwd
Troubleshooting Apache
Checking the Logs
If there is something wrong with your Apache, but you have no idea how to figure out what's wrong,
your first clues will be in the log files.
There are a few log files around. All of them are located inside /var/log/httpd/
access_log
67.185.0.236 - - [18/Jun/2005:12:05:50 -0700] "GET / HTTP/1.0" 200 721
10.0.1.80 - - [18/Jun/2005:12:11:07 -0700] "GET /~jaspenelle/__journal1.jpg HTTP/1.1" 200 19079
66.239.233.163 - - [18/Jun/2005:12:15:06 -0700] "GET /~jaspenelle/avy14.gif HTTP/1.0" 200 1661
67.185.60.155 - - [18/Jun/2005:12:18:48 -0700] "GET / HTTP/1.0" 200 721
67.185.0.236 - - [18/Jun/2005:12:25:39 -0700] "GET / HTTP/1.0" 200 721
10.0.1.80 - - [18/Jun/2005:12:28:04 -0700] "GET /~jaspenelle/avy14.gif HTTP/1.1" 200 1661
error_log
[Sat Jun 18 13:02:24 2005] [error] [client 10.0.1.80] File does not
exist: /var/www/localhost/htdocs/favicon.ico
As you can see, this file can contain a lot of stuff, depending on the LogLevel directive in your
httpd.conf file. It tells you if Apache started up correctly and what errors it has run into. In general it
will tell you what went wrong. If something isn't working right, this should be the first file you check
for more information.
Performance tuning
The Apache HTTP Server is a modular program where the administrator can choose the functions to
be included in the server by selecting a set of modules [2]. The modules can be compiled either
statically as part of the 'httpd' binary, or as Dynamic Shared Objects (DSOs). DSO modules can
either be compiled when the server is built, or added later via the apxs utility, which allows
compilation at a later date. The mod_so module must be statically compiled into the Apache core to
enable DSO support.
Run Apache with only the required modules. This reduces the memory footprint, which improves the
server performance. Statically compiling modules will save RAM that's used for supporting
dynamically loaded modules, but you would have to recompile Apache to add or remove a module.
This is where the DSO mechanism comes in handy. Once the mod_so module is statically compiled,
any other module can be added or dropped using the 'LoadModule' command in the 'httpd.conf' file.
Of course, you will have to compile the modules using 'apxs' if they weren't compiled when the
server was built.
The worker MPM uses multiple child processes. It's multi-threaded within each child, and each
thread handles a single connection. Worker is fast and highly scalable and the memory footprint is
comparatively low. It's well suited for multiple processors. On the other hand, worker is less tolerant
of faulty modules, and a faulty thread can affect all the threads in a child process.
The prefork MPM uses multiple child processes, each child handles one connection at a time.
Prefork is well suited for single or double CPU systems, speed is comparable to that of worker, and
it's highly tolerant of faulty modules and crashing children - but the memory usage is high, and more
traffic leads to greater memory usage.
Multi-threading
A thread is a process-within-a-process; multiple threads reside within a single process. Threading
keeps per-connection overhead low, which is its main advantage over forking a full process per request.
The administrator must ensure that enough processes are available to handle incoming requests
without forking new ones, but not so many that the system hits resource limits. Several directives in
the Apache configuration file accomplish this:
The MaxClients setting limits the number of Apache processes that will be created.
Typically, memory is the limitation on this setting. If your Apache process takes up 20 MB of
memory, and you have 1000MB of free RAM, you could have up to 50 Apache processes
(1000MB/20MB = 50).
The MinSpareServers and MaxSpareServers settings keep a number of processes waiting
around, to avoid the delay imposed by forking a new process. New processes are forked continually
to keep the number of available servers between these thresholds, but incoming HTTP requests do
not have to wait for processes to be forked because spares are available.
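Putting those directives together, a prefork tuning stanza in httpd.conf might look like the sketch
below; the numbers are illustrative only and should come from the memory calculation above, not be
copied blindly:
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients           50      # e.g. free RAM divided by per-process memory use
    MaxRequestsPerChild 4000     # recycle children to contain slow memory leaks
</IfModule>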
To account for differences between platforms, while retaining the reliability of multiple processes,
Apache 2.0 provides several different models for controlling Apache processes and threads in the
form of Multi-Processing Modules (MPMs):
The prefork MPM replicates the single-threaded behavior of Apache 1.3. This is the default
MPM for UNIX systems.
The worker MPM "implements a hybrid multithreaded multi-process Web server." Several
processes are started, each with a fixed number of threads. Processes are started or stopped as
necessary to regulate the total number of threads.
The perchild MPM regulates the total number of threads by varying the number of threads in
each process. This MPM also allows Apache processes to operate as multiple user IDs, which can be
useful for managing several virtual hosts.
http://httpd.apache.org/docs/2.0/mpm.html
http://httpd.apache.org/docs/2.0/mod/worker.html
http://httpd.apache.org/docs/2.0/misc/perf-tuning.html
http://httpd.apache.org/docs/2.2/mod/prefork.html
http://books.google.co.in/books?
id=cnDuw7GV4uYC&pg=PA180&lpg=PA180&dq=Difference+between++worker+MPM+
%26+prefork+MPM&source=web&ots=4hKq5VQwf&sig=HocOBWL7lUwRrWjup1cp7sbf4eI&hl
=en&sa=X&oi=book_result&resnum=5&ct=result#PPA186,M1
http://tldp.org/LDP/LGNET/123/vishnu.html#MPM
http://www.howtoforge.com/configuring_apache_for_maximum_performance
KeepAlive and KeepAliveTimeout
KeepAlive:
This directive takes "On"/"Off" as its parameter - in simple terms, whether you want to use the
feature or not. For example, once you visit a site (www.something.com), there will be a number of
connections from your machine to the remote machine (on port 80). Once the browser finishes fetching
pages, the socket will be closed (if KeepAlive is Off). If you click on a link on that page, another
connection will be initiated. Remember that opening and closing sockets requires some overhead from
the OS and from Apache itself.
KeepAliveTimeout:
KeepAliveTimeout determines how long a persistent connection is kept open: the number of seconds
Apache will wait for a subsequent request before closing the connection. Once a request has been
received, the timeout value specified by the Timeout directive applies. Setting KeepAliveTimeout to a
high value may cause performance problems on heavily loaded servers; the higher the timeout, the more
server processes will be kept occupied waiting on connections with idle clients.
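A typical httpd.conf fragment for these directives (the values are illustrative; lower the timeout on
busy servers):
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5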
Apache monitoring
wtop (search for it) is a top-like tool for watching Apache traffic in real time.
ab
is a tool for benchmarking your Apache HTTP server.
apachectl
is a front end to the Apache HTTP server which is designed to help the administrator control the
functioning of the Apache httpd daemon.
apxs
is a tool for building and installing extension modules for the Apache HTTP server.
dbmmanage
is used to create and update the DBM format files used to store usernames and passwords for basic
authentication of HTTP users.
htdigest
is used to create and update the flat-files used to store usernames, realms and passwords for digest
authentication of HTTP users.
htpasswd
is used to create and update the flat-files used to store usernames and passwords for basic
authentication of HTTP users.
httpd
is the Apache HTTP server program.
instdso.sh
is a script which installs Apache DSO modules.
logresolve
is a post-processing program to resolve IP-addresses in Apache's access log files.
rotatelogs
is a simple program for use in conjunction with Apache's piped log file feature.
SendMail
Sendmail uses a macro language called m4. Because sendmail's configuration is so complex, it is
abstracted into macros that the m4 utility expands.
- /etc/mail/sendmail.mc
You make changes and configure sendmail in sendmail.mc;
sendmail.mc is much easier to understand
- /etc/mail/sendmail.cf
Sendmail's main configuration file (generated from sendmail.mc)
- /etc/mail/sendmail.mc
dnl --> the way of commenting in the .mc file
Sendmail is separated into two daemons:
$ ps -ax | grep -i sendmail
- One accepts connections
e.g. sendmail: accepting connections on port 25
- The other runs the queue
e.g. sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
01:00:00 --> the queue is run every hour
- /etc/mail/virtusertable
Allows us to set up virtual domains
e.g.
champu@postfix.altnix.com champu (local account)
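virtusertable is consulted as a hashed database, so after editing it the map has to be rebuilt before the
change takes effect (a sketch, using the stock makemap tool):
$ makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
$ service sendmail restart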
ACL
Allow for altnix.com
Allow for 192.168.10.0/24
Deny for dummy.org
$ vi /etc/mail/access
@altnix.com RELAY
192.168.10. RELAY
@dummy.org REJECT
$ makemap hash /etc/mail/access < /etc/mail/access
$ service sendmail restart
Q: Mail alias
A: modify /etc/aliases, run newaliases
Q: Receive mail for altnix.com
A: modify sendmail mc as above, and add domain to /etc/mail/local-host-names
$ vi /etc/mail/local-host-names
altnix.com
Debugging:
mail -v root
mailq, mailq -Ac
sendmail -q
tail -f /var/log/maillog
Testing:
note: root is not permitted to login
echo "pop" | mail -s test student
$ m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
$ chkconfig --level 35 sendmail on
$ service sendmail restart
1. Edit /etc/mail/local-host-names
altnix.com
2. $ vi /etc/mail/sendmail.mc
dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
$ m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
$ vi /etc/mail/access
192.168.10 RELAY
@dummy.com REJECT
$ chkconfig --level 35 sendmail on
$ service sendmail restart
Edit /etc/dovecot.conf
protocols = imap imaps pop3 pop3s
$ chkconfig --level 35 dovecot on
$ service dovecot restart
$ vi /etc/aliases
user5: user2
$ newaliases
$ cd /usr/share/ssl/certs
$ mv dovecot.pem dovecot.pem_OLD
$ make dovecot.pem
Country Name (2 letter code) [GB]:IN
State or Province Name (full name) [Berkshire]:Maharashtra
Locality Name (eg, city) [Newbury]:Mumbai
Organization Name (eg, company) [My Company Ltd]:Altnix
Organizational Unit Name (eg, section) []:Admin
Common Name: champu.altnix.com
$ cp dovecot.pem /usr/share/ssl/private
$ service dovecot restart
$ netstat -antp | grep :143
$ netstat -antp | grep :110
$ netstat -antp | grep :993
$ netstat -antp | grep :995
$ chkconfig --level 35 dovecot on
Fighting SPAM
There are many publicly available lists of known open mail relay
servers and spam generating mail servers on the Internet. Some are
maintained by volunteers, others are managed by public companies,
but in all cases they rely heavily on complaints from spam victims.
Some spam blacklists simply try to determine whether the e-mail is
coming from a legitimate IP address.
The IP addresses of offenders usually remain on the list for six
months to two years. In some cases, to provide additional pressure
on the spammers, the blacklists include not only the offending IP
address but also the entire subnet or network block to which it
belongs. This prevents the spammers from easily switching their
servers' IP addresses to the next available ones on their networks.
Also, if the spammer uses a public data center, it is possible that
their activities could also cause the IP addresses of legitimate
e-mailers to be blacklisted too. It is hoped that these legitimate
users will pressure the data center's management to evict the
spamming customer.
You can configure sendmail to use its dnsbl feature to both query
these lists and reject the mail if a match is found. Here are some
sample entries you can add to your /etc/mail/sendmail.mc file; they
should all be on one line.
Be sure to visit the URLs listed to learn more about the individual
services.
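The sample entries themselves are not reproduced in these notes; the general m4 form is
FEATURE(`dnsbl', ...). A sketch only - the list name and rejection message below are illustrative, so
check each blacklist's own documentation for its recommended line:
FEATURE(`dnsbl', `zen.spamhaus.org', `"550 Mail from " $&{client_addr} " refused - see https://www.spamhaus.org/"')dnl
After editing sendmail.mc, regenerate sendmail.cf with m4 and restart sendmail as shown earlier.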
Spamassassin
spamassassin doesn't delete spam, it merely adds the word "spam" to
the beginning of the subject line of suspected spam e-mails. You
can then configure the e-mail filter rules in Outlook Express or
any other mail client to either delete the suspect message or store
it in a special Spam folder.
Most RedHat and Fedora Linux software products are available in the
RPM format. When searching for the RPMs, remember that the filename
usually starts with the software package name and is followed by a
version number, as in spamassassin-2.60-2.i386.rpm. (For help
downloading, see Chapter 6, "Installing RPM Software").
Starting Spamassassin
Configuring Spamassassin
###################################################################
# See 'perldoc Mail::SpamAssassin::Conf' for
# details of what can be adjusted.
###################################################################
#
# These values can be overridden by editing
# ~/.spamassassin/user_prefs.cf (see spamassassin(1) for details)
#
# How many hits before a message is considered spam. The lower the
# number the more sensitive it is.
required_hits 5.0
ok_languages en
# Mail using locales used in these country codes will not be marked
# as being possibly spam in a foreign language.
ok_locales en
Testing spamassassin
You can test the validity of your local.cf file by using the
spamassassin command with the --lint option. This will list any
syntax problems that may exist; fix them in local.cf and run the
command again until no errors are reported.
[root@bigboy tmp]# vi /etc/mail/spamassassin/local.cf
...
...
...
[root@bigboy tmp]# spamassassin -d --lint
[root@bigboy tmp]
Startup spamassassin
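The startup commands are not shown in these notes; on RHEL/Fedora-style systems the package
normally ships an init script named spamassassin (which runs the spamd daemon), so a reasonable
sketch is:
$ chkconfig --level 35 spamassassin on
$ service spamassassin start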
SQUID
Squid is...
Offshoot of the Harvest Project
a full-featured Web/FTP Proxy and Cache server
designed to run on Unix systems
free, open-source software
the result of many contributions by unpaid (and paid) volunteers
With Squid you can...
Use less bandwidth on your connection when surfing the Web
Reduce latency [time taken to load a web page]
Protect hosts on your intranet by proxying their web traffic
Collect statistics about web traffic on your NW
Prevent users from visiting illegal sites
Ensure that only valid users surf the net
Enhance user's privacy by filtering sensitive info from web requests
Reduce the load on your web servers
Convert encrypted [HTTPS] requests on one side, to unencrypted [HTTP]
requests on the other
Squid supports...
proxying and caching of HTTP, FTP, and other
proxying for SSL encrypted traffic
cache hierarchies
ICP, HTCP, CARP, Cache Digests
transparent caching
WCCP (Squid v2.3 and above)
extensive sophisticated access controls
HTTP server acceleration aka surrogate mode
HTTP Interception Caching
SNMP
caching of DNS lookups
URL redirection
traffic shaping
numerous external auth modules
advanced disk storage options
Proxy servers
A proxy server is a machine which acts as an intermediary between the computers of a local area
network (sometimes using protocols other than TCP/IP) and the Internet
Most of the time the proxy server is used for the web, and when it is, it's an HTTP proxy. However,
there can be proxy servers for every application protocol (FTP, etc.).
Caching
Most proxies have a cache, the ability to keep pages commonly visited by users in memory (or "in
cache"), so they can provide them as quickly as possible. Indeed, the term "cache" is used often in
computer science to refer to a temporary data storage space (also sometimes called a "buffer.")
A proxy server with the ability to cache information is generally called a "proxy-cache server".
The feature, implemented on some proxy servers, is used both to reduce Internet bandwidth use and
to reduce document loading time for users.
Nevertheless, to achieve this, the proxy must compare the data it stores in cached memory with the
remote data on a regular basis, in order to ensure that the cached data is still valid.
Filtering
What's more, by using a proxy server, connections can be tracked by creating logs for systematically
recording user queries when they request connections to the Internet
Because of this, Internet connections can be filtered, by analysing both client requests and server
replies. When filtering is done by comparing a client's request to a list of authorised requests, this is
called whitelisting, and when it's done with a list of forbidden sites, it's called blacklisting. Finally,
analysing server replies that comply with a list of criteria (such as keywords) is called content
filtering.
Authentication
As a proxy is an indispensable intermediary tool for internal network users who want to access
external resources, it can sometimes be used to authenticate users, meaning to ask them to identify
themselves, such as with a username and password. It is also easy to grant access to external
resources only to individuals authorised to do so, and to record each use of external resources in log
files.
This type of mechanism, when implemented, obviously raises many issues related to individual
liberties and personal rights.
Reverse-proxy servers
A reverse-proxy is a "backwards" proxy-cache server; it's a proxy server that, rather than allowing
internal users to access the Internet, lets Internet users indirectly access certain internal servers.
The reverse-proxy server is used as an intermediary by Internet users who want to access an internal
website, by sending it requests indirectly. With a reverse-proxy, the web server is protected from
direct outside attacks, which increases the internal network's strength. What's more, a reverse-proxy's
cache function can lower the workload of the server it is assigned to, and for this reason is sometimes
called a server accelerator.
Finally, with perfected algorithms, the reverse-proxy can distribute the workload by redirecting
requests to other, similar servers; this process is called load balancing.
$ squid -k parse          Check that squid.conf is OK (syntax check)
$ squid -k check          Check whether Squid is running
$ squid -k reconfigure    Re-read squid.conf without stopping (refresh),
                          aka service squid reload
$ squid -k rotate         Rotate the log files
                          (done by the Squid initscript anyway [start()]; options are
                          specified in /etc/sysconfig/squid, which the initscript reads)
$ squid -N -d1 -D         Run Squid in the foreground, with level 1 debugging to stderr
  -N  -> Keep Squid in the foreground; does not read /etc/sysconfig/squid
  -d1 -> Display level 1 debugging to stderr
  -D  -> Skip the initial DNS tests; Squid normally tries to do DNS lookups for a few common
         domains at startup and dies with an error if it cannot resolve them through
         /etc/resolv.conf (see the dns_testnames directive)
Squid logs to the LOCAL4 syslog facility: level 0 debug messages are logged with priority
LOG_WARNING, level 1 debug messages with priority LOG_NOTICE.
Access Controls are the most important part of your Squid config file. You will use them to grant
access to authorized users and keep out the bad guys.
You can use them to restrict, or prevent access to, certain material; to control request rewriting; to
route requests through a hierarchy; and to support different qualities of service.
Access control has two parts:
1. ACL Elements or ACLEs (definitions such as src, dstdomain, port, time)
2. Access Control Rules or ACRs (these rules apply to particular services or operations within
Squid, e.g. http_access rules are applied to incoming HTTP requests)
i.e. Access Control = ACL Elements + Access Control Rules = ACLEs + ACRs
=========================
Restrict sites
Search for `Access Controls' and append the following two lines:
acl blocksites dstdomain .gmail.com
http_access deny blocksites
Save and close the file. Restart Squid:
$ /etc/init.d/squid restart
Restrict word
Let us say you would like to deny access for anyone who browses to a URL containing a given word
(here "porn"). Append the following ACL:
acl blockregexurl url_regex -i porn
http_access deny blockregexurl
Restricting Web Access By Time
You can create access control lists with time parameters. For example, you can allow only business
hour access from the home network, while always restricting access to host 192.168.1.23.
Add this to the bottom of the ACL section of squid.conf
acl home_network src 192.168.1.0/24
acl business_hours time M T W H F 9:00-17:00
acl RestrictedHost src 192.168.1.23
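The ACLs above only define the groups; they take effect through http_access rules. One possible
pairing, placed before the final deny rule (a sketch based on the stated intent):
http_access deny RestrictedHost
http_access allow home_network business_hours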
Restrict .exe .mp3 .avi with customized error page
Now add the following lines to your squid ACL section:
acl blockfiles urlpath_regex "/etc/squid/blocks.files.acl"
To display a custom error message when a file is blocked:
# Deny all blocked extension
deny_info ERR_BLOCKED_FILES blockfiles
http_access deny blockfiles
Now create /etc/squid/blocks.files.acl file:
# vi /etc/squid/blocks.files.acl
Append following text:
\.[Ee][Xx][Ee]$
\.[Aa][Vv][Ii]$
\.[Mm][Pp][Gg]$
\.[Mm][Pp][Ee][Gg]$
\.[Mm][Pp]3$
Create custom error message HTML file called ERR_BLOCKED_FILES in /etc/squid/error/
directory or /usr/share/squid/errors/English directory.
# vi ERR_BLOCKED_FILES
Append following content:
<HTML>
<HEAD>
<TITLE>ERROR: Blocked file content</TITLE>
</HEAD>
<BODY>
<H1>File is blocked due to new IT policy</H1>
<p>Please contact helpdesk for more information:</p>
Phone: 555-12435 (ext 44)<br>
Email: helpdesk@yourcorp.com<br>
Caution: Do not include HTML close tags </HTML> </BODY> as it will be closed by squid.
Restrict Port
Locate your ACL section and add configuration directive as follows:
acl block_port port 1234
http_access deny block_port
http_access allow all
If you just want to skip a particular IP (192.168.1.5) try as follows:
acl block_port port 1234
acl no_block_port_ip src 192.168.1.5
http_access deny block_port !no_block_port_ip
http_access allow all
Close and save the file.
Restart squid proxy server:
$ /etc/init.d/squid restart
And finally allow localhost and deny all other access to this proxy:
http_access allow localhost
http_access deny all
Squid Bandwidth Limiting (Bandwidth Throttling)
How to write a delay pool configuration in Squid
In this scenario we are going to create 3 pools:
128kbps
256kbps
Unlimited for Admin
delay_pools 3
delay_class 1 2
delay_access 1 deny admin
delay_access 1 deny 256kbps
delay_access 1 allow 128kbps
delay_parameters 1 -1/-1 16000/16000
delay_class 2 2
delay_access 2 deny admin
delay_access 2 allow 256kbps
delay_parameters 2 -1/-1 32000/32000
delay_class 3 2
delay_access 3 allow admin
delay_parameters 3 -1/-1 -1/-1
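The delay_access lines above refer to three ACLs (admin, 128kbps, 256kbps) that must be defined
elsewhere in squid.conf; the addresses below are placeholders only:
acl admin src 192.168.10.5
acl 128kbps src 192.168.10.0/26
acl 256kbps src 192.168.10.64/26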
Vi/Vim Examples
What is the vi editor?
While in vi you can run shell commands without exiting the editing session. The ! creates a
shell to execute the command that follows.
1. :!ls will create a shell
2. All files in the current directory are listed. Press return to exit the shell and return to the
vi session or...
3. While still in command mode, issue the :r snacks command
4. The contents of snacks, in this case, are read into the vi file. By default, it will appear
after the current line.
If you need to run a series of commands without returning to vi after the first command is
executed, enter :sh. When you have run all the commands, press Ctrl-D to exit the shell
and return to vi.
Cursor movement
h - move left
j - move down
k - move up
l - move right
w - jump by start of words (punctuation considered words)
W - jump by words (spaces separate words)
e - jump to end of words (punctuation considered words)
E - jump to end of words (no punctuation)
b - jump backward by words (punctuation considered words)
B - jump backward by words (no punctuation)
0 - (zero) start of line
^ - first non-blank character of line
$ - end of line
G - Go To command (prefix with number - 5G goes to line 5)
Note: Prefix a cursor movement command with a number to repeat it. For example, 4j moves down
4 lines.
Editing
Visual commands
Exiting
Search/Replace
VI Options:-
vi has many modes of operation. Some of these will affect the way text is presented, while
others will make editing easier for novice users.
:set all display all settings
:set display settings different than the default
:set ai sets autoindent on
:set noai turns autoindent mode off
:set nu enables line numbers
:set nonu turns line numbers off
:set list displays non-printable characters
:set nolist hides non-printable characters
:set showmode shows the current mode of operation
:set noshowmode hides mode of operation
:set ts=4 sets tabs to 4-character jumps
:set ic ignores case sensitivity
:set noic case sensitive
Search
/word Search “word” from top to bottom
?word Search “word” from bottom to top
/jo[ha]n Search “john” or “joan”
/\<the Search "the", "theatre" or "then"
/the\> Search "the" or "breathe"
/\<the\> Search "the" only
/\<....\> Search all words of 4 letters
/\<fred\> Search "fred" but not "alfred" or "frederick"
/fred\|joe Search "fred" or "joe"
/\<\d\{4}\> Search exactly 4 digits
/^\n\{3} Find 3 empty lines
:bufdo /searchstr/ Search in all open files
Replace
:%s/old/new/g Replace all occurrences of "old" by "new" in file
:%s/old/new/gc Replace all occurrences with confirmation
:2,35s/old/new/g Replace all occurrences between lines 2 and 35
:5,$s/old/new/g Replace all occurrences from line 5 to EOF
:%s/^/hello/g Replace the beginning of each line by "hello"
:%s/$/Harry/g Replace the end of each line by "Harry"
:%s/onward/forward/gi Replace "onward" by "forward", case insensitive
:%s/ *$//g Delete all trailing white space
:g/string/d Delete all lines containing "string"
:v/string/d Delete all lines which do not contain "string"
:s/Bill/Steve/ Replace the first occurrence of "Bill" by "Steve" in current line
:s/Bill/Steve/g Replace "Bill" by "Steve" in current line
:%s/Bill/Steve/g Replace "Bill" by "Steve" in the whole file
:%s/\r//g Delete DOS carriage returns (^M)
:%s/\r/\r/g Transform DOS carriage returns into line returns
:%s#<[^>]\+>##g Delete HTML tags but keep the text
:%s/^\(.*\)\n\1$/\1/ Delete lines which appear twice
Ctrl+a Increment number under the cursor
Ctrl+x Decrement number under cursor
ggVGg? Change text to Rot13
4. Case
Vu Lowercase line
VU Uppercase line
g~~ Invert case
vEU Switch word to uppercase
vE~ Modify word case
ggguG Set all text to lowercase
:set ignorecase Ignore case in searches
:set smartcase Ignore case in searches except if an uppercase letter is used
:%s/\<./\u&/g Sets first letter of each word to uppercase
:%s/\<./\l&/g Sets first letter of each word to lowercase
:%s/.*/\u& Sets first letter of each line to uppercase
:%s/.*/\l& Sets first letter of each line to lowercase
Read/Write files
:1,10 w outfile Saves lines 1 to 10 in outfile
:1,10 w >> outfile Appends lines 1 to 10 to outfile
:r infile Insert the content of infile
:23r infile Insert the content of infile under line 23
File explorer
:e . Open integrated file explorer
:Sex Split window and open integrated file explorer
:browse e Graphical file explorer
:ls List buffers
:cd .. Move to parent directory
:args List files
:args *.php Open file list
:grep expression *.php Returns a list of .php files containing expression
gf Open file name under cursor
Alignment
:%!fmt Align all lines
!}fmt Align all lines at the current position
5!!fmt Align the next 5 lines
Tabs
:tabnew Creates a new tab
gt Show next tab
:tabfirst Show first tab
:tablast Show last tab
:tabm n(position) Rearrange tabs
:tabdo %s/foo/bar/g Execute a command in all tabs
:tab ball Puts all open files in tabs
Window spliting
:e filename Edit filename in current window
:split filename Split the window and open filename
ctrl-w up arrow Puts cursor in top window
ctrl-w ctrl-w Puts cursor in next window
ctrl-w_ Maximise current window
ctrl-w= Gives the same size to all windows
10 ctrl-w+ Add 10 lines to current window
:vsplit file Split window vertically
:sview file Same as :split in readonly mode
:hide Close current window
:only Close all windows, except current
Auto-completion
Ctrl+n Ctrl+p (in insert mode) Complete word
Ctrl+x Ctrl+l Complete line
:set dictionary=dict Define dict as a dictionary
Ctrl+x Ctrl+k Complete with dictionary
8. Marks
mk Marks current position as k
'k Moves cursor to mark k
d'k Delete all until mark k
Abbreviations
:ab pr printf("This is a Demo Ver \n"); Define pr as abbreviation of printf("This is a Demo
Ver \n");
Text indent
:set autoindent Turn on auto-indent
:set smartindent Turn on intelligent auto-indent
:set shiftwidth=4 Defines 4 spaces as indent size
ctrl-t, ctrl-d Indent/un-indent in insert mode
>> Indent
<< Un-indent
Syntax highlighting
:syntax on Turn on syntax highlighting
:syntax off Turn off syntax highlighting
:set syntax=perl Force syntax highlighting
How to Exit
:q[uit] Quit Vim. This fails when changes have been made.
:q[uit]! Quit without writing.
:cq[uit] Quit always, without writing.
:wq Write the current file and exit.
:wq! Write the current file and exit always.
:wq {file} Write to {file}. Exit if not editing the last file in the argument list.
:wq! {file} Write to {file} and exit always.
:[range]wq[!] [file] Same as above, but only write the lines in [range].
ZZ Write current file, if modified, and exit.
ZQ Quit current file and exit (same as ":q!").
5. Editing a File
:e[dit] Edit the current file. This is useful to re-edit the current file, when it has been changed
outside of Vim.
:e[dit]! Edit the current file always. Discard any changes to the current buffer. This is useful if
you want to start all over again.
:e[dit] {file} Edit {file}.
:e[dit]! {file} Edit {file} always. Discard any changes to the current buffer.
gf Edit the file whose name is under or after the cursor. Mnemonic: "goto file".
Inserting Text
I Insert text before the first non-blank in the line [count] times.
gI Insert text in column 1 [count] times.
o Begin a new line below the cursor and insert text, repeat [count] times.
O Begin a new line above the cursor and insert text, repeat [count] times.
Inserting a file
http://www.catswhocode.com/blog/100-vim-commands-every-programmer-should-know
http://www.fortunecity.com/skyscraper/terminus/435/
http://www.thegeekstuff.com/2010/04/vim-editor-tutorial/
*******************************************************************
Best of Luck
*********************************************************************************