Linux
Merry Christmas and Happy Holidays to all TGS readers. To wrap up this year, I've collected the 50 UNIX / Linux sysadmin related tutorials that we've posted so far. This is a lot of reading, so bookmark this article for future reference and read it whenever you get free time.
1. Disk to disk backup using dd command: dd is a powerful UNIX utility, which is used by the Linux kernel makefiles to make boot images. It can also be used to copy data. This article explains how to back up an entire hard disk and create an image of it using the dd command.
Data loss is costly. At the very least, critical data loss will have a financial impact on companies of all sizes. In some cases, it can cost you your job; I've seen cases where sysadmins learned this the hard way. There are several ways to back up a Linux system, including rsync and rsnapshot, which we discussed a while back. This article provides 6 practical examples of using the dd command to back up a Linux system. dd is a powerful UNIX utility, which is used by the Linux kernel makefiles to make boot images. It can also be used to copy data. Only the superuser can execute the dd command.
Warning: While using the dd command, if you are not careful and don't know what you are doing, you can lose your data!
if represents the input file, and of represents the output file. So an exact copy of /dev/sda will be available on /dev/sdb. If there are any read errors, the command will fail; if you give the parameter conv=noerror, it will continue copying past read errors. Mention the input file and output file very carefully: if you give the source device as the target, or vice versa, you might lose all your data. In the hard-drive-to-hard-drive copy using the dd command given below, the sync option pads every input block with NULs up to the full block size, which keeps the output aligned when read errors are skipped.
# dd if=/dev/sda of=/dev/sdb conv=noerror,sync
# dd if=/dev/hda of=hdadisk.img
The above creates an image of the hard disk /dev/hda. Refer to our earlier article How to view initrd.image for more details.
# dd if=hdadisk.img of=/dev/hdb
The image file hdadisk.img is the image of /dev/hda, so the above command restores the image of /dev/hda onto /dev/hdb.
To back up a single partition, give the partition as the input file and give your target path or image file as shown in the dd command example below.
# dd if=/dev/hda1 of=~/partition1.img
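The same idea can be exercised safely against an ordinary file instead of a real device. In this sketch, the /tmp paths are illustrative stand-ins for /dev/hda1 and the image file, so nothing on a real disk is touched:

```shell
# make a 2 MB scratch "partition" (a plain file standing in for /dev/hda1)
dd if=/dev/zero of=/tmp/fake_partition bs=1M count=2 2>/dev/null
printf 'important data' | dd of=/tmp/fake_partition conv=notrunc 2>/dev/null

# image it exactly as you would a real partition
dd if=/tmp/fake_partition of=/tmp/partition1.img bs=4096 2>/dev/null

# a byte-for-byte comparison confirms the image is exact
cmp /tmp/fake_partition /tmp/partition1.img && echo "image verified"
```

Because dd copies raw bytes, the verification with cmp works on any filesystem content, including filesystem metadata a file-level copy would miss.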
dd reads one block of input, processes it, and writes it into the output file. You can specify the block size for the input and output file with the bs parameter; for example, while creating an iso image of a CD, bs=2048 is given so that dd reads and writes 2048 bytes at a time. Note: If the CD is auto-mounted, it is always good to unmount the CD device before creating an iso image using the dd command, to avoid any unnecessary access to the CD-ROM.

2. 15 rsync command examples: Every sysadmin should master the usage of rsync. The rsync utility is used to synchronize files and directories from one location to another. The first time, rsync replicates the whole content between the source and destination directories. After that, rsync transfers only the changed blocks or bytes to the destination location, which makes the transfer really fast.
rsync stands for remote sync. rsync is used to perform backup operations in UNIX / Linux. The rsync utility synchronizes files and directories from one location to another in an efficient way. The backup location could be on the local server or on a remote server.
Syntax
$ rsync options source destination
Source and destination could be either local or remote. In case of remote, specify the login name, remote server name and location.
In the above rsync example: -z enables compression, -v is verbose, and -r indicates recursive. Now let us see the timestamp on one of the files that was copied from source to destination. As you see below, rsync didn't preserve timestamps during the sync.
$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 bin  bin 949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root bin 949 Sep  2  2009 /root/temp/sva.xml
Now, execute the same command provided in example 1, but with the rsync option -a, as shown below:
$ rsync -azv /var/opt/installation/inventory/ /root/temp/
building file list ... done
./
sva.xml
svB.xml
sent 26499 bytes  received 1104 bytes  55206.00 bytes/sec
total size is 44867  speedup is 1.63
$
While synchronizing with a remote server, you need to specify the username and ip-address of the remote server, as well as the destination directory on the remote server. The format is username@machinename:path. As you see above, rsync asks for a password while syncing from the local to the remote server. Sometimes you don't want to enter the password while backing up files to a remote server; for example, if you have a backup shell script that copies files to a remote server using rsync, you need the ability to run rsync without entering a password. To do that, set up ssh password-less login as we explained earlier.
rpm/Basenames
...
sent 406 bytes  received 15810230 bytes  2432405.54 bytes/sec
total size is 45305958  speedup is 2.87
$ rsync -avzu [email protected]:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/
sent 122 bytes  received 505 bytes  114.00 bytes/sec
total size is 45305958  speedup is 72258.31
$ ls -lrt
total 39088
-rwxr-xr-x 1 root root 4096 Sep  2 11:35 Basenames
Example 8. Synchronize only the Directory Tree Structure (not the files)
Use the rsync -d option to synchronize only the directory tree from the source to the destination. The example below transfers only the directory entries themselves, not the files inside the directories.
$ rsync -v -d [email protected]:/var/lib/ .
Password:
receiving file list ... done
logrotate.status
CAM/
YaST2/
acpi/
sent 240 bytes  received 1830 bytes  318.46 bytes/sec
speedup is 0.46
You can also use the rsnapshot utility (which uses rsync) to back up a local linux server, or a remote linux server.
The target has a new file called new-file.txt; when synchronizing with the source using the delete option, rsync removed the file new-file.txt.
As you see in the above output, it didn't receive the new file new-file.txt.
At the destination:
$ ls -l /root/temp
-rw-r--r-- 1 root root   12288 May 28  2008 Conflictname
-rw-r--r-- 1 bin  bin  1179648 Jun 24 05:27 Dirnames
-rw-r--r-- 1 root root       0 Sep  3 06:39 Basenames
In the above example, there are two differences between the source and the destination. First, the owner and group of the file Dirnames differ. Next, the size differs for the file Basenames. Now let us see how rsync displays this difference. The -i option displays the item changes.
$ rsync -avzi [email protected]:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
>f.st.... Basenames
.f....og. Dirnames
sent 48 bytes  received 2182544 bytes  291012.27 bytes/sec
total size is 45305958  speedup is 20.76
In the output, rsync displays 9 letters in front of the file name or directory name indicating the changes. In our example, the letters in front of Basenames (and Dirnames) say the following:
>  specifies that a file is being transferred to the local host.
f  represents that it is a file.
s  represents that size changes are there.
t  represents that timestamp changes are there.
o  owner changed.
g  group changed.
$ rsync -avz --include 'P*' --exclude '*' [email protected]:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
./
Packages
Providename
Provideversion
Pubkeys
sent 129 bytes  received 10286798 bytes  2285983.78 bytes/sec
total size is 32768000  speedup is 3.19
In the above example, rsync includes only the files and directories starting with 'P' (using rsync include) and excludes all other files (using rsync exclude '*').
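The include/exclude interplay is easy to try locally; in this sketch the directory names and file names are illustrative, not the real rpm database:

```shell
# build a small source dir: two files starting with P, one that doesn't
mkdir -p /tmp/rpm_src /tmp/rpm_dst
touch /tmp/rpm_src/Packages /tmp/rpm_src/Providename /tmp/rpm_src/Basenames

# rules are evaluated in order: 'P*' matches first, then '*' excludes the rest
rsync -az --include 'P*' --exclude '*' /tmp/rpm_src/ /tmp/rpm_dst/

# only Packages and Providename arrive at the destination
ls /tmp/rpm_dst/
```

Because filter rules are order-sensitive, swapping the two options would exclude everything, so always put the more specific rule first.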
--max-size=100K makes rsync transfer only files whose size is less than or equal to 100K. You can indicate M for megabytes and G for gigabytes.
sent 406 bytes  received 15810211 bytes  2874657.64 bytes/sec
total size is 45305958  speedup is 2.87
3. Three sysadmin rules: If you are a sysadmin, you can't (and shouldn't) break these three sysadmin rules.
When I drafted this article, I originally came up with 7 sysadmin habits. But out of those 7 habits, three really stood out for me. While habits are good, sometimes rules might even be better, especially in the sysadmin world, when handling a production environment.
Apart from a full backup, do you also need regular incremental backups? How would you execute your backup, i.e. using crontab or some other scheduler? If you don't have a backup of your critical systems, stop reading this article and get back to work; start planning your backup immediately.

A while back, in a research study conducted by some group (I don't remember who did it), they mentioned that only 70% of production applications are getting backed up, and out of those, 30% of the backups are invalid or corrupted. Assume that Sam backs up his critical applications regularly but doesn't validate his backups, while Jack doesn't even bother to take any backup of his critical applications. It might sound like Sam, who has a backup, is in much better shape than Jack, who doesn't. In my opinion, both Sam and Jack are in the same situation, as Sam never validated his backup to make sure it can be restored when there is a disaster. If you are a sysadmin and don't want to follow this golden rule #1 (or like to break this rule), you should seriously consider quitting your sysadmin job and becoming a developer.
Rule #2: Master the Command Line ( and avoid the UI if possible )
There is not a single task on a Unix / Linux server that you cannot perform from the command line. While there are user interfaces available to make some sysadmin tasks easy, you really don't need them and should be using the command line all the time. So, if you are a Linux sysadmin, you should master the command line; on any system, if you want to be fluent and productive, master the command line. The main difference between a Windows sysadmin and a Linux sysadmin is GUI vs command line: Windows sysadmins are not very comfortable with the command line, while Linux sysadmins should be very comfortable with it. Even when you have a UI for a certain task, you should still prefer the command line, as you will understand how a particular service really works when you do it from the command line. In a lot of production server environments, sysadmins typically uninstall all GUI-related services and tools. If you are a Unix / Linux sysadmin and don't want to follow this rule, probably there is a deep desire inside you to become a Windows sysadmin.
On Linux, you can set up disk quota using one of the following methods: file system based disk quota allocation, or user or group based disk quota allocation. For user or group based quotas, the following are three important factors to consider:
Hard limit: For example, if you specify 2GB as the hard limit, the user will not be able to create new files after 2GB.
Soft limit: For example, if you specify 1GB as the soft limit, the user will get a "disk quota exceeded" warning message once they reach 1GB, but they'll still be able to create new files until they reach the hard limit.
Grace period: For example, if you specify 10 days as the grace period, after users exceed their soft limit they will be allowed 10 additional days to create new files. In that time period, they should try to get back under the quota limit.
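Note that quota tools can only operate on filesystems mounted with quota options enabled. A minimal /etc/fstab entry sketch (the device, mount point, and filesystem type here are illustrative) looks like this:

```text
/dev/sda3   /home   ext3   defaults,usrquota,grpquota   1 2
```

After editing fstab, remount the filesystem (for example, mount -o remount /home) so the usrquota and grpquota options take effect before running quotacheck.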
# quotacheck -avug quotacheck: Scanning /dev/sda3 [/home] done quotacheck: Checked 5182 directories and 31566 files quotacheck: Old file not found. quotacheck: Old file not found.
In the above command:
-a: check all quota-enabled filesystems
-v: verbose mode
-u: check for user disk quota
-g: check for group disk quota
The above command will create aquota.user and aquota.group files for user and group quotas under the filesystem's root directory, as shown below.
# ls -l /home/
-rw------- 1 root root 11264 Jun 21 14:49 aquota.user
-rw------- 1 root root 11264 Jun 21 14:49 aquota.group
Once the edquota command opens the quota settings for the specific user in an editor, you can set the following limits: soft and hard limits for the disk quota size for the particular user, and soft and hard limits for the total number of inodes allowed for the particular user.
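For example, edquota -u john might open something like the following in the editor (the uid and all numbers here are illustrative). The blocks and inodes columns show current usage, and the soft/hard columns beside them are the editable limits:

```text
Disk quotas for user john (uid 1001):
  Filesystem   blocks    soft      hard      inodes   soft   hard
  /dev/sda3    24580     1048576   2097152   130      0      0
```

A limit of 0 means no limit is enforced for that value.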
4. Report the disk quota usage for users and group using repquota
Use the repquota command as shown below to report the disk quota usage for the users and groups.
# repquota /home
*** Report for user quotas on device /dev/sda3
Block grace time: 7days; Inode grace time: 7days
                    Block limits                 File limits
User         used     soft   hard   grace   used   soft   hard   grace
----------------------------------------------------------------------
root    --  566488       0      0           5401      0      0
nobody  --    1448       0      0             30      0      0
ramesh  -- 1419352       0      0           1686      0      0
john    --   26604       0      0            172      0      0
5. Troubleshoot using dmesg: Using dmesg you can view boot-up messages that display information about the hardware devices the kernel detects during the boot process. This can be helpful while troubleshooting.
During the system boot-up process, the kernel gets loaded into memory and controls the entire system. As the system boots up, it prints a number of messages on the screen displaying information about the hardware devices the kernel detects during the boot process. These messages are available in the kernel ring buffer, and whenever a new message arrives, the oldest message gets overwritten. You can see all those messages after the system boots up using the dmesg command.
As we discussed earlier, you can also view hardware information using dmidecode.
# dmesg | grep eth
eth0: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem 96000000, IRQ 169, node addr e4:1f:13:62:ff:58
eth1: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem 98000000, IRQ 114, node addr e4:1f:13:62:ff:5a
eth0: Link up
6. RPM package management examples: The 15 examples provided in this article explain everything you need to know about managing RPM packages on a Red Hat based system (including CentOS).
The rpm command is used for installing, uninstalling, upgrading, querying, listing, and checking RPM packages on your Linux system. RPM stands for Red Hat Package Manager. With root privileges, you can use the rpm command with appropriate options to manage RPM software packages. In this article, let us review 15 practical examples of the rpm command, taking a MySQL client rpm to run through all the examples.
When you install an RPM, it checks whether your system is suitable for the software the RPM package contains, figures out where to install the files located inside the rpm package, installs them on your system, and adds that piece of software to its database of installed RPM packages. The following rpm command installs the MySQL client package.
# rpm -ivh MySQL-client-3.23.57-1.i386.rpm Preparing... ########################################### [100%] 1:MySQL-client ########################################### [100%]
rpm command and options:
-i: install a package
-v: verbose
-h: print hash marks as the package archive is unpacked.
You can also use dpkg on Debian, pkgadd on Solaris, or swinstall on HP-UX to install packages.
-q: query operation
-a: queries all installed packages
To identify whether a particular rpm package is installed on your system, combine the rpm and grep commands as shown below. The following command checks whether the cdrecord package is installed on your system.
# rpm -qa | grep 'cdrecord'
Note: To query a package, you should specify the exact package name. If the package name is incorrect, the rpm command will report that the package is not installed.
5. Which RPM package does a file belong to? Use rpm -qf
Let us say you have a list of files and you want to know which package owns each of them. The rpm command has an option to achieve this. The following example shows that the /usr/bin/mysqlaccess file is part of the MySQL-client-3.23.57-1 rpm.
# rpm -qf /usr/bin/mysqlaccess MySQL-client-3.23.57-1
-f : file name
-d: refers to documentation files.
If you have an RPM file that you would like to install, but want to know more information about it before installing, you can do the following:
# rpm -qip MySQL-client-3.23.57-1.i386.rpm
Name        : MySQL-client             Relocations: (not relocatable)
Version     : 3.23.57                  Vendor: MySQL AB
Release     : 1                        Build Date: Mon 09 Jun 2003 11:08:28 PM CEST
Install Date: (not installed)          Build Host: build.mysql.com
Group       : Applications/Databases   Source RPM: MySQL-3.23.57-1.src.rpm
Size        : 5305109                  License: GPL / LGPL
Signature   : (none)
Packager    : Lenz Grimmer
URL         : http://www.mysql.com/
Summary     : MySQL - Client
Description :
This package contains the standard MySQL clients.
-q: query the rpm file
-l: list the files in the package
-p: specify the package name
You can also extract files from an RPM package using rpm2cpio, as we discussed earlier.
10. Find out the state of files in a package using rpm -qsp
The following command finds the state (normal, not installed, or replaced) of all the files in an RPM package.
# rpm -qsp MySQL-client-3.23.57-1.i386.rpm
normal    /usr/bin/msql2mysql
normal    /usr/bin/mysql
normal    /usr/bin/mysql_find_rows
normal    /usr/bin/mysqlaccess
normal    /usr/bin/mysqladmin
normal    /usr/bin/mysqlbinlog
normal    /usr/bin/mysqlcheck
normal    /usr/bin/mysqldump
normal    /usr/bin/mysqlimport
normal    /usr/bin/mysqlshow
normal    /usr/share/man/man1/mysql.1.gz
normal    /usr/share/man/man1/mysqlaccess.1.gz
normal    /usr/share/man/man1/mysqladmin.1.gz
normal    /usr/share/man/man1/mysqldump.1.gz
normal    /usr/share/man/man1/mysqlshow.1.gz
In the output of rpm package verification, each character denotes one of the following:
S: file Size differs
M: Mode differs (includes permissions and file type)
5: MD5 sum differs
D: Device major/minor number mismatch
L: readlink(2) path mismatch
U: User ownership differs
G: Group ownership differs
T: mTime differs
7. 10 netstat examples: The netstat command displays various network related information such as network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
Netstat command displays various network related information such as network connections, routing tables, interface statistics, masquerade connections, multicast memberships etc., In this article, let us review 10 practical unix netstat command examples.
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address
tcp        0      0 localhost:30037
tcp        0      0 *:smtp
tcp6       0      0 localhost:ipp
Show statistics for TCP (or) UDP ports using netstat -st (or) -su
# netstat -st # netstat -su
Use netstat -p to add the PID/Program name column to the netstat output. This is very useful while debugging, to identify which program is running on a particular port.
# netstat -pt
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address            Foreign Address     State        PID/Program name
tcp        1      0 ramesh-laptop.loc:47212  192.168.185.75:www  CLOSE_WAIT   2109/firefox
tcp        0      0 ramesh-laptop.loc:52750  lax:www             ESTABLISHED  2109/firefox
If you want only one of those three items (ports, hosts, or users) left unresolved, use the following commands.
# netstat -a --numeric-ports
# netstat -a --numeric-hosts
# netstat -a --numeric-users
# netstat -r
Kernel IP routing table
Destination   Gateway       Genmask       Flags   MSS Window  irtt Iface
link-local    *             255.255.0.0   U         0 0          0 eth2
default       192.168.1.1   0.0.0.0       UG        0 0          0 eth2
Note: Use netstat -rn to display routes in numeric format without resolving for host-names.
Display extended information on the interfaces (similar to ifconfig) using netstat -ie:
# netstat -ie
Kernel Interface table
eth0   Link encap:Ethernet  HWaddr 00:10:40:11:11:11
       UP BROADCAST MULTICAST  MTU:1500  Metric:1
       RX packets:0 errors:0 dropped:0 overruns:0 frame:0
       TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
       Memory:f6ae0000-f6b00000
8. Manage packages using apt-* commands: These 13 practical examples explain how to manage packages using the apt-get, apt-cache, apt-file and dpkg commands.
How To Manage Packages Using apt-get, apt-cache, apt-file and dpkg Commands ( With 13 Practical Examples )
by Ramesh Natarajan on October 14, 2009
Debian based systems (including Ubuntu) use apt-* commands for managing packages from the command line. In this article, using an Apache 2 installation as an example, let us review how to use apt-* commands to view, install, remove, or upgrade packages.
$ apt-file list apache2 | more apache2: /usr/share/bug/apache2/control apache2: /usr/share/bug/apache2/script apache2: /usr/share/doc/apache2/NEWS.Debian.gz apache2: /usr/share/doc/apache2/README.Debian.gz apache2: /usr/share/doc/apache2/changelog.Debian.gz ...
$ sudo apt-get remove apache2
The following packages were automatically installed and are no longer required:
  apache2-utils linux-headers-2.6.28-11 libapr1 apache2.2-common
  linux-headers-2.6.28-11-generic apache2-mpm-worker libpq5 libaprutil1
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
  apache2
0 upgraded, 0 newly installed, 1 to remove and 26 not upgraded.
Removing apache2 ...
apt-get remove will not delete the configuration files of the package.
apt-get purge will delete the configuration files of the package.
9. Modprobe command examples: The modprobe utility is used to add loadable modules to the Linux kernel. You can also view and remove modules using the modprobe command.
The modprobe utility is used to add loadable modules to the Linux kernel. You can also view and remove modules using the modprobe command. Linux maintains the /lib/modules/$(uname -r) directory for modules and their configuration files (except /etc/modprobe.conf and /etc/modprobe.d).
In Linux kernel 2.6, .ko modules are used instead of .o files, since they contain additional information that the kernel uses to load the modules. The examples in this article were done using modprobe on Ubuntu.
Module files have a .ko extension. If you would like to know the full file location of a specific Linux kernel module, use the modprobe -l command and grep for the module name as shown below.
$ modprobe -l | grep vmhgfs
misc/vmhgfs.ko
$ cd /lib/modules/2.6.31-14-generic/misc
$ ls vmhgfs*
vmhgfs.ko
Note: You can also use insmod for installing new modules into the Linux kernel.
10. Ethtool examples: The ethtool utility is used to view and change ethernet device parameters. These examples will explain how you can manipulate your ethernet NIC card using ethtool.
Ethtool utility is used to view and change the ethernet device parameters.
# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Link detected: yes
The above ethtool output displays ethernet card properties such as speed, wake-on, duplex, and the link detection status. The following are the three duplex settings available:
Full duplex: enables sending and receiving of packets at the same time. This mode is used when the ethernet device is connected to a switch.
Half duplex: enables either sending or receiving of packets at a single point of time. This mode is used when the ethernet device is connected to a hub.
Auto-negotiation: if enabled, the ethernet device itself decides whether to use full duplex or half duplex based on the network the ethernet device is attached to.
# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: Unknown! (65535)
        Duplex: Unknown! (255)
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Link detected: no
# ifup eth0
After the above change, you can see that the link detection value changed to "no" and auto-negotiation is in the off state.
Once you change the speed while the adapter is online, it automatically goes offline, and you need to bring it back online using the ifup command.
# ifup eth0
eth0      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
eth0      configuration: eth-bus-pci-0000:0b:00.0
Checking for network time protocol daemon (NTPD):  running
# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: g
        Wake-on: g
        Link detected: yes
As shown in the above output, the speed changed from 1000Mb/s to 100Mb/s and the auto-negotiation parameter is unset. To change the Maximum Transmission Unit (MTU), refer to our ifconfig examples article.
tx_1024_to_1522_byte_packets: 71644887
tx_1523_to_9022_byte_packets: 0
rx_xon_frames: 0
rx_xoff_frames: 0
tx_xon_frames: 0
tx_xoff_frames: 0
rx_mac_ctrl_frames: 0
rx_filtered_packets: 14596600
rx_discards: 0
rx_fw_discards: 0
8. Identify Specific Device From Multiple Devices (Blink LED Port of NIC Card)
Let us assume you have a machine with four ethernet adapters and you want to identify the physical port of a particular ethernet card (for example, eth0). Use the ethtool option -p, which makes the LED of the corresponding physical port blink.
# ethtool -p eth0
On Red Hat based systems, add an ETHTOOL_OPTS line (for example, ETHTOOL_OPTS="speed 100 duplex full autoneg off") to the corresponding interface configuration file, such as /etc/sysconfig/network-scripts/ifcfg-eth2. The ETHTOOL_OPTS line should be the last line of the file; this will change the speed, duplex and autoneg of the eth2 device permanently. On SUSE, modify the /etc/sysconfig/network/ifcfg-eth-id file and include a new script using the POST_UP_SCRIPT variable as shown below. Include the below line as the last line in the corresponding eth1 adapter config file.
# vim /etc/sysconfig/network/ifcfg-eth-id POST_UP_SCRIPT='eth1'
Then, create a new file scripts/eth1 as shown below under the /etc/sysconfig/network directory. Make sure the script has execute permission and that the ethtool utility is present under the /sbin directory.
# cd /etc/sysconfig/network/
# vim scripts/eth1
#!/bin/bash
/sbin/ethtool -s eth1 duplex full speed 100 autoneg off
11. NFS mount using exportfs: This is a Linux beginner's guide to NFS mount using exportfs. It explains how to export a file system to a remote machine and mount it both temporarily and permanently.
Using NFS (Network File System), you can mount a disk partition of a remote machine as if it were a local disk. This article explains how to export a file system to a remote machine and mount it both temporarily and permanently.
In the mount command, the remote file system is specified as REMOTEIP:PATH, where:
REMOTEIP is the IP of the remote server that exported the file system,
: is the delimiter,
PATH is the path of the exported directory.
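On the server side, the export itself is declared in /etc/exports. A minimal sketch (the path and client IP below are illustrative) looks like this:

```text
# /etc/exports on the NFS server
/home/share   192.168.1.5(rw,sync)
```

After editing the file, run exportfs -a on the server to publish the entry; running exportfs with no arguments then lists what is currently exported.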
After unexporting, check to make sure it is not available for NFS mount as shown below.
# exportfs
12. Change timezone: Depending on your Linux distribution, use one of the methods explained in this article to change the timezone on your system.
Question: When I installed the Linux OS, I forgot to set the proper timezone. How do I change the timezone on my Linux distribution? I use CentOS (Red Hat Linux), but can you please explain how to do this on all Linux distributions, with some clear examples? Answer: Use one of the following methods to change the timezone on your Linux system. One of these methods should work for you, depending on the Linux distribution you are using.
On some distributions (for example, CentOS), the timezone is controlled by the /etc/localtime file. First, delete the current localtime file under the /etc/ directory.
# cd /etc # rm localtime
All US timezones are located under the /usr/share/zoneinfo/US directory as shown below.
# ls /usr/share/zoneinfo/US/
Alaska    Arizona  Central  Eastern  East-Indiana  Hawaii
Aleutian  Indiana-Starke  Michigan  Mountain  Pacific  Samoa
Note: For other countries' timezones, browse the /usr/share/zoneinfo directory. Link the Pacific file from the above US directory to /etc/localtime as shown below.
# cd /etc # ln -s /usr/share/zoneinfo/US/Pacific localtime
Now the timezone on your Linux system is changed to US Pacific time as shown below.
# date Mon Sep 17 23:10:14 PDT 2010
To change this to US Pacific time (Los Angeles) on Debian based systems, modify the /etc/timezone file as shown below.
# vim /etc/timezone America/Los_Angeles
You can also set the timezone from the command line using the TZ variable.
# export TZ=America/Los_Angeles
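Since TZ is consulted per process, you can preview any zone without touching system files. The zone names below are standard tzdata identifiers:

```shell
# print the current time in different zones, leaving the system zone alone
TZ=UTC date
TZ=America/Los_Angeles date
TZ=Asia/Kolkata date

# a script can export TZ so that every subsequent command uses it
export TZ=America/Los_Angeles
date +%Z   # PST or PDT depending on daylight saving
```

This makes TZ handy for cron jobs or log analysis that must run in a zone different from the system default.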
13. Install phpMyAdmin: phpMyAdmin is a web-based tool written in PHP to manage a MySQL database. Apart from viewing the tables (and other db objects), you can perform a lot of DBA functions through the web-based interface. You can also execute any SQL query from the UI.
Do you have a MySQL database in your environment? Did you know that the easiest (and most effective) way to manage a MySQL database is phpMyAdmin? phpMyAdmin is a web-based tool written in PHP to manage a MySQL database. Apart from viewing the tables (and other db objects), you can perform a lot of DBA functions through the web-based interface. You can also execute any SQL query from the UI. This article provides step-by-step instructions on how to install and configure phpMyAdmin on Linux distributions.
Make sure Apache is installed and running. PHP5 modules: if you don't have PHP, I recommend that you install PHP from source. The configure command I executed while installing PHP from source includes all the PHP modules required for phpMyAdmin.

14. Setup squid to control internet access: Squid is a caching proxy server. You can use squid to control internet access at work. This guide will give you a jump-start on how to set up squid on Linux to restrict internet access in a network.
Squid is a caching proxy server. If you are a Linux sysadmin, you can use squid to control internet access in your work environment. This beginner's guide will give you a jump-start on how to set up squid on Linux to restrict internet access in a network.
Install Squid
You should install the following three squid related packages on your system:
squid
squid-common
squid-langpack
On Debian and Ubuntu, use aptitude to install squid as shown below. On CentOS, use yum to install the squid package.
$ sudo aptitude install squid
Note: The http port number (3128) specified in squid.conf should be entered in the proxy settings section of the client browser. If squid is built with SSL support, you can use the https_port option inside squid.conf to define an https squid port.
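Putting the pieces together, a minimal squid.conf sketch that serves one internal subnet might look like the following (the subnet is an assumption for illustration):

```text
# /etc/squid/squid.conf -- minimal sketch
http_port 3128
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```

Squid evaluates http_access rules top to bottom and the first match wins, so the final deny all acts as a safety net for clients outside the allowed subnet.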
Squid maintains three log files (access.log, cache.log and store.log) under the /var/log/squid directory. From /var/log/squid/access.log, you can view who accessed which website at what time. The following is the format of a squid access.log record.
time elapsed remotehost code/status bytes method URL rfc931 peerstatus/peerhost
To disable logging in squid, update the squid.conf with the following information.
# to disable access.log
cache_access_log /dev/null
# to disable store.log
cache_store_log none
# to disable cache.log
cache_log /dev/null
Note: You can also configure squid as a transparent proxy server, which we'll discuss in a separate article. Also, refer to our earlier article on how to block an ip-address using fail2ban and iptables.
For a Linux based intrusion detection system, refer to our tripwire article.
Modify squid.conf to block any site that has any of these keywords in its url.
# vim /etc/squid/squid.conf acl blocked_sites url_regex -i "/etc/squid/blocked_sites" http_access deny blocked_sites http_access allow all
In the above example, the -i option makes the pattern matching case-insensitive. So, while users access websites, squid tries to match each url against the patterns in the blocked_sites file and denies access when one matches.
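The /etc/squid/blocked_sites file referenced by the acl is just one pattern per line; for example (the keywords here are illustrative):

```text
facebook
gambling
streaming
```

Because the url_regex acl was declared with -i, variations such as Facebook or FACEBOOK in a url match as well.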
Note: Add the sarg-report to the crontab. The reports generated by sarg are stored under /var/www/squid-reports. These are HTML reports that you can view from a browser.
$ ls /var/www/squid-reports
Daily  index.html
$ ls /var/www/squid-reports/Daily
2010Aug28-2010Aug28  images  index.html
15. Add new swap space: Use the dd, mkswap and swapon commands to add swap space. You can either use a dedicated hard drive partition to add new swap space, or create a swap file on an existing filesystem and use it as swap space.
UNIX / Linux: 2 Ways to Add Swap Space Using dd, mkswap and swapon
by Ramesh Natarajan on August 18, 2010
Question: I would like to add more swap space to my Linux system. Can you explain with clear examples how to increase the swap space? Answer: You can either use a dedicated hard drive partition to add new swap space, or create a swap file on an existing filesystem and use it as swap space.
The swapon command with the -s option displays the current swap space in KB.
# swapon -s
Filename    Type        Size     Used  Priority
/dev/sda2   partition   4192956  0     -1
Enable the swap partition for usage using swapon command as shown below.
# swapon /dev/sdc1
To make this swap space partition available even after the reboot, add the following line to the /etc/fstab file.
# cat /etc/fstab
/dev/sdc1 swap swap defaults 0 0
Verify whether the newly created swap area is available for your use.
# swapon -s
Filename    Type        Size     Used  Priority
/dev/sda2   partition   4192956  0     -1
/dev/sdc1   partition   1048568  0     -2

# free -k
             total     used     free   shared  buffers   cached
Mem:       3082356  3022364    59992        0    52056  2646472
-/+ buffers/cache:   323836  2758520
Swap:      5241524        0  5241524
Note: In the output of swapon -s command, the Type column will say partition if the swap space is created from a disk partition.
Change the permission of the swap file so that only root can access it.
# chmod 600 /root/myswapfile
To make this swap file available as a swap area even after the reboot, add the following line to the /etc/fstab file.
# cat /etc/fstab
/root/myswapfile swap swap defaults 0 0
Verify whether the newly created swap area is available for your use.
# swapon -s
Filename          Type        Size     Used  Priority
/dev/sda2         partition   4192956  0     -1
/root/myswapfile  file        1048568  0     -2

# free -k
             total     used     free   shared  buffers   cached
Mem:       3082356  3022364    59992        0    52056  2646472
-/+ buffers/cache:   323836  2758520
Swap:      5241524        0  5241524
Note: In the output of the swapon -s command, the Type column will say file if the swap space is created from a swap file. If you don't want to reboot to verify whether the system takes all the swap space mentioned in /etc/fstab, you can do the following, which will disable and enable all the swap areas mentioned in /etc/fstab:
# swapoff -a
# swapon -a
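The swap-file variant of the procedure can be sketched end to end as follows. The 1 MiB size and the /tmp/myswapfile path are illustrative only; a real swap file would be much larger, and actually enabling it requires root:

```shell
# Create a 1 MiB file filled with zeros to act as swap space
dd if=/dev/zero of=/tmp/myswapfile bs=1024 count=1024
# Restrict access: swap may hold sensitive memory contents
chmod 600 /tmp/myswapfile
# Write the swap signature onto the file
mkswap /tmp/myswapfile
# As root, enable it now and persist it across reboots:
#   swapon /tmp/myswapfile
#   echo '/tmp/myswapfile swap swap defaults 0 0' >> /etc/fstab
```

Verify the result afterwards with swapon -s, where the new area shows up with Type file.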
16. Install and configure snort: Snort is a free lightweight network intrusion detection system for both UNIX and Windows. This article explains how to install snort from source, write rules, and perform basic testing.
Note: We also discussed Tripwire (Linux host based intrusion detection system) and Fail2ban (intrusion prevention framework) earlier.
2. Install Snort
Before installing snort, make sure you have dev packages of libpcap and libpcre.
# apt-cache policy libpcap0.8-dev
libpcap0.8-dev:
  Installed: 1.0.0-2ubuntu1
  Candidate: 1.0.0-2ubuntu1
# apt-cache policy libpcre3-dev
libpcre3-dev:
  Installed: 7.8-3
  Candidate: 7.8-3
The above basic rule generates an alert when there is an ICMP packet (ping). Following is the structure of the alert rule:
<Rule Actions> <Protocol> <Source IP Address> <Source Port> <Direction Operator> <Destination IP Address> <Destination Port> (rule options)
Table: Rule structure and example

Structure                 Example
Rule Actions              alert
Protocol                  icmp
Source IP Address         any
Source Port               any
Direction Operator        ->
Destination IP Address    any
Destination Port          any
(rule options)            (msg:"ICMP Packet"; sid:477; rev:3;)
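Reading the table row by row, the example rule assembles into a single line of snort's rule language (the quoting of the msg option is assumed; sid 477 and rev 3 come from the table):

```
alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)
```

Rules like this typically live in a .rules file that is included from snort.conf.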
5. Execute snort
Execute snort from command line, as mentioned below.
# snort -c /etc/snort/snort.conf -l /var/log/snort/
Try pinging some IP from your machine, to check our ping rule. Following is the example of a snort alert for this ICMP rule.
# head /var/log/snort/alert
[**] [1:477:3] ICMP Packet [**]
[Priority: 0]
07/27-20:41:57.230345 0:0:0:0:0:0 -> l/l len:0 l/l type:0x200 pkt type:0x4 proto:0x800 len:0x64
209.85.231.102 -> 209.85.231.104 ICMP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:84 DF
Type:8 Code:0 ID:24905 Seq:1 ECHO
Alert Explanation
A couple of lines are added for each alert, which include the following:
Message (printed in the first line)
Source IP
Destination IP
Type of packet, and header information
If you have a different interface for the network connection, then use the -dev -i options. In this example, my network interface is ppp0.
# snort -dev -i ppp0 -c /etc/snort/snort.conf -l /var/log/snort/
Question: I have purchased Linux support for RHEL and OEL from Oracle Corporation. How do I register my Linux system with the Oracle support network to download and update packages? Can you explain with step-by-step instructions? Answer: After purchasing Linux support from Oracle, you should register your Linux system with Oracle's Unbreakable Linux Network using the up2date utility as explained in this article.
18. tftpboot setup: You can install Linux from the network using PXE by installing and configuring a tftpboot server as explained here.
HowTo: 10 Steps to Configure tftpboot Server in UNIX / Linux (For installing Linux from Network using PXE)
by Balakrishnan Mariyappan on July 22, 2010
In this article, let us discuss how to set up tftpboot, including installation of the necessary packages and tftpboot configurations. The TFTP boot service is primarily used to perform OS installation on a remote machine to which you don't have physical access. In order to perform the OS installation successfully, there should be a way to reboot the remote server, either using wake-on-LAN, by someone manually rebooting it, or by some other means. In those scenarios, you can set up the tftpboot services accordingly and the OS installation can be done remotely (you need to have the autoyast configuration file to automate the OS installation steps). A step-by-step procedure is presented in this article for SLES10-SP3 on the 64-bit architecture. However, the steps are pretty much similar for any other Linux distribution.
Required Packages
The following packages need to be installed for the tftpboot setup:
dhcp services packages: dhcp-3.0.7-7.5.20.x86_64.rpm and dhcp-server-3.0.7-7.5.20.x86_64.rpm
tftpboot package: tftp-0.48-1.6.x86_64.rpm
pxeboot package: syslinux-3.11-20.14.26.x86_64.rpm
Package Installation
Install the packages for the dhcp server services:
$ rpm -ivh dhcp-3.0.7-7.5.20.x86_64.rpm
Preparing...                ########################################### [100%]
   1:dhcp                   ########################################### [100%]

$ rpm -ivh dhcp-server-3.0.7-7.5.20.x86_64.rpm
Preparing...                ########################################### [100%]
   1:dhcp-server            ########################################### [100%]

$ rpm -ivh tftp-0.48-1.6.x86_64.rpm
$ rpm -ivh syslinux-3.11-20.14.26.x86_64.rpm
After installing the syslinux package, pxelinux.0 file will be created under /usr/share/pxelinux/ directory. This is required to load install kernel and initrd images on the client machine. Verify that the packages are successfully installed.
$ rpm -qa | grep dhcp
$ rpm -qa | grep tftp
Download the appropriate tftpserver from the repository of your respective Linux distribution.
Step 3: Create the mount point for ISO and mount the ISO image
Let us assume that we are going to install the SLES10 SP3 Linux distribution on a remote server. If you have the SLES10-SP3 DVD, insert it in the drive, or mount the ISO image which you have. Here, the ISO image has been mounted as follows:
# mkdir /tftpboot/sles10_sp3
# mount -o loop SLES-10-SP3-DVD-x86_64.iso /tftpboot/sles10_sp3
Refer to our earlier article on How to mount and view ISO files.
The following options are used: kernel specifies where to find the Linux install kernel on the TFTP server, and install specifies the boot arguments to pass to the install kernel. As per the entries above, the NFS install mode is used for serving install RPMs and configuration files. So, set up NFS on this machine with the /tftpboot directory in the exported list. You can add the autoyast option with the autoyast configuration file to automate the OS installation steps; otherwise you need to run through the installation steps manually.
Specify the interface in /etc/sysconfig/dhcpd to listen for DHCP requests coming from clients.
# cat /etc/sysconfig/dhcpd | grep DHCPD_INTERFACE
DHCPD_INTERFACE=eth1;
Here, this machine has the ip address of 192.168.1.101 on the eth1 device. So, specify eth1 for the DHCPD_INTERFACE as shown above.
On a related note, refer to our earlier article about 7 examples to configure network interface using ifconfig.
After restarting the NFS services, you can view the exported directory list (/tftpboot) with the following command:
# showmount -e
Finally, the tftpboot setup is ready, and now the client machine can be booted after changing the first boot device to network in the BIOS settings. If you encounter any tftp error, you can troubleshoot by retrieving some files through the tftpd service. Retrieve a file from the tftpserver using the tftp client to make sure the tftp service is working properly. Let us assume that a sample.txt file is present under the /tftpboot directory.
$ tftp -v 192.168.1.101 -c get sample.txt
19. Delete all iptables rules: When you are starting to set up iptables, you might want to delete (flush) all the existing iptables rules as shown here.
Question: How do I view all the current iptables rules? Once I view them, is there a way to delete all the current rules and start from scratch? Answer: Use the iptables --list option to view, and the iptables --flush option to delete all the rules as shown below. You should have root permission to perform this operation.
The above output shows chain headers. As you see, there are no rules in it.
After doing this, your iptables will become empty, and the iptables --list output will look like what is shown in example 1. You can also delete (flush) a particular iptables chain by giving the chain name as an argument as shown below.
# iptables --flush OUTPUT
20. Disable ping replies: Someone can flood the network with ping -f. If ping replies are disabled as explained here, we can avoid this flooding.
You may want to disable ping replies for many reasons, maybe for security reasons, or to avoid network congestion. Someone can flood the network with ping -f as shown in Ping Example 5 in our earlier Ping Tutorial article. If ping replies are disabled, we can avoid this flooding.
Please note that this setting will be erased after a reboot. To disable ping replies permanently (even after a reboot), follow the steps mentioned below. Also, to enable ping replies again, set the value back to 0 as shown below.
# echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all
The above command loads the sysctl settings from the sysctl.conf file. After ping replies are disabled using one of the above methods, when somebody tries to ping your machine they will end up waiting without getting a ping reply packet, even when the machine is up and running.
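The persistent form of the same setting is a single line in /etc/sysctl.conf; the key mirrors the /proc/sys path used above:

```
# /etc/sysctl.conf -- ignore all ICMP echo (ping) requests
net.ipv4.icmp_echo_ignore_all = 1
```

Load it without a reboot using sysctl -p; set the value back to 0 to re-enable ping replies.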
21. Block ip address using fail2ban: Fail2ban is an intrusion prevention framework that scans log files for various services (SSH, FTP, SMTP, Apache, etc.) and bans the IP that makes too many password failures. It also updates iptables firewall rules to reject these ip addresses.
Fail2ban scans log files for various services (SSH, FTP, SMTP, Apache, etc.) and bans the IP that makes too many password failures. It also updates the firewall rules to reject these ip addresses. Fail2ban is an intrusion prevention framework written in the Python programming language. The main purpose of Fail2ban is to prevent brute force login attacks. Also, refer to our earlier article on Tripwire (Linux host based intrusion detection system).
Install Fail2ban
To install fail2ban from source, download it from sourceforge. Use apt-get to install Fail2ban on a Debian based system as shown below.
# apt-get install fail2ban
You can also install Fail2ban manually by downloading the fail2ban deb package.
# dpkg -i fail2ban_0.8.1-1_all.deb
/etc/fail2ban/fail2ban.conf
The main purpose of this file is to configure fail2ban log related directives.
loglevel: Set the log level output.
logtarget: Specify the log file path.
Actions taken by Fail2ban are logged in the /var/log/fail2ban.log file. You can change the verbosity in the conf file to one of: 1 - ERROR, 2 - WARN, 3 - INFO or 4 - DEBUG.
/etc/fail2ban/jail.conf
jail.conf file contains the declaration of the service configurations. This configuration file is broken up into different contexts. The DEFAULT settings apply to all sections. The following DEFAULT section of jail.conf says that after five failed access attempts from a single IP address within 600 seconds or 10 minutes (findtime), that address will be automatically blocked for 600 seconds (bantime).
[DEFAULT]
ignoreip = 127.0.0.1
maxretry = 5
findtime = 600
bantime = 600
ignoreip: This is a space-separated list of IP addresses that cannot be blocked by fail2ban.
maxretry: Maximum number of failed login attempts before a host is blocked by fail2ban.
bantime: Time in seconds that a host is blocked if it was caught by fail2ban (600 seconds = 10 minutes).
Service Configurations
By default, some services are inserted as templates. Following is an example of the ssh services section.
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
action = iptables
enabled: Enables fail2ban checking for the ssh service.
port: Service port (as referred to in the /etc/services file).
filter: Name of the filter to be used by the service to detect matches. This name corresponds to a file name in /etc/fail2ban/filter.d, without the .conf extension. For example: filter = sshd refers to /etc/fail2ban/filter.d/sshd.conf.
logpath: The log file that fail2ban checks for failed login attempts.
action: This option tells fail2ban which action to take once a filter matches. This name corresponds to a file name in /etc/fail2ban/action.d/, without the .conf extension. For example: action = iptables refers to /etc/fail2ban/action.d/iptables.conf.
Fail2ban will monitor the /var/log/auth.log file for failed access attempts, and if it finds repeated failed ssh login attempts from the same IP address or host, fail2ban stops further login attempts from that IP address/host by blocking it with a fail2ban iptables firewall rule.
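By convention, site-specific overrides go in /etc/fail2ban/jail.local rather than editing jail.conf directly. A minimal sketch; the tightened maxretry and bantime values below are arbitrary examples:

```
[ssh]
enabled  = true
maxretry = 3
bantime  = 1200
```

Settings in jail.local take precedence over jail.conf, so package upgrades won't overwrite your changes.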
Fail2ban Filters
The directory /etc/fail2ban/filter.d contains regular expressions that are used to detect break-in attempts, password failures, etc., for various services. For example:
sshd.conf - Fail2ban ssh related filters
apache-auth.conf - Fail2ban apache service filters
We can also add our own regular expressions to find unwanted actions.
Fail2ban Actions
The directory /etc/fail2ban/action.d contains different scripts defining actions which will execute once a filter matches. Only one filter is allowed per service, but it is possible to specify several actions, on separate lines. For example:
iptables.conf - blocks and unblocks IP addresses
mail.conf - sends mail to the configured user
22. Package management using dpkg: On Debian, you can install or remove deb packages using the dpkg utility.
Question: I would like to know how to install, uninstall, and verify deb packages on Debian. Can you explain with an example? Answer: Use dpkg to install and remove a deb package as explained below. On Debian, dpkg (Debian package system) allows you to install and remove software packages. dpkg is the simplest way to install and uninstall a package. Debian now supplies the tools apt (Advanced Package Tool) and aptitude to help administrators add or remove software more easily. Refer to our earlier article Manage packages using apt-get for more details.
The following example installs the Debian package for tcl tool.
$ dpkg -i tcl8.4_8.4.19-2_amd64.deb
Selecting previously deselected package tcl8.4.
(Reading database ... 94692 files and directories currently installed.)
Unpacking tcl8.4 (from tcl8.4_8.4.19-2_amd64.deb) ...
Setting up tcl8.4 (8.4.19-2) ...
Processing triggers for menu ... Processing triggers for man-db ...
You can verify the installation of package using dpkg -l packagename as shown below.
$ dpkg -l | grep 'tcl'
ii  tcl8.4    8.4.19-2    Tcl (the Tool Command Language) v8.4 - run-t
The above command shows that the tcl package is installed properly. The ii prefix specifies the status install ok installed.
rc stands for removed ok config-files; the remove action didn't purge the configuration files. The status of each installed package is available in /var/lib/dpkg/status. The status of the tcl8.4 package looks like:
Package: tcl8.4 Status: deinstall ok config-files Priority: optional Section: interpreters Installed-Size: 3308
So the package is completely removed, and the status in the /var/lib/dpkg/status is given below.
Package: tcl8.4 Status: purge ok not-installed Priority: optional Section: interpreters
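Because /var/lib/dpkg/status is a plain-text file of stanzas like the ones above, its Status field is easy to pull out with awk. The sketch below runs against a small hypothetical copy of the file rather than the real one:

```shell
# Hypothetical stanza in the style of /var/lib/dpkg/status
cat > /tmp/status-sample <<'EOF'
Package: tcl8.4
Status: install ok installed
Priority: optional
Section: interpreters
EOF
# Print the Status line of a given package
awk -v pkg=tcl8.4 '
  $1 == "Package:" { in_pkg = ($2 == pkg) }                 # track which stanza we are in
  in_pkg && $1 == "Status:" { sub(/^Status: /, ""); print } # strip the label, print the value
' /tmp/status-sample
```

Pointing the same script at /var/lib/dpkg/status reports the real state of any installed package.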
23. Alfresco content management system: Alfresco is the best open source content management system. Everything you need to know to install and configure Alfresco is explained here.
Alfresco is the best open source content management system. It has a rock solid document management foundation, with several functionalities built on top of it. Alfresco provides web based content management, a collaboration platform, Content Management Interoperability Services (CMIS), records management and image management. Alfresco has an enterprise edition and a free community edition. See the difference between them here. If you have an in-house IT team, just go with the Alfresco community edition. It is straightforward to install and configure Alfresco. In this article, let us review how to install and configure the alfresco community edition on a UNIX / Linux platform in 12 easy steps.
connector-java-5.1.7-bin.jar
While the alfresco tomcat server is starting up, check the /opt/alfresco/alfresco.log for any possible issues.
When alfresco.sh is executed for the 1st time, it will do some database setup, and you'll see the following messages in alfresco.log (only the 1st time):
Executing database script /opt/alfresco/tomcat/temp/Alfresco/*.sql
All executed statements: /opt/alfresco/tomcat/temp/Alfresco/*.sql
Applied patch [org.alfresco.repo.admin.patch.PatchExecuter]
Look for the line in the log file where it says Alfresco started, which indicates that Alfresco was started successfully. Following are a few sample lines from alfresco.log.
# tail -f /opt/alfresco/alfresco.log 21:29:25,431 INFO [org.alfresco.repo.domain.schema.SchemaBootstrap] Executing database script /opt/alfresco/tomcat/temp/Alfresco/AlfrescoSchemaMySQLInnoDBDialect-Update-3892772511531851057.sql (Copied from classpath:alfresco/dbscripts/create/3.3/org.hibernate.dialect.MySQLInnoDBDialect /AlfrescoCreate-3.3-RepoTables.sql). 21:29:27,245 INFO [org.alfresco.repo.domain.schema.SchemaBootstrap] All executed statements: /opt/alfresco/tomcat/temp/Alfresco/AlfrescoSchemaMySQLInnoDBDialect-All_Statements-4724137490855924607.sql. === Applied patch === ID: patch.db-V3.0-0-CreateActivitiesExtras RESULT: Script completed ===================================== 21:30:03,756 INFO [org.alfresco.service.descriptor.DescriptorService] Alfresco JVM - v1.6.0_21-b06; maximum heap size 910.250MB 21:30:03,756 INFO [org.alfresco.service.descriptor.DescriptorService] Alfresco started (Community): Current version 3.3.0 (2765) schema 4009 - Originally installed version 3.3.0 (2765) schema 4009
12. Modify the configuration file to reflect the new alfresco password.
Update the db.password parameter in the alfresco-global.properties file as shown below.
# vi /opt/alfresco/tomcat/shared/classes/alfresco-global.properties db.name=alfresco db.username=alfresco db.password=donttellanybody
After this, stop/start MySQL database and restart Alfresco Tomcat server. As a final step, make sure to take a backup of alfresco mysql database using mysqldump or mysqlhotcopy and /opt/alfresco directory.
# service mysqld restart
# /opt/alfresco/alfresco.sh stop
# /opt/alfresco/alfresco.sh start
24. Bugzilla bug tracking system: Bugzilla is the best open source bug tracking system. Everything you need to know to install and configure Bugzilla is explained here.
Bugzilla is the best open source bug tracking system. It is very simple to use, with lots of features. Bugzilla allows you to track bugs and collaborate with developers and other teams in your organization effectively. This is a detailed step-by-step bugzilla installation guide for Linux.
Most Linux distributions come with perl. If you don't have it on yours, download and install it from the corresponding distribution's website.
If you don't have mysql, install it using yum groupinstall, or based on the LAMP install article, or based on the mysql rpm article.
3. Install Apache
If you already have apache installed, make sure you are able to access it at http://{your-ipaddress}. If you don't have apache, install it using yum based on the LAMP install article, or install apache from source.
To attempt an automatic install of every required and optional module with one command, do:
/usr/bin/perl install-module.pl --all
Please review the output of the above install-module.pl to make sure everything got installed properly. There is a possibility that some of the modules failed to install (maybe because some required OS packages were missing). Execute checksetup.pl to verify whether all the modules got installed properly. Following is the output of the 2nd run of checksetup.pl:
# ./checksetup.pl --check-modules
COMMANDS TO INSTALL OPTIONAL MODULES:
         GD: /usr/bin/perl install-module.pl GD
      Chart: /usr/bin/perl install-module.pl Chart::Base
Template-GD: /usr/bin/perl install-module.pl Template::Plugin::GD::Image
 GDTextUtil: /usr/bin/perl install-module.pl GD::Text
    GDGraph: /usr/bin/perl install-module.pl GD::Graph
   XML-Twig: /usr/bin/perl install-module.pl XML::Twig
 PerlMagick: /usr/bin/perl install-module.pl Image::Magick
  SOAP-Lite: /usr/bin/perl install-module.pl SOAP::Lite
   mod_perl: /usr/bin/perl install-module.pl mod_perl2
YOU MUST RUN ONE OF THE FOLLOWING COMMANDS (depending on which database you use):
PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
MySQL: /usr/bin/perl install-module.pl DBD::mysql
Oracle: /usr/bin/perl install-module.pl DBD::Oracle
The following Perl modules are optional:
Checking for GD (v1.20) ok: found v2.44
Checking for Chart (v1.0) ok: found v2.4.1
Checking for Template-GD (any) ok: found v1.56
Checking for GDTextUtil (any) ok: found v0.86
Checking for GDGraph (any) ok: found v1.44
Checking for XML-Twig (any) ok: found v3.34
Checking for MIME-tools (v5.406) ok: found v5.427
Checking for libwww-perl (any) ok: found v5.834
Checking for PatchReader (v0.9.4) ok: found v0.9.5
Checking for PerlMagick (any) ok: found v6.2.8
Checking for perl-ldap (any) ok: found v0.4001
Checking for Authen-SASL (any) ok: found v2.1401
Checking for RadiusPerl (any) ok: found v0.17
Checking for SOAP-Lite (v0.710.06) ok: found v0.711
Checking for HTML-Parser (v3.40) ok: found v3.65
Checking for HTML-Scrubber (any) ok: found v0.08
Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316
Checking for Email-Reply (any) ok: found v1.202
Checking for TheSchwartz (any) ok: found v1.10
Checking for Daemon-Generic (any) ok: found v0.61
Checking for mod_perl (v1.999022) ok: found v2.000004
Update ./localconfig and rerun checksetup.pl. The following variables are new to ./localconfig since you last ran checksetup.pl: create_htaccess, webservergroup, db_driver, db_host, db_name, db_user, db_pass, db_port, db_sock, db_check, index_html, cvsbin, interdiffbin, diffpath, site_wide_secret
Since the localconfig file already exists, the second time you execute checksetup.pl, it will create the mysql database based on the information from the localconfig file.
# ./checksetup.pl Creating database bugs... Building Schema object from database... Adding new table bz_schema ... Initializing the new Schema storage... Adding new table attach_data ... Adding new table attachments ... Adding new table bug_group_map ... Adding new table bug_see_also ... Adding new table bug_severity ... Adding new table bug_status ... Inserting values into the 'priority' table: Inserting values into the 'bug_status' table: Inserting values into the 'rep_platform' table: Creating ./data directory... Creating ./data/attachments directory... Creating ./data/duplicates directory... Adding foreign key: attachments.bug_id -> bugs.bug_id... Adding foreign key: attachments.submitter_id -> profiles.userid... Adding foreign key: bug_group_map.bug_id -> bugs.bug_id...
Expat.xs:12:19: error: expat.h: No such file or directory Expat.xs:60: error: expected specifier-qualifier-list before XML_Parser
In my case, ImageMagick-devel was missing. So, I installed it as shown below. After that, the Image::Magick perl module got installed successfully.
# yum install ImageMagick-devel
# /usr/bin/perl install-module.pl Image::Magick
Starting httpd: Syntax error on line 994 of /etc/httpd/conf/httpd.conf: Can't locate Template/Config.pm in @INC
Starting httpd: Syntax error on line 994 of /etc/httpd/conf/httpd.conf: Can't locate DateTime/Locale.pm in @INC
Also, in your apache error_log, if you see a Digest/SHA.pm issue, you should install it as shown below.
# tail -f /etc/httpd/logs/error_log
Can't locate Digest/SHA.pm in @INC (@INC contains:
# cpan
cpan> install Digest::SHA
25. Rpm, deb, depot and msi packages: This article explains how to view and extract files from various package types used by different Linux / UNIX distributions.
How to View and Extract Files from rpm, deb, depot and msi Packages
by Sasikala on April 19, 2010
Question: How do I view or extract the files that are bundled inside the packages of various operating systems? For example, I would like to know how to view (and extract) the content of an rpm, deb, depot, or msi file. Answer: You can use tools like rpm, rpm2cpio, ar, dpkg, tar, swlist, swcopy, and lessmsi as explained below.
/usr/src/ovpc/ovpc-2.1.10/pcs
Explanation of the command rpm -qlp ovpc-2.1.10.rpm:
rpm - rpm command
q - query the rpm file
l - list the files in the package
p - specify the package name
Extracting the files from a RPM package using rpm2cpio and cpio
RPM is a sort of a cpio archive. First, convert the rpm to cpio archive using rpm2cpio command. Next, use cpio command to extract the files from the archive as shown below.
$ rpm2cpio ovpc-2.1.10.rpm | cpio -idmv
./usr/src/ovpc/-5.10.0
./usr/src/ovpc/ovpc-2.1.10/examples
./usr/src/ovpc/ovpc-2.1.10/examples/bin
./usr/src/ovpc/ovpc-2.1.10/examples/lib
./usr/src/ovpc/ovpc-2.1.10/examples/test
.
.
.
./usr/src/ovpc/ovpc-2.1.10/pcs

$ ls
usr
$ ls /tmp/ov ovpc
DEB files are ar archives, which always contains the three files debian-binary, control.tar.gz, and data.tar.gz. We can use ar command and tar command to extract and view the files from the deb package, as shown below. First, extract the content of *.deb archive file using ar command.
$ ar -vx ovpc_1.06.94-3_i386.deb
x - debian-binary
x - control.tar.gz
x - data.tar.gz
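Since a .deb really is an ar archive of exactly those three members, the layout can be reproduced with a toy archive (the file names below are placeholders, not a real package):

```shell
cd "$(mktemp -d)"
echo '2.0' > debian-binary              # format version string
tar -czf control.tar.gz -T /dev/null    # empty tarball standing in for control data
tar -czf data.tar.gz -T /dev/null       # empty tarball standing in for the payload
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
ar t demo.deb                           # list members, same order as a real .deb
```

ar t lists the member names, which is a quick way to confirm a .deb is structurally intact before extracting it.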
swlist is an HP-UX command which is used to display information about software. View the content of the depot package using the swlist command as shown below.
$ swlist -l file -s /root/ovcsw_3672.depot
# Initializing...
# Contacting target "osgsw"...
#
# Target: osgsw:/root/ovcsw_3672.depot
#
  Ocsw Server product 8.50.000
  Ocs Server Ovw 9.00.140
/etc/opt/OV/share/conf /etc/opt/OV/share/conf/OpC
Since depot files are tar files, you can extract them using normal tar extraction as shown below.
$ tar -xvf filename
4. MSI in Windows
The Microsoft installer is an engine for the installation, maintenance, and removal of software on Windows systems.
26. Backup using rsnapshot: You can back up either a local host or a remote host using the rsnapshot utility, which is built on rsync. rsnapshot uses a combination of rsync and hard links to maintain full and incremental backups. Once you've set up and configured rsnapshot, there is absolutely no maintenance involved. rsnapshot will automatically take care of deleting and rotating the old backups.
In the previous article we reviewed how to backup local unix host using rsnapshot utility. In this article, let us review how to backup remote Linux host using this utility.
Check out Linux crontab examples article to understand how to setup and configure crontab.
Troubleshooting Tips
Problem: rsnapshot failed with ERROR: /usr/bin/rsync returned 20 as shown below.
[root@local-host]# /usr/local/bin/rsnapshot hourly
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(260) [receiver=2.6.8]
----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/local/bin/rsnapshot hourly
----------------------------------------------------------------------------
ERROR: /usr/bin/rsync returned 20 while processing [email protected]:/etc/
Solution: This typically happens when the user who is performing the rsnapshot (rsync) doesn't have access to the remote directory that you are trying to back up. Make sure the remote host backup directory has the appropriate permissions for the user who is trying to execute the rsnapshot.
27. Create Linux user: This article explains how to create users with default configuration, create users with custom configuration, create users interactively, and create users in bulk.
Creating users in a Linux or Unix system is a routine task for system administrators. Sometimes you may create a single user with the default configuration, create a single user with a custom configuration, or create several users at the same time using some bulk user creation method. In this article, let us review how to create Linux users in 4 different methods using the useradd, adduser and newusers commands with practical examples.
While creating users as mentioned above, all the default options will be taken except group id. To view the default options, run the following command with the -D option.
$ useradd -D
GROUP=1001
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/sh
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no
GROUP: This is the only option which will not be taken as default. If you don't specify the -n option, a group with the same name as the user will be created and the user will be added to that group. To avoid that, and to make the user a member of the default group, you need to give the -n option.
HOME: This is the default path prefix for the home directory. The home directory will be created as /home/USERNAME.
INACTIVE: -1 by default disables the feature of disabling the account once the user password has expired. To change this behavior, give a positive number, which means that if the password expires after the given number of days, the user account will be disabled.
EXPIRE: The date on which the user account will be disabled.
SHELL: User's login shell.
SKEL: Contents of the skel directory will be copied to the user's home directory.
CREATE_MAIL_SPOOL: According to the value, creates or does not create the mail spool.
Example 1: Creating user with all the default options, and with his own group.
Following example creates user ramesh with group ramesh. Use Linux passwd command to change the password for the user immediately after user creation.
# useradd ramesh # passwd ramesh Changing password for user ramesh. New UNIX password: Retype new UNIX password: passwd: all authentication tokens updated successfully. # grep ramesh /etc/passwd ramesh:x:500:500::/home/ramesh:/bin/bash # grep ramesh /etc/group ramesh:x:500: [Note: default useradd command created ramesh as username and group]
Example 2: Creating a user with all the default options, and with the default group.
# useradd -n sathiya
# grep sathiya /etc/passwd
sathiya:x:511:100::/home/sathiya:/bin/bash
# grep sathiya /etc/group
[Note: No rows returned, as group sathiya was not created]
# grep 100 /etc/group
users:x:100:
[Note: useradd -n command created user sathiya with default group id 100]

# passwd sathiya
Changing password for user sathiya.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[Note: Always set the password immediately after user creation]
Example 3: Changing the default values used by useradd, with the -D option.

# useradd -D -s /bin/ksh
# useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/ksh
SKEL=/etc/skel
[Note: Now the default shell changed to /bin/ksh]

# adduser priya
# grep priya /etc/passwd
priya:x:512:512::/home/priya:/bin/ksh
[Note: New users are getting created with /bin/ksh]

# useradd -D -s /bin/bash
[Note: Set it back to /bin/bash, as the above is only for testing purposes]
-s SHELL : Login shell for the user.
-m : Create user's home directory if it does not exist.
-d HomeDir : Home directory of the user.
-g Group : Group name or number of the user.
UserName : Login id of the user.
Example 4: Create a Linux User with Custom Configuration Using useradd Command
The following example creates an account (lebron) with home directory /home/king, default shell /bin/csh, primary group root, and the comment "LeBron James".
# useradd -s /bin/csh -m -d /home/king -c "LeBron James" -g root lebron
# grep lebron /etc/passwd
lebron:x:513:0:LeBron James:/home/king:/bin/csh
Note: You can give the password using the -p option, which expects an encrypted password. Alternatively, you can use the passwd command to set the password of the user.
Note: While specifying passwords for users, please follow the password best practices, including the 8-4 password rule that we discussed a while back.

Now create accounts for the Simpsons family together using the newusers command as shown below.
# newusers homer-family.txt
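newusers reads one /etc/passwd-style line per account (name:password:uid:gid:gecos:home:shell). The contents of homer-family.txt aren't shown in the article; a hypothetical file (the names, IDs and passwords below are purely illustrative) could be built and sanity-checked like this:

```shell
# Hypothetical newusers input: one /etc/passwd-style line per account.
# Fields: name:password:uid:gid:gecos:home:shell
cat > homer-family.txt <<'EOF'
homer:Ch8z9t4y:1008:1000:Homer Simpson:/home/homer:/bin/bash
marge:m5Xw2r7q:1009:1000:Marge Simpson:/home/marge:/bin/bash
bart:b3Yd5f2s:1010:1000:Bart Simpson:/home/bart:/bin/bash
EOF

# Every line must have exactly 7 colon-separated fields.
awk -F: 'NF != 7 { bad = 1 } END { print (bad ? "invalid" : "ok") }' homer-family.txt   # prints: ok
```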
28. Mount and view ISO file: ISO files are typically used to distribute operating systems. Most of the Linux operating systems that you download will be in ISO format. This explains how to view and mount any ISO file, both as a regular user and as root.
How To Mount and View ISO File as Root and Regular User in Linux
by Ramesh Natarajan on June 22, 2009
ISO stands for International Organization for Standardization, which has defined the format for a disk image. In simple terms, an ISO file is a disk image. ISO files are typically used to distribute operating systems. Most of the Linux operating systems that you download will be in ISO format. If you have downloaded a Linux ISO file, you typically burn it onto a CD or DVD as an ISO image. Once you've burned the ISO image to a CD or DVD, you can boot the system to install the Linux OS. But sometimes, you may just want to mount the ISO file and view its content without burning it to CD or DVD. In this article, let us review how to mount and view an ISO file as root and as a regular user in the Linux operating system.
For mounting, you need to be logged in as root or you should have sudo permission. Read below to find out how to mount an ISO file as a regular non-root user.
Problem:
# mount /downloads/Fedora-11-i386-DVD.iso /tmp/mnt mount: /downloads/Fedora-11-i386-DVD.iso is not a block device (maybe try `-o loop'?)
Solution: As suggested by the mount command, use -o loop as the option.
# mount /downloads/Fedora-11-i386-DVD.iso /tmp/mnt -o loop
# cp some-file-inside-iso /home/test
Steps to extract the content from an ISO file as a non-root user:

1. Open mc.
2. Navigate to the directory where the ISO file is located.
3. Select the ISO file and press enter to view the content of the ISO file.
4. When you are inside the ISO file, you will be able to view its contents. To copy a particular file from the ISO file, you can use shell commands at the shell prompt:
$ cp some-file-inside-iso /tmp/mnt
5. You can also do this copy using the mc commands.

29. Manage password expiration and aging: The Linux chage command can be used to perform several practical password aging activities, including how to force users to change their password.
Best practice recommends that users change their passwords at a regular interval. But typically, developers and other users of a Linux system won't change their password unless they are forced to. It's the system administrator's responsibility to find a way to force developers to change their password. Forcing users to change their password with a gun to their head is not an option, even if some security-conscious sysadmins may be tempted! In this article, let us review how you can use the Linux chage command to perform several practical password aging activities, including how to force users to change their password. On Debian, you can install chage by executing the following command:
# apt-get install chage
Note: It is very easy to make a typo in this command. Instead of chage, you may end up typing change. Please remember chage stands for "change age", i.e. the chage command abbreviation is similar to chmod, chown, etc.
If user dhinesh tries to execute the same command for user ramesh, he'll get the following permission denied message.
$ chage --list ramesh chage: permission denied
Note: However, the root user can execute the chage command for any user account. When user dhinesh changes his password on Apr 23rd, 2009, it will update the "Last password change" value as shown below. Please refer to our earlier article Best Practices and Ultimate Guide For Creating Super Strong Password, which will help you follow best practices while changing the password for your account.
$ date
Thu Apr 23 00:15:20 PDT 2009

$ passwd dhinesh
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

$ chage --list dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : never
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 99999
Number of days of warning before password expires  : 7
# chage -M 10 dhinesh
# chage --list dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : May 03, 2009
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 10
Number of days of warning before password expires  : 7
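The new "Password expires" value is simply the last-change date plus the -M value. A quick cross-check with GNU date (assuming GNU coreutils; the dates are the ones from the example above):

```shell
# Last password change (Apr 23, 2009) + "-M 10" maximum days
# should give the "Password expires" date reported by chage.
last_change="2009-04-23"
max_days=10
LC_ALL=C date -d "$last_change + $max_days days" "+%b %d, %Y"   # prints: May 03, 2009
```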
In the following example, the Password inactive date is set to 10 days from the Password expires value. Once an account is locked, only system administrators will be able to unlock it.
# chage -I 10 dhinesh
# chage -l dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : May 03, 2009
Password inactive                                  : May 13, 2009
Account expires                                    : May 31, 2009
Minimum number of days between password change     : 0
Maximum number of days between password change     : 10
Number of days of warning before password expires  : 7
# chage -m 0 -M 99999 -I -1 -E -1 dhinesh
# chage --list dhinesh
Last password change                               : Apr 23, 2009
Password expires                                   : never
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 0
Maximum number of days between password change     : 99999
Number of days of warning before password expires  : 7
This article was written by Dhineshkumar Manikannan. He is working at bk Systems (p) Ltd, and is interested in contributing to open source. The Geek Stuff welcomes your tips and guest articles.

30. ifconfig examples: The interface configurator command ifconfig is used to initialize the network interface and to enable or disable interfaces, as shown in these 7 examples.
The ifconfig command is used to configure network interfaces. ifconfig stands for interface configurator. It is widely used to initialize the network interface and to enable or disable interfaces. In this article, let us review 7 common usages of the ifconfig command.
3. Disable an Interface
# ifconfig eth0 down
4. Enable an Interface
# ifconfig eth0 up
5. Assign ip-address, netmask and broadcast at the same time to interface eth0.
# ifconfig eth0 192.168.2.2 netmask 255.255.255.0 broadcast 192.168.2.255
6. Change MTU
This will change the maximum transmission unit (MTU) to XX. MTU is the maximum number of octets the interface is able to handle in one transaction. For Ethernet, the default maximum transmission unit is 1500.
# ifconfig eth0 mtu XX
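Before applying a new MTU, it can help to sanity-check the value. A minimal sketch (the valid_mtu helper and the 68-9000 bounds are illustrative limits for typical Ethernet NICs, not values from the article):

```shell
# Illustrative helper: succeed if the MTU value is within a sane range.
valid_mtu() {
  [ "$1" -ge 68 ] && [ "$1" -le 9000 ]
}

if valid_mtu 1500; then
  echo "mtu 1500 ok"          # the Ethernet default passes the check
else
  echo "mtu 1500 out of range"
fi
```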
7. Promiscuous mode
By default, when a network card receives a packet, it checks whether the packet belongs to itself. If not, the interface card normally drops the packet. But in promiscuous mode, the card doesn't drop the packet. Instead, it will accept all the packets that flow through the network card. Superuser privilege is required to set an interface in promiscuous mode. Most network monitoring tools use promiscuous mode to capture packets and analyze the network traffic. The following will put the interface in promiscuous mode.
# ifconfig eth0 promisc
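You can confirm the mode by looking for the PROMISC flag in the interface flags that ifconfig prints. A small sketch that checks a flag line (the sample line is illustrative output, not captured from a real system):

```shell
# Report whether an ifconfig flags line indicates promiscuous mode.
promisc_state() {
  case "$1" in
    *PROMISC*) echo "promiscuous mode: on" ;;
    *)         echo "promiscuous mode: off" ;;
  esac
}

# Illustrative flags line, as ifconfig might print for eth0.
promisc_state "UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1"   # prints: promiscuous mode: on
```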
This article was written by Lakshmanan G. He is working at bk Systems (p) Ltd, and is interested in contributing to open source. The Geek Stuff welcomes your tips and guest articles.
31. Oracle db startup and shutdown: Every sysadmin should know some basic DBA operations. This explains how to shut down and start the Oracle database.
For a DBA, starting up and shutting down an Oracle database is a routine and basic operation. Sometimes a Linux administrator or programmer may end up doing some basic DBA operations on a development database. So, it is important for non-DBAs to understand some basic database administration activities.
In this article, let us review how to start and stop an oracle database.
You can connect using either / as sysdba or an oracle account that has DBA privilege.
$ sqlplus '/ as sysdba' SQL*Plus: Release 10.2.0.3.0 - Production on Sun Jan 18 11:11:28 2009 Copyright (c) 1982, 2006, Oracle. All Rights Reserved. Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production With the Partitioning and Data Mining options SQL>
If you want to startup Oracle with PFILE, pass it as a parameter as shown below.
SQL> STARTUP PFILE=/u01/app/oracle/product/10.2.0/dbs/init.ora
1. Normal Shutdown
During a normal shutdown, before the Oracle database is shut down, Oracle will wait for all active users to disconnect their sessions. As the parameter name (normal) suggests, use this option to shut down the database under normal conditions.
SQL> shutdown Database closed. Database dismounted. ORACLE instance shut down. SQL>
2. Shutdown Immediate
During an immediate shutdown, before the Oracle database is shut down, Oracle will roll back active transactions and disconnect all active users. Use this option when there is a problem with your database and you don't have enough time to request users to log off.
SQL> shutdown immediate; Database closed. Database dismounted. ORACLE instance shut down. SQL>
3. Shutdown Abort
During a shutdown abort, all user sessions will be terminated immediately. Uncommitted transactions will not be rolled back. Use this option only in emergency situations, when shutdown and shutdown immediate don't work.
$ sqlplus '/ as sysdba'
SQL*Plus: Release 10.2.0.3.0 - Production on Sun Jan 18 11:11:33 2009
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to an idle instance.
32. PostgreSQL install and configure: Similar to MySQL, PostgreSQL is a very famous, feature-packed, free and open source database. This is a jumpstart guide to install and configure PostgreSQL from source on Linux.
Similar to MySQL, PostgreSQL is a very famous, feature-packed, free and open source database. Earlier we've discussed several installations, including LAMP stack installation, Apache2 installation from source, PHP5 installation from source and MySQL installation. In this article, let us review how to install the PostgreSQL database on Linux from source code.
# make make[3]: Leaving directory `/usr/save/postgresql-8.3.7/contrib/spi' rm -rf ./testtablespace mkdir ./testtablespace make[2]: Leaving directory `/usr/save/postgresql-8.3.7/src/test/regress' make[1]: Leaving directory `/usr/save/postgresql-8.3.7/src' make -C config all make[1]: Entering directory `/usr/save/postgresql-8.3.7/config' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/usr/save/postgresql-8.3.7/config' All of PostgreSQL successfully made. Ready to install. # make install make -C test/regress install make[2]: Entering directory `/usr/save/postgresql-8.3.7/src/test/regress' /bin/sh ../../../config/install-sh -c pg_regress '/usr/local/pgsql/lib/pgxs/src/test/regress/pg_regress' make[2]: Leaving directory `/usr/save/postgresql-8.3.7/src/test/regress' make[1]: Leaving directory `/usr/save/postgresql-8.3.7/src' make -C config install make[1]: Entering directory `/usr/save/postgresql-8.3.7/config' mkdir -p -- /usr/local/pgsql/lib/pgxs/config /bin/sh ../config/install-sh -c -m 755 ./install-sh '/usr/local/pgsql/lib/pgxs/config/install-sh' /bin/sh ../config/install-sh -c -m 755 ./mkinstalldirs '/usr/local/pgsql/lib/pgxs/config/mkinstalldirs' make[1]: Leaving directory `/usr/save/postgresql-8.3.7/config' PostgreSQL installation complete.
PostgreSQL ./configure options

Following are various options that can be passed to ./configure:

--prefix=PREFIX : install architecture-independent files in PREFIX. Default installation location is /usr/local/pgsql
--enable-integer-datetimes : enable 64-bit integer date/time support
--enable-nls[=LANGUAGES] : enable Native Language Support
--disable-shared : do not build shared libraries
--disable-rpath : do not embed shared library search path in executables
--disable-spinlocks : do not use spinlocks
--enable-debug : build with debugging symbols (-g)
--enable-profiling : build with profiling enabled
--enable-dtrace : build with DTrace support
--enable-depend : turn on automatic dependency tracking
--enable-cassert : enable assertion checks (for debugging)
--enable-thread-safety : make client libraries thread-safe
--enable-thread-safety-force : force thread-safety despite thread test failure
--disable-largefile : omit support for large files
--with-docdir=DIR : install the documentation in DIR [PREFIX/doc]
--without-docdir : do not install the documentation
--with-includes=DIRS : look for additional header files in DIRS
--with-libraries=DIRS : look for additional libraries in DIRS
--with-libs=DIRS : alternative spelling of --with-libraries
--with-pgport=PORTNUM : change default port number [5432]
--with-tcl : build Tcl modules (PL/Tcl)
--with-tclconfig=DIR : tclConfig.sh is in DIR
--with-perl : build Perl modules (PL/Perl)
--with-python : build Python modules (PL/Python)
--with-gssapi : build with GSSAPI support
--with-krb5 : build with Kerberos 5 support
--with-krb-srvnam=NAME : default service principal name in Kerberos [postgres]
--with-pam : build with PAM support
--with-ldap : build with LDAP support
--with-bonjour : build with Bonjour support
--with-openssl : build with OpenSSL support
--without-readline : do not use GNU Readline nor BSD Libedit for editing
--with-libedit-preferred : prefer BSD Libedit over GNU Readline
--with-ossp-uuid : use OSSP UUID library when building contrib/uuid-ossp
--with-libxml : build with XML support
--with-libxslt : use XSLT support when building contrib/xml2
--with-system-tzdata=DIR : use system time zone data in DIR
--without-zlib : do not use Zlib
--with-gnu-ld : assume the C compiler uses GNU ld [default=no]
PostgreSQL Installation Issue1: You may encounter the following error message while performing ./configure during postgreSQL installation.
# ./configure checking for -lreadline... no checking for -ledit... no configure: error: readline library not found If you have readline already installed, see config.log for details on the failure. It is possible the compiler isn't looking in the proper directory. Use --without-readline to disable readline support.
PostgreSQL Installation Solution1: Install the readline-devel and libtermcap-devel packages to solve the above issue.
# rpm -ivh libtermcap-devel-2.0.8-46.1.i386.rpm readline-devel-5.1-1.1.i386.rpm warning: libtermcap-devel-2.0.8-46.1.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159 Preparing... ########################################### [100%] 1:libtermcap-devel ########################################### [ 50%] 2:readline-devel ########################################### [100%]
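Before re-running ./configure, you can check whether the readline headers actually landed in the include path. A hedged sketch (the check_readline helper and the paths are illustrative, not part of the PostgreSQL build system):

```shell
# Hypothetical helper: report whether the readline development headers
# exist under a given include root (normally /usr/include).
check_readline() {
  if [ -e "$1/readline/readline.h" ]; then
    echo "readline headers found"
  else
    echo "readline headers missing"
  fi
}

check_readline /usr/include
```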
33. Magic SysRq key: Have you wondered what the SysRq key on your keyboard does? Here is one use for it. You can safely reboot Linux using the magic SysRq key, as explained here.
This is a guest post written by Lakshmanan G. If you are working on kernel development, or device drivers, or running code that could cause a kernel panic, the SysRq key will be very valuable. The magic SysRq key is a key combination in the Linux kernel which allows the user to perform various low-level commands regardless of the system's state. It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem. The key combination consists of Alt+SysRq+commandkey. On many systems the SysRq key is the Print Screen key. First, you need to enable the SysRq key, as shown below.
echo "1" > /proc/sys/kernel/sysrq
p - Print the current registers and flags to the console.
0-9 - Sets the console log level, controlling which kernel messages will be printed to your console.
f - Will call oom_kill to kill the process which uses the most memory.
h - Used to display help. Any key other than those listed above will also print help.

We can also trigger these commands by echoing the keys to the /proc/sysrq-trigger file. For example, to reboot a system you can perform the following.
echo "b" > /proc/sysrq-trigger
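The echo into /proc only lasts until the next reboot. To make the setting persistent, the usual approach is a kernel.sysrq = 1 line in /etc/sysctl.conf (sketched here against a scratch file; on a real system you would edit /etc/sysctl.conf as root and apply it with sysctl -p):

```shell
# Append the SysRq setting to a sysctl-style config file.
# Using a scratch file here; the real target is /etc/sysctl.conf.
conf="sysctl.conf.sample"
echo "kernel.sysrq = 1" >> "$conf"

# Confirm the line is present.
grep '^kernel.sysrq' "$conf"   # prints: kernel.sysrq = 1
```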
34. Wakeonlan Tutorial: Using wakeonlan (WOL), you can turn on remote servers where you don't have physical access to press the power button.
Wakeonlan (wol) enables you to switch on remote servers without physically accessing them. Wakeonlan sends magic packets to wake-on-LAN enabled ethernet adapters and motherboards to switch on remote computers. When you shut down a system by mistake instead of rebooting, you can use wakeonlan to power on the server remotely. Also, if you have a server that doesn't need to be up and running 24x7, you can turn the server off and on remotely anytime you want. This article gives a brief overview of Wake-on-LAN and instructions to set up the wakeonlan feature.
Overview of Wake-On-LAN
You can use wakeonlan when a machine is connected to the LAN and you know the MAC address of that machine. Your NIC should support the wakeonlan feature, and it should be enabled before the shutdown. In most cases, wakeonlan is enabled on the NIC by default. You need to send the magic packet from another machine connected to the same network (LAN), and you need root access to send the magic packet. The wakeonlan package should be installed on that machine. When the system goes down because of a power failure, you cannot switch on the machine using this facility the first time; but after the first boot you can use wakeonlan to turn it on if the server gets shut down for some reason. Wake-on-LAN is also referred to as wol.
If "Supports Wake-on" includes g, then support for the wol feature is enabled on the NIC card.
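The check can be scripted by parsing the Wake-on line of the ethtool output. A sketch against sample output (the two lines below are illustrative ethtool output, not captured from a real NIC):

```shell
# Illustrative ethtool output for eth0.
sample="Supports Wake-on: pumbg
Wake-on: g"

# Extract the current Wake-on setting ("g" = wake on magic packet).
wol=$(printf '%s\n' "$sample" | sed -n 's/^Wake-on: //p')

case "$wol" in
  *g*) echo "wake-on-lan enabled" ;;
  *)   echo "wake-on-lan disabled" ;;
esac
```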
Note: You should execute ethtool as root, else you may get following error message.
$ /sbin/ethtool eth0 Settings for eth0: Cannot get device settings: Operation not permitted Cannot get wake-on-lan settings: Operation not permitted Current message level: 0x000000ff (255) Cannot get link status: Operation not permitted
35. List hardware specs using lshw: ls+hw = lshw, which lists the hardware specs of your system.
How To Get Hardware Specs of Your System Using lshw Hardware Lister
by Ramesh Natarajan on December 22, 2008
This is a guest post written by SathiyaMoorthy. The lshw (Hardware Lister) command gives a comprehensive report about all hardware in your system. It displays detailed information about the manufacturer, serial number of the system, motherboard, CPU, RAM, PCI cards, disks, network card, etc. Using lshw, you can get information about the hardware without touching a screwdriver to open the server chassis. This is also very helpful when the server is located in a remote data center, where you don't have physical access to the server. In our previous article, we discussed how to display hardware information on Linux using the dmidecode command. In this article, let us review how to view the hardware specifications using the lshw command.
Download lshw
Download the latest version of lshw from Hardware Lister website. Extract the source code to the /usr/src as shown below.
# cd /usr/src
# wget http://ezix.org/software/files/lshw-B.02.13.tar.gz
# gzip -d lshw-B.02.13.tar.gz
# tar xvf lshw-B.02.13.tar
Note: To install the pre-compiled version, download it from Hardware Lister website.
Install lshw
Install lshw as shown below. This will install lshw in the /usr/sbin directory.
# make # make install make -C src install make[1]: Entering directory `/usr/src/lshw-B.02.13/src' make -C core all make[2]: Entering directory `/usr/src/lshw-B.02.13/src/core' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/usr/src/lshw-B.02.13/src/core' g++ -L./core/ -g -Wl,--as-needed -o lshw lshw.o -llshw -lresolv install -p -d -m 0755 ///usr/sbin install -p -m 0755 lshw ///usr/sbin install -p -d -m 0755 ///usr/share/man/man1 install -p -m 0644 lshw.1 ///usr/share/man/man1 install -p -d -m 0755 ///usr/share/lshw install -p -m 0644 pci.ids usb.ids oui.txt manuf.txt ///usr/share/lshw make[1]: Leaving directory `/usr/src/lshw-B.02.13/src'
Note: lshw must be run as root to get a full report. lshw will display a partial report with a warning message, as shown below, when you execute it as a non-root user.
jsmith@local-host ~> /usr/sbin/lshw WARNING: you should run this program as super-user.
lshw Classes
To get information about a specific piece of hardware, you can use the -class option. The following classes can be used with the -class option of the lshw command.
address bridge bus communication disk display generic input memory multimedia network power printer processor storage system tape volume
bus info: scsi@0:2.0.0 logical name: /dev/sda version: 516A size: 68GiB (73GB) capabilities: partitioned partitioned:dos configuration: ansiversion=2 signature=000e1213
36.View hardware spec using dmidecode: dmidecode command reads the system DMI table to display hardware and BIOS information of the server. Apart from getting current configuration of the system, you can also get information about maximum supported configuration of the system using dmidecode. For example, dmidecode gives both the current RAM on the system and the maximum RAM supported by the system.
The dmidecode command reads the system DMI table to display hardware and BIOS information of the server. Apart from getting the current configuration of the system, you can also get information about the maximum supported configuration using dmidecode. For example, dmidecode gives both the current RAM on the system and the maximum RAM supported by the system. This article provides an overview of dmidecode and a few practical examples of how to use the dmidecode command.
1. Overview of dmidecode
The Distributed Management Task Force maintains the DMI specification and the SMBIOS specification. The output of dmidecode contains several records from the DMI (Desktop Management Interface) table. Following is the record format of the dmidecode output of the DMI table.
Record Header: Handle {record id}, DMI type {dmi type id}, {record size} bytes Record Value: {multi line record value}
record id: Unique identifier for every record in the DMI table. dmi type id: Type of the record. i.e BIOS, Memory etc., record size: Size of the record in the DMI table. multi line record values: Multi line record value for that specific DMI type.
Get the total number of records in the DMI table as shown below:
# dmidecode | grep ^Handle | wc -l 56 (or) # dmidecode | grep structures 56 structures occupying 1977 bytes.
2. DMI Types
DMI Type id will give information about a particular hardware component of your system. Following command with type id 4 will get the information about CPU of the system.
# dmidecode -t 4 # dmidecode 2.9 SMBIOS 2.3 present. Handle 0x0400, DMI type 4, 35 bytes Processor Information Socket Designation: Processor 1 Type: Central Processor Family: Xeon Manufacturer: Intel ID: 29 0F 00 00 FF FB EB BF Signature: Type 0, Family 15, Model 2, Stepping 9 Flags: FPU (Floating-point unit on-chip) VME (Virtual mode extension) DE (Debugging extension) PSE (Page size extension) TSC (Time stamp counter) MSR (Model specific registers)
4 - Processor
5 - Memory Controller
6 - Memory Module
7 - Cache
8 - Port Connector
9 - System Slots
10 - On Board Devices
11 - OEM Strings
12 - System Configuration Options
13 - BIOS Language
14 - Group Associations
15 - System Event Log
16 - Physical Memory Array
17 - Memory Device
18 - 32-bit Memory Error
19 - Memory Array Mapped Address
20 - Memory Device Mapped Address
21 - Built-in Pointing Device
22 - Portable Battery
23 - System Reset
24 - Hardware Security
25 - System Power Controls
26 - Voltage Probe
27 - Cooling Device
28 - Temperature Probe
29 - Electrical Current Probe
30 - Out-of-band Remote Access
31 - Boot Integrity Services
32 - System Boot
33 - 64-bit Memory Error
34 - Management Device
35 - Management Device Component
36 - Management Device Threshold Data
37 - Memory Channel
38 - IPMI Device
39 - Power Supply
Instead of type_id, you can also pass the keyword to the -t option of the dmidecode command. Following are the available keywords.
Keyword Types -----------------------------bios 0, 13 system 1, 12, 15, 23, 32 baseboard 2, 10 chassis 3 processor 4 memory 5, 6, 16, 17 cache 7 connector 8 slot 9
For example, to get all the system baseboard related information, execute the following command, which will display type_ids 2 and 10.
# dmidecode -t baseboard # dmidecode 2.9 SMBIOS 2.3 present. Handle 0x0200, DMI type 2, 9 bytes Base Board Information Manufacturer: Dell Computer Corporation Product Name: 123456
Version: A05 Serial Number: ..CN123456789098. Handle 0x0A00, DMI type 10, 14 bytes On Board Device 1 Information Type: SCSI Controller Status: Enabled Description: LSI Logic 53C1030 Ultra 320 SCSI On Board Device 2 Information Type: SCSI Controller Status: Enabled Description: LSI Logic 53C1030 Ultra 320 SCSI On Board Device 3 Information Type: Video Status: Enabled Description: ATI Rage XL PCI Video On Board Device 4 Information Type: Ethernet Status: Enabled Description: Broadcom Gigabit Ethernet 1 On Board Device 5 Information Type: Ethernet Status: Enabled Description: Broadcom Gigabit Ethernet 2
How much memory can I expand to? From /proc/meminfo you can find out the total current memory of your system as shown below.
# grep MemTotal /proc/meminfo MemTotal: 1034644 kB
In this example, the system has 1GB of RAM. Is this 1 x 1GB, 2 x 512MB, or 4 x 256MB? This can be figured out by passing type id 17 to the dmidecode command, as shown below. Please note in the example below: to expand up to the 8GB maximum of RAM, you need to remove the existing 512MB modules from slots 1 and 2, and use 2GB RAM modules in all 4 memory slots.
# dmidecode -t 17 # dmidecode 2.9 SMBIOS 2.3 present. Handle 0x1100, DMI type 17, 23 bytes
Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 512 MB [Note: Slot1 has 512 MB RAM] Form Factor: DIMM Set: 1 Locator: DIMM_1A Bank Locator: Not Specified Type: DDR Type Detail: Synchronous Speed: 266 MHz (3.8 ns) Handle 0x1101, DMI type 17, 23 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 512 MB [Note: Slot2 has 512 MB RAM] Form Factor: DIMM Set: 1 Locator: DIMM_1B Bank Locator: Not Specified Type: DDR Type Detail: Synchronous Speed: 266 MHz (3.8 ns) Handle 0x1102, DMI type 17, 23 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: No Module Installed [Note: Slot3 is empty] Form Factor: DIMM Set: 2 Locator: DIMM_2A Bank Locator: Not Specified Type: DDR Type Detail: Synchronous Speed: 266 MHz (3.8 ns) Handle 0x1103, DMI type 17, 23 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: No Module Installed [Note: Slot4 is empty] Form Factor: DIMM Set: 2 Locator: DIMM_2B Bank Locator: Not Specified Type: DDR Type Detail: Synchronous Speed: 266 MHz (3.8 ns)
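The per-slot Size: lines above can be totalled to cross-check the /proc/meminfo figure. A sketch that sums them with awk (the sample lines mirror the example system: two 512 MB modules and two empty slots):

```shell
# "Size:" lines as printed by dmidecode -t 17 (sample data matching
# the example system above, not live dmidecode output).
sample="Size: 512 MB
Size: 512 MB
Size: No Module Installed
Size: No Module Installed"

# Sum the populated slots; empty slots carry no numeric size.
printf '%s\n' "$sample" |
  awk '/Size: [0-9]+ MB/ { total += $2 } END { print total " MB installed" }'   # prints: 1024 MB installed
```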
5. View Manufacturer, Model and Serial number of the equipment using dmidecode
You can get information about the make, model and serial number of the equipment as shown below:
# dmidecode -t system # dmidecode 2.9 SMBIOS 2.3 present. Handle 0x0100, DMI type 1, 25 bytes System Information Manufacturer: Dell Computer Corporation Product Name: PowerEdge 1750 Version: Not Specified Serial Number: 1234567 UUID: 4123454C-4123-1123-8123-12345603431 Wake-up Type: Power Switch
Handle 0x0C00, DMI type 12, 5 bytes System Configuration Options Option 1: NVRAM_CLR: Clear user settable NVRAM areas and set defaults Option 2: PASSWD: Close to enable password Handle 0x2000, DMI type 32, 11 bytes System Boot Information Status: No errors detected
37. Use the support effectively: Companies spend a lot of cash on support mainly for two reasons: 1) to get help from vendors to fix critical production issues; 2) to keep up-to-date with the latest versions of the software and security patches released by the vendors. In this article, I've given 10 practical tips for DBAs, sysadmins and developers to use their hardware and software support effectively.
Companies purchase support for most of their enterprise hardware (servers, switches, routers, firewalls, etc.) and software (databases, OS, applications, frameworks, etc.). They spend a lot of cash on support mainly for two reasons: 1) to get help from vendors to fix critical production issues; 2) to keep up-to-date with the latest versions of the software and security patches released by the vendors. In this article, I've given 10 practical tips for DBAs, sysadmins and developers to use their hardware and software support effectively.
ticket from their website, call the support to follow-up and make sure an engineer is getting assigned to it immediately. If they dont have a support website, ask them whether you can create a ticket by sending an email.
How To Install Or Upgrade LAMP: Linux, Apache, MySQL and PHP Stack Using Yum
by Ramesh Natarajan on September 15, 2008
Previously we discussed how to install Apache and PHP from source. Installing the LAMP stack from source gives you full control to configure different parameters. Installing the LAMP stack using yum is very easy and takes only minutes. This is a good option for beginners who don't feel comfortable installing from source. Installing the LAMP stack using yum is also a good choice if you want to keep things simple and just use the default configuration.
# rpm -qa | grep httpd [Note: If the above command did not return anything, install apache as shown below] # yum install httpd
Enable httpd service to start automatically during system startup using chkconfig. Start the Apache as shown below.
# chkconfig httpd on # service httpd start Starting httpd: [ OK ]
Check whether latest version of Apache is available for installation using yum.
# yum check-update httpd Loaded plugins: refresh-packagekit httpd.i386 2.2.9-1.fc9 updates [Note: This indicates that the latest Apache version 2.2.9 is available for upgrade]
# yum update httpd
Package          Arch      Version          Repository    Size
=============================================================================
Updating:
 httpd           i386      2.2.9-1.fc9      updates       975 k
 httpd-tools     i386      2.2.9-1.fc9      updates        69 k

Transaction Summary
=============================================================================
Install   0 Package(s)
Update    2 Package(s)
Remove    0 Package(s)

Total download size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): httpd-tools-2.2.9-1.fc9.i386.rpm         |  69 kB  00:00
(2/2): httpd-2.2.9-1.fc9.i386.rpm               | 975 kB  00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating  : httpd-tools
Updating  : httpd
Cleanup   : httpd
Cleanup   : httpd-tools
Yum is smart enough to identify all the dependencies and install them automatically. For example, while installing mysql-server using yum, it also automatically installs the dependent mysql-libs, perl-DBI, mysql and perl-DBD-MySQL packages as shown below.
# yum install mysql-server
--> Processing Dependency: mysql = 5.0.51a-1.fc9 for package: mysql-server
--> Processing Dependency: libmysqlclient.so.15 for package: mysql-server
--> Processing Dependency: perl(DBI) for package: mysql-server
--> Processing Dependency: perl-DBD-MySQL for package: mysql-server
--> Processing Dependency: libmysqlclient_r.so.15 for package: mysql-server
--> Running transaction check
---> Package mysql.i386 0:5.0.51a-1.fc9 set to be updated
---> Package mysql-libs.i386 0:5.0.51a-1.fc9 set to be updated
---> Package perl-DBD-MySQL.i386 0:4.005-8.fc9 set to be updated
---> Package perl-DBI.i386 0:1.607-1.fc9 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved
=============================================================================
Package           Arch    Version         Repository   Size
=============================================================================
Installing:
 mysql-server     i386    5.0.51a-1.fc9   fedora       9.8 M
Installing for dependencies:
 mysql            i386    5.0.51a-1.fc9   fedora       2.9 M
 mysql-libs       i386    5.0.51a-1.fc9   fedora       1.5 M
 perl-DBD-MySQL   i386    4.005-8.fc9     fedora       165 k
 perl-DBI         i386    1.607-1.fc9     updates      776 k

Transaction Summary
=============================================================================
Install   5 Package(s)
Update    0 Package(s)
Remove    0 Package(s)

Total download size: 15 M
Is this ok [y/N]: y
Downloading Packages:
(1/5): perl-DBD-MySQL-4.005-8.fc9.i386.rpm   | 165 kB
(2/5): perl-DBI-1.607-1.fc9.i386.rpm         | 776 kB
(3/5): mysql-libs-5.0.51a-1.fc9.i386.rpm     | 1.5 MB
(4/5): mysql-5.0.51a-1.fc9.i386.rpm          | 2.9 MB
(5/5): mysql-server-5.0.51a-1.fc9.i386.rpm   | 9.8 MB
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : mysql-libs
  Installing : perl-DBI
  Installing : mysql
  Installing : perl-DBD-MySQL
  Installing : mysql-server
Installed: mysql-server.i386 0:5.0.51a-1.fc9
Dependency Installed: mysql.i386 0:5.0.51a-1.fc9  mysql-libs.i386 0:5.0.51a-1.fc9
  perl-DBD-MySQL.i386 0:4.005-8.fc9  perl-DBI.i386 0:1.607-1.fc9
Complete!
mysql-5.0.51a-1.fc9.i386
# mysql -V
mysql  Ver 14.12 Distrib 5.0.51a, for redhat-linux-gnu (i386) using readline 5.0
The first time you start mysqld, it displays an additional informational message indicating the post-install configuration to perform, as shown below.
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h dev-db password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
highly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com
Starting MySQL:                                            [  OK  ]
As the message above indicates, you need to assign a password to the MySQL root account. Execute the mysql_secure_installation script, which performs the following activities:
- Assign the root password
- Remove the anonymous user
- Disallow root login from remote machines
- Remove the default sample test database
# /usr/bin/mysql_secure_installation
Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!
Check whether a newer version of MySQL is available for installation using yum.
# yum check-update mysql-server
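If the check reports a newer version, the upgrade step itself is not shown in the captured output; a sketch of it, following the same yum pattern used elsewhere in this article, would be:

```
# yum update mysql-server
```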
Install PHP using yum as shown below.

# yum install php
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package php.i386 0:5.2.6-2.fc9 set to be updated
--> Processing Dependency: php-common = 5.2.6-2.fc9 for package: php
--> Processing Dependency: php-cli = 5.2.6-2.fc9 for package: php
--> Running transaction check
---> Package php-common.i386 0:5.2.6-2.fc9 set to be updated
---> Package php-cli.i386 0:5.2.6-2.fc9 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved
=============================================================================
Package        Arch    Version       Repository   Size
=============================================================================
Installing:
 php           i386    5.2.6-2.fc9   updates      1.2 M
Installing for dependencies:
 php-cli       i386    5.2.6-2.fc9   updates      2.3 M
 php-common    i386    5.2.6-2.fc9   updates      228 k

Transaction Summary
=============================================================================
Install   3 Package(s)
Update    0 Package(s)
Remove    0 Package(s)

Total download size: 3.8 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): php-common-5.2.6-2.fc9.i386.rpm
(2/3): php-5.2.6-2.fc9.i386.rpm
(3/3): php-cli-5.2.6-2.fc9.i386.rpm
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : php-common   [1/3]
  Installing : php-cli      [2/3]
  Installing : php          [3/3]
Installed: php.i386 0:5.2.6-2.fc9
Dependency Installed: php-cli.i386 0:5.2.6-2.fc9  php-common.i386 0:5.2.6-2.fc9
Complete!
If you need additional PHP modules, install them using yum as shown below.
# yum install php-common php-mbstring php-mcrypt php-devel php-xml php-gd
Check whether a newer version of PHP is available for installation using yum.
# yum check-update php
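As with the other packages, if a newer version is reported, upgrading PHP itself would be done as follows (a sketch following the same yum pattern):

```
# yum update php
```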
Upgrade any additional PHP modules that you've installed using yum.
# yum check-update php-common php-mbstring php-mcrypt php-devel php-xml php-gd
# yum update php-common php-mbstring php-mcrypt php-devel php-xml php-gd
Invoke test.php from the browser at http://{lamp-server-ip}/test.php , which will display all PHP configuration information and the installed modules. 39. Template to track your hardware assets: If you are managing more than one piece of equipment in your organization, it is very important to document and track ALL information about your servers effectively. In this article, I have listed 36 attributes that need to be tracked for your equipment, with an explanation of why each needs to be tracked. I have also provided a spreadsheet template with these fields that will give you a jumpstart.
Before getting into the details of what needs to be tracked, let us look at a few reasons why you should document ALL your equipment. Identifying WHAT needs to be tracked is far more important than HOW you are tracking it. Don't get trapped into researching the best available asset tracking software. Keep it simple and use a spreadsheet for tracking. Once you have documented everything, you can always find a software package later and export this data to it. Sysadmins hate to document anything. They would rather spend time exploring cool new technology than documenting their current hardware and environment. But a seasoned sysadmin knows that spending time documenting the details about the equipment is going to save a lot of time in the future when there is a problem. Never assume anything. When it comes to documentation, the more details you can add, the better.
Don't create the document because your boss is insisting on it. Instead, create it because you truly believe it will add value to you and your team. If you document without understanding or believing in the purpose, you will essentially leave out a lot of critical details, which will eventually make the document worthless. Once you've captured the attributes mentioned below for ALL your servers, switches, firewalls and other equipment, you can use this master list to track any future enterprise-wide implementations or changes. For example, if you are rolling out a new backup strategy throughout your enterprise, add a new column called backup and mark it Yes or No to track whether that specific action has been implemented on that particular equipment. I have arranged the 36 items into 9 different groups and provided a sample value next to the field name within parentheses. These fields and groupings are just guidelines. If required, modify them accordingly to track additional attributes specific to your environment.
Equipment Detail
(1) Description (Production CRM DB Server): This field should explain the purpose of the equipment. Even a non-IT person should be able to identify the equipment based on this description.
(2) Host Name (prod-crm-db-srv): The real host name of the equipment as defined at the OS level.
(3) Department (Sales): Which department does this equipment belong to?
(4) Manufacturer (DELL): Manufacturer of the equipment.
(5) Model (PowerEdge 2950): Model of the equipment.
(6) Status (Active): The current status of the equipment. Use this field to identify whether the equipment is in one of the following states: Active (currently in use), Retired (old equipment, not used anymore), Available (old/new equipment, ready and available for use).
(7) Category (Server): I primarily use this to track the type of equipment. The value in this field could be one of the following, depending on the equipment: Server, Switch, Power Circuit, Router, Firewall, etc.
Tag/Serial#
For tracking purposes, different vendors use different names for serial numbers, e.g. Serial Number, Part Number, Asset Number, Service Tag, Express Code, etc. For example, DELL tracks their equipment using Service Tag and Express Code. So, if the majority of the equipment in your organization is from DELL, it makes sense to have separate columns for Service Tag and Express Code.
(8) Serial Number
(9) Part Number
(10) Service Tag
(11) Express Code
(12) Company Asset Tag: Every organization may have its own way of tracking systems, using a bar code or a custom asset tracking number. Use this field to track the equipment using that code.
Location
(13) Physical Location (Los Angeles): Use this field to specify the physical location of the server. If you have multiple data centers in different cities, use the city name to track it.
(14) Cage/Room #: The cage or room number where this equipment is located.
(15) Rack #: If there are multiple racks inside your datacenter, specify the rack # where the equipment is located. If your racks don't have numbers, create your own numbering scheme.
(16) Rack Position: This indicates the exact location of the server within the rack. For example, the server at the bottom of the rack has rack position #1 and the one above it is #2.
Network
(17) Private IP (192.168.100.1): Specify the internal IP address of the equipment.
(18) Public IP: Specify the external IP address of the equipment.
(19) NIC (GB1, Slot1/Port1): Tracking this information is very helpful when someone accidentally pulls a cable from the server (if this has never happened to you, it is only a matter of time before it does). Using this field, you will know exactly where to plug the cable back in. If the server has more than one network connection, specify all the NICs as a comma-separated value. In this example (GB1, Slot1/Port1), the server has two ethernet cables connected: the first to the on-board NIC marked GB1, and the second to Port #1 of the NIC card inserted in PCI Slot #1. Even when the server has only one ethernet cable connected, specify the port to which it is connected. For example, most DELL servers come with two on-board NICs labeled GB1 and GB2, so you should know which NIC your ethernet cable is connected to.
(20) Switch/Port (Switch1/Port10, Switch4/Port15): Using the NIC field above, you've tracked the exact port where one end of the ethernet cable is connected on the server. Now you should track where the other end of the cable is connected. In this example, the cable connected to the server on GB1 goes to Port 10 on Switch 1, and the cable connected to Port #1 of PCI Slot #1 goes to Port 15 on Switch 4.
(21) Nagios Monitored? (Yes): Use this field to indicate whether this equipment is monitored through any monitoring software.
Storage
(22) SAN/NAS Connected? (Yes): Use this field to track whether a particular server is connected to external storage.
(23) Total Drive Count (4): This indicates the total number of internal drives on the server, which can come in very handy for capacity management. For example, some DELL servers come with only 6 slots for internal hard drives. In this example, just by looking at the document, we know that there are 4 disk drives in the server and there is room to add 2 more.
OS Detail
(24) OS (Linux): Use this field to track the OS running on the equipment, e.g. Linux, Windows, Cisco IOS.
(25) OS Version (Red Hat Enterprise Linux AS release 4 (Nahant Update 5)): The exact version of the OS.
Warranty
(26) Warranty Start Date
(27) Warranty End Date
Additional Information
(35) URL: If this is a web server, give the URL to access the web application running on the system. If this is a switch or router, specify the admin URL.
(36) Notes: Enter additional notes about the equipment that don't fit under any of the above fields.
It may be very tempting to add username and password fields to this spreadsheet. For security reasons, never use this spreadsheet to store the root or administrator password of the equipment. Asset Tracking Excel Template 1.0: This Excel template contains all 36 fields mentioned above to give you a jumpstart on tracking the equipment in your enterprise. If you convert this spreadsheet to other formats used by different tools, send it to me and I'll add it here and give you credit. I hope you find this article helpful. Forward it to the appropriate person in your organization who may benefit from tracking equipment effectively. Also, if you think I've missed any attribute in the above list, please let me know. 40. Disable SELinux: If you don't understand how SELinux works and the fundamental details of how to configure it, keeping it enabled will cause a lot of issues. Until you understand the implementation details of SELinux, you may want to disable it to avoid unnecessary issues, as explained here.
On some Linux distributions SELinux is enabled by default, which may cause some unwanted issues if you don't understand how SELinux works and the fundamental details of how to configure it. I strongly recommend that you understand SELinux and implement it in your environment. But until you understand its implementation details, you may want to disable it to avoid unnecessary issues. To disable SELinux you can use any one of the 4 different methods mentioned in this article. SELinux enforces security policies, including the mandatory access controls defined by the US Department of Defense, using the Linux Security Modules (LSM) framework in the Linux kernel. Every file and process in the system is tagged with specific labels that are used by SELinux. You can view those labels using ls -Z as shown below.
# ls -Z /etc/
-rw-r--r--  root root  system_u:object_r:etc_t:s0                 a2ps.cfg
-rw-r--r--  root root  system_u:object_r:adjtime_t:s0             adjtime
-rw-r--r--  root root  system_u:object_r:etc_aliases_t:s0         aliases
drwxr-x---  root root  system_u:object_r:auditd_etc_t:s0          audit
drwxr-xr-x  root root  system_u:object_r:etc_runtime_t:s0         blkid
drwxr-xr-x  root root  system_u:object_r:bluetooth_conf_t:s0      bluetooth
drwx------  root root  system_u:object_r:system_cron_spool_t:s0   cron.d
-rw-rw-r--  root disk  system_u:object_r:amanda_dumpdates_t:s0    dumpdates
You can also use the setenforce command, as shown below, to change the SELinux mode at runtime. Possible parameters to setenforce are: Enforcing, Permissive, 1 (enforcing) or 0 (permissive).
# setenforce 0
Following are the possible values for the SELINUX variable in the /etc/selinux/config file:
enforcing - The security policy is always enforced.
permissive - This simulates the enforcing policy by only printing warning messages, without actually enforcing SELinux. This is good for first seeing how SELinux works and later figuring out what policies should be enforced.
disabled - Completely disables SELinux.
Following are the possible values for the SELINUXTYPE variable in the /etc/selinux/config file. This indicates the type of policy used by SELinux:
targeted - This policy protects only specific targeted network daemons.
strict - This is for maximum SELinux protection.
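Putting the two variables together, a sketch of /etc/selinux/config with SELinux disabled permanently (the change takes effect after a reboot; the comment lines are illustrative):

```
# This file controls the state of SELinux on the system.
# SELINUX= can be: enforcing, permissive, or disabled
SELINUX=disabled
# SELINUXTYPE= can be: targeted or strict
SELINUXTYPE=targeted
```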
Set an SELinux boolean value using the setsebool command as shown below. Make sure to restart the HTTP service after this change.
# setsebool httpd_disable_trans 1
# service httpd restart
41. Install PHP5 from source: This is a step-by-step guide to installing PHP5 from source in a UNIX environment.
All Linux distributions come with PHP. However, it is recommended to download the latest PHP source code, and compile and install it yourself. This makes it easier to upgrade PHP on an ongoing basis as soon as a new patch or release is available for download from PHP. This article explains how to install PHP5 from source on Linux.
1. Prerequisites
The Apache web server should already be installed. Refer to my previous post on how to install Apache 2 on Linux. If you are planning to use PHP with MySQL, you should already have MySQL installed; I wrote about how to install MySQL on Linux.
2. Download PHP
Download the latest source code from the PHP download page. The current stable release is 5.2.6. Move the source to /usr/local/src and extract it as shown below.
# bzip2 -d php-5.2.6.tar.bz2
# tar xvf php-5.2.6.tar
3. Install PHP
View all configuration options available for PHP using ./configure --help (two hyphens in front of help). The most commonly used option is --prefix={install-dir-name}, which installs PHP in a user-defined directory.
# cd php-5.2.6
# ./configure --help
In the following example, PHP will be compiled and installed under the default location /usr/local/lib with Apache configuration and MySQL support.
# ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
# make
# make install
# cp php.ini-dist /usr/local/lib/php.ini
Make sure httpd.conf has the following line, which gets automatically inserted during the PHP installation process.
LoadModule php5_module modules/libphp5.so
Restart Apache:
# /usr/local/apache2/bin/apachectl restart
Go to http://local-host/test.php , which will show detailed information about all the PHP configuration options and the PHP modules installed on the system.
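For reference, test.php is just a one-line phpinfo() script. A minimal sketch that writes it to /tmp; the file then needs to be copied into your Apache DocumentRoot (e.g. /usr/local/apache2/htdocs for a source-built Apache; that path is an assumption):

```shell
# Create a minimal test.php that dumps the full PHP configuration.
# Written to /tmp here; copy it to your Apache DocumentRoot afterwards.
cat > /tmp/test.php <<'EOF'
<?php phpinfo(); ?>
EOF
```

Because phpinfo() reveals the full PHP configuration, loaded modules and environment, remove test.php from production servers once the installation is verified.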
Install the libxml2-devel and zlib-devel packages as shown below to fix this issue.
# rpm -ivh /home/downloads/linux-iso/libxml2-devel-2.6.26-2.1.2.0.1.i386.rpm \
  /home/downloads/linux-iso/zlib-devel-1.2.3-3.i386.rpm
Preparing...          ########################################### [100%]
   1:zlib-devel       ########################################### [ 50%]
   2:libxml2-devel    ########################################### [100%]
Error 2: configure: error: Cannot find MySQL header files.
While performing ./configure during the PHP installation, you may get the following error:
# ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
checking for MySQL UNIX socket location... /var/lib/mysql/mysql.sock
configure: error: Cannot find MySQL header files under yes.
Note that the MySQL client library is not bundled anymore!
42. Install MySQL from source: This is a step-by-step guide to installing MySQL from source in a UNIX environment.
Most Linux distros come with MySQL. If you want to use MySQL, my recommendation is to download the latest version of MySQL and install it yourself. Later you can upgrade it to the latest version when it becomes available. In this article, I will explain how to install the latest free community edition of MySQL on the Linux platform.
2. Remove the existing default MySQL that came with the Linux distro
Do not perform this on a system where the MySQL database is being used by an application.
[local-host]# rpm -qa | grep -i mysql
mysql-5.0.22-2.1.0.1
mysqlclient10-3.23.58-4.RHEL4.1
[local-host]# rpm -e mysql --nodeps
warning: /etc/my.cnf saved as /etc/my.cnf.rpmsave
[local-host]# rpm -e mysqlclient10
This will also display the following output and start the MySQL daemon automatically.
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h medica2 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available at http://www.mysql.com/
Support MySQL by buying support/licenses from http://shop.mysql.com/

Starting MySQL.[ OK ]
Giving mysqld 2 seconds to start
Install the header files and libraries that are part of the MySQL-devel packages.
[local-host]# rpm -ivh MySQL-devel-community-5.1.25-0.rhel5.i386.rpm
Preparing...                  ########################################### [100%]
   1:MySQL-devel-community    ########################################### [100%]
Note: When I was compiling PHP with the MySQL option from source on this Linux system, it failed with the following error. Installing the MySQL-devel-community package fixed the problem and allowed the PHP installation from source to proceed.
configure: error: Cannot find MySQL header files under yes. Note that the MySQL client library is not bundled anymore!
The best option is to run the mysql_secure_installation script, which takes care of all the typical security-related items on the MySQL server, as shown below. At a high level, it does the following:
- Change the root password
- Remove the anonymous user
- Disallow root login from remote machines
- Remove the default sample test database
[local-host]# /usr/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!
Connect to the MySQL database as the root user and make sure the connection is successful.
[local-host]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 13
Server version: 5.1.25-rc-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>
43. Launch Linux clients on Windows: If you are using an SSH client to connect to a Linux server from your Windows laptop, sometimes it may be necessary to launch a UI application on the remote Linux server but display the UI on the Windows laptop. Cygwin can be used to install software on Linux from Windows and to launch Linux X client software on Windows.
If you are using an SSH client to connect to a Linux server from your Windows laptop, sometimes it may be necessary to launch a UI application on the remote Linux server but display the UI on the Windows laptop. Following are two typical reasons to do this: 1. Install software on Linux from Windows: to launch a UI-based installer to install software on a remote Linux server from a Windows laptop. For example, a DBA might want to install Oracle on a Linux server where only an SSH connection to the remote server is available, and not the console. 2. Launch Linux X client software on Windows: to display X client software (for example, xclock) located on your remote Linux server on the Windows laptop. Cygwin can be used to perform the above activities. The following 15 steps explain how to install Cygwin and launch software installers on Linux from Windows. Go to Cygwin and download setup.exe, launch it on Windows, and follow the steps mentioned below. 1. Welcome screen. Click next on the Cygwin installation welcome screen.
3. Choose Installation directory. I selected C:\cygwin as shown below. This is the location where the Cygwin software will be installed on the Windows.
4. Select Local Package Install directory. This is the directory where the installation files will be downloaded and stored.
5. Select Connection Type. If you are connected to internet via proxy, enter the information. If not, select Direct Connection.
6. Choose a download site. You can either choose a download site that is closer to you or leave the default selection.
7. Download Progress. This screen will display the progress of the download.
8. Select Packages to install. I recommend that you leave the default selection here.
9. Installation Progress. This screen will display the progress of the installation.
11. Start the Cygwin Bash Shell on Windows. Click the Cygwin icon on the desktop, or click Start -> All Programs -> Cygwin -> Cygwin Bash Shell, which will display the Cygwin bash shell window.
12. Start the X Server on Windows. From the Cygwin bash shell, type startx to start the X server as shown below. Once the X server is started, leave this window open and do not close it.
13. Xterm window: startx from the above step will open a new xterm window automatically as shown below.
14. SSH to the remote Linux host from the Xterm window as shown below. Please note that you should pass the -Y parameter to ssh. -Y parameter enables trusted X11 forwarding.
jsmith@windows-laptop ~
$ ssh -Y -l jsmith remote-host       <This is from the xterm on windows laptop>
jsmith@remotehost's password:
Warning: No xauth data; using fake authentication data for X11 forwarding.
Last login: Thu Jun 12 22:36:04 2008 from 192.168.1.102
/usr/bin/xauth:  creating new authority file /home/jsmith/.Xauthority

[remote-host]$ xclock &              <Note that you are starting xclock on remote linux server>
[1] 12593
[remote-host]$
15. xclock on windows laptop. From the Linux host, launch the xclock software as shown above, which will display the xclock on the windows laptop as shown below.
Use the same method explained above to launch any software installer on Linux (for e.g. Oracle database installer) and get it displayed on the Windows laptop.
44. IPCS: IPC allows processes to communicate with each other. Processes can also communicate by sharing a file accessible to both; they can open and read/write the file, but this requires a lot of I/O operations, which consumes time. This article explains the different types of IPC and provides 10 ipcs command examples.
IPC stands for Inter-Process Communication. This technique allows processes to communicate with each other. Since each process has its own address space and unique user space, how do processes communicate with each other? The answer is the kernel, the heart of the Linux operating system, which has access to the whole memory. So we can request the kernel to allocate a space that can be used for communication between processes. Processes can also communicate by sharing a file accessible to both: they can open and read/write the file, but this requires a lot of I/O operations, which consumes time.
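As a quick hands-on sketch (not part of the original article; ipcmk and ipcrm are assumed to be available from util-linux), you can create a throwaway message queue, see it appear in the ipcs listing, and remove it:

```shell
# Create a System V message queue; ipcmk prints "Message queue id: <id>"
out=$(ipcmk -Q)
id=${out##*: }

# The new queue now shows up in the message-queue listing
ipcs -q | grep -w "$id"

# Remove the queue again
ipcrm -q "$id"
```

The same pattern works for semaphores (ipcmk -S, ipcrm -s) and shared memory segments (ipcmk -M, ipcrm -m).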
owner      perms      used-bytes   messages
root       644        0            0
Each IPC facility has a unique key and identifier, which are used to identify the facility.
Option -i with -q provides information about a particular message queue. Option -i with -s provides semaphore details. Option -i with -m provides details about a shared memory.
# ipcs -m -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 17179869184
min seg size (bytes) = 1
The above command gives the limits for shared memory. -l can be combined with -q and -s to view the limits for message queues and semaphores respectively. The single option -l gives the limits for all three IPC facilities.
# ipcs -l
IPCS Example 7. List Creator and Owner Details for IPC Facility
The ipcs -c option lists the creator user ID and group ID, and the owner user ID and group ID. This option can be combined with -m, -s and -q to view the creator details for a specific IPC facility.
# ipcs -m -c

------ Shared Memory Segment Creators/Owners --------
shmid        perms     cuid      cgid       uid       gid
1056800768   660       oracle    oinstall   oracle    oinstall
323158020    664       root      root       root      root
325713925    666       root      root       root      root
45. Logical Volume Manager: Using LVM we can create logical partitions that span one or more physical hard drives. You can create and manage LVM using the vgcreate, lvcreate and lvextend lvm2 commands as shown here.
How To Create LVM Using vgcreate, lvcreate, and lvextend lvm2 Commands
by Balakrishnan Mariyappan on August 5, 2010
LVM stands for Logical Volume Manager. With LVM, we can create logical partitions that span one or more physical hard drives. First, the hard drives are divided into physical volumes, then those physical volumes are combined together to create the volume group, and finally the logical volumes are created from the volume group. The LVM commands listed in this article are used on the Ubuntu distribution, but they are the same for other Linux distributions. Before we start, install the lvm2 package as shown below.
$ sudo apt-get install lvm2
To create an LVM, we need to run through the following steps:
1. Select the physical storage devices for LVM
2. Create the volume group from the physical volumes
3. Create logical volumes from the volume group
Select the Physical Storage Devices for LVM Use pvcreate, pvscan, pvdisplay Commands
In this step, we need to choose the physical volumes that will be used to create the LVM. We can create the physical volumes using pvcreate command as shown below.
$ sudo pvcreate /dev/sda6 /dev/sda7 Physical volume "/dev/sda6" successfully created Physical volume "/dev/sda7" successfully created
As shown above two physical volumes are created /dev/sda6 and /dev/sda7. If the physical volumes are already created, you can view them using the pvscan command as shown below.
$ sudo pvscan PV /dev/sda6 lvm2 [1.86 GB] PV /dev/sda7 lvm2 [1.86 GB] Total: 2 [3.72 GB] / in use: 0 [0 ] / in no VG: 2 [3.72 GB]
You can view the list of physical volumes with attributes like size, physical extent size, total physical extent size, the free space, etc., using pvdisplay command as shown below.
$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda6
  VG Name
  PV Size               1.86 GB / not usable 2.12 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              476
  Free PE               456
  Allocated PE          20
  PV UUID               m67TXf-EY6w-6LuX-NNB6-kU4L-wnk8-NjjZfv

  --- Physical volume ---
  PV Name               /dev/sda7
  VG Name
  PV Size               1.86 GB / not usable 2.12 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              476
  Free PE               476
  Allocated PE          0
  PV UUID               b031x0-6rej-BcBu-bE2C-eCXG-jObu-0Boo0x
Note: PE stands for Physical Extents, which are nothing but equal-sized chunks. The default size of an extent is 4MB. LVM processes the storage in terms of extents. We can also change the extent size (from the default 4MB) using the -s flag. The vgdisplay command lists the created volume groups.
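A quick arithmetic check makes the extent numbers above concrete: 476 extents at the default 4096 KB each should come to roughly the 1.86 GB PV size that pvdisplay reports.

```shell
# Back-of-envelope check of the pvdisplay output: Total PE * PE Size
# should approximate the PV Size (minus the "not usable" remainder).
total_pe=476
pe_kb=4096
pv_kb=$(( total_pe * pe_kb ))
echo "$pv_kb KB"    # 1949696 KB, i.e. about 1.86 GB
```
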
$ sudo vgdisplay
  --- Volume group ---
  VG Name               vol_grp1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               3.72 GB
  PE Size               4.00 MB
  Total PE              952
  Alloc PE / Size       0 / 0
  Free  PE / Size       952 / 3.72 GB
  VG UUID               Kk1ufB-rT15-bSWe-5270-KDfZ-shUX-FUYBvR
Use the lvdisplay command as shown below to view the available logical volumes with their attributes.
$ sudo lvdisplay
  --- Logical volume ---
  LV Name                /dev/vol_grp1/logical_vol1
  VG Name                vol_grp1
  LV UUID                ap8sZ2-WqE1-6401-Kupm-DbnO-2P7g-x1HwtQ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                80.00 MB
  Current LE             20
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
After creating the appropriate filesystem on the logical volume, it is ready to use for storage.
$ sudo mkfs.ext3 /dev/vol_grp1/logical_vol1
LVM resize: Change the size of the logical volumes Use lvextend Command
We can extend the size of a logical volume after creating it by using the lvextend utility as shown below. This changes the size of the logical volume from 80MB to 100MB.
$ sudo lvextend -L100 /dev/vol_grp1/logical_vol1 Extending logical volume logical_vol1 to 100.00 MB Logical volume logical_vol1 successfully resized
We can also add additional size to a specific logical volume as shown below.
$ sudo lvextend -L+100 /dev/vol_grp1/logical_vol1 Extending logical volume logical_vol1 to 200.00 MB Logical volume logical_vol1 successfully resized
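One caveat worth adding (not covered in the original article): lvextend grows only the block device; the ext3 filesystem on it must be grown separately, for example with resize2fs. Shown as an echoed dry run, since the real commands need root and an actual LVM setup.

```shell
# After lvextend, the filesystem still has its old size until it is
# resized to match the enlarged logical volume. Dry run via echo.
LV=/dev/vol_grp1/logical_vol1      # the LV path from the example above
echo "+ lvextend -L+100M $LV"      # grow the logical volume by 100MB
echo "+ resize2fs $LV"             # then grow the ext3 filesystem to match
```
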
46. 15 Tcpdump examples: tcpdump is a network packet analyzer. tcpdump allows us to save the packets that are captured, so that we can use them for future analysis. The saved file can be viewed by the same tcpdump command. We can also use open source software like Wireshark to read the tcpdump pcap files.
The tcpdump command is also called a packet analyzer. The tcpdump command works on most flavors of the UNIX operating system. tcpdump allows us to save the packets that are captured, so that we can use them for future analysis. The saved file can be viewed by the same tcpdump command. We can also use open source software like Wireshark to read the tcpdump pcap files. In this tcpdump tutorial, let us discuss some practical examples of how to use the tcpdump command.
In this example, tcpdump captures all the packets flowing on the interface eth1 and displays them on the standard output. Note: The editcap utility is used to select or remove specific packets from a dump file and translate them into a given format.
The above tcpdump command captured only 2 packets from interface eth0. Note: Mergecap and TShark: Mergecap is a packet dump combining tool, which will combine multiple dumps into a single dump file. Tshark is a powerful tool to capture network packets, which can be used to analyze the network traffic. It comes with wireshark network analyzer distribution.
-w option writes the packets into a given file. The file extension should be .pcap, which can be read by any network protocol analyzer.
15:01:35.170776 IP 11.154.12.121.ssh > 10.0.19.121.52497: P 23988:24136(148) ack 157 win 113 15:01:35.170894 IP 11.154.12.121.ssh > 10.0.19.121.52497: P 24136:24380(244) ack 157 win 113
You can open the file comm.pcap using any network protocol analyzer tool to debug any potential issues.
15. tcpdump Filter Packets: Capture all the packets other than arp and rarp
In the tcpdump command, you can give 'and', 'or' and 'not' conditions to filter the packets accordingly.
$ tcpdump -i eth0 not arp and not rarp 20:33:15.479278 IP resolver.lell.net.domain > valh4.lell.net.64639: 26929 1/0/0 (73) 20:33:15.479890 IP valh4.lell.net.16053 > resolver.lell.net.domain: 56556+ PTR? 255.107.154.15.in-addr.arpa. (45) 20:33:15.480197 IP valh4.lell.net.ssh > zz.domain.innetbcp.net.63897: P 540:1504(964) ack 1 win 96 20:33:15.487118 IP zz.domain.innetbcp.net.63897 > valh4.lell.net.ssh: . ack 540 win 16486 20:33:15.668599 IP 10.0.0.0 > all-systems.mcast.net: igmp query v3 [max resp time
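The pieces of a typical tcpdump invocation can be put together as follows; the commands are echoed as a dry run, since live capture needs root and a real interface. The filter string is the one from the example above; the interface and filenames are placeholders.

```shell
# Assembling common tcpdump invocations: interface, packet count,
# capture file, and a boolean filter expression.
IFACE=eth0
FILTER='not arp and not rarp'
echo "+ tcpdump -i $IFACE $FILTER"             # live capture with a filter
echo "+ tcpdump -i $IFACE -c 2 -w comm.pcap"   # stop after 2 packets, save to file
echo "+ tcpdump -r comm.pcap"                  # read the saved capture back later
```
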
47. Manage partitions using fdisk: Using fdisk you can create a new partition, delete an existing partition, or change an existing partition. Using fdisk you are allowed to create a maximum of four primary partitions, and any number of logical partitions, based on the size of the disk.
On Linux distributions, fdisk is the best tool to manage disk partitions. fdisk is a text based utility. Using fdisk you can create a new partition, delete an existing partition, or change an existing partition. Using fdisk you are allowed to create a maximum of four primary partitions, and any number of logical partitions, based on the size of the disk. Keep in mind that any single partition requires a minimum size of 40MB. In this article, let us review how to use the fdisk command using practical examples. Warning: Don't delete, modify, or add a partition if you don't know what you are doing, or you will lose your data!
The above command will list partitions from all the connected hard disks. When you have more than one disk on the system, the partition list is ordered by the devices' /dev names. For example, /dev/sda, /dev/sdb, /dev/sdc and so on.
Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            2662        2904     1951866   83  Linux
/dev/sda7            2905        3147     1951866   83  Linux
/dev/sda8            3148        3264      939771   82  Linux swap / Solaris
/dev/sda9            3265        5283    16217586    b  W95 FAT32
Command (m for help): d
Partition number (1-9): 8

Command (m for help): d
Partition number (1-8): 7

Command (m for help): d
Partition number (1-7): 6

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
4. Create a New Disk Partition with Specific Size Using fdisk Command n
Once youve deleted all the existing partitions, you can create a new partition using all available space as shown below.
# fdisk /dev/sda
The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
First cylinder (2662-5283, default 2662):
Using default value 2662
Last cylinder, +cylinders or +size{K,M,G} (2662-3264, default 3264):
Using default value 3264
In the above example, the fdisk n command is used to create a new partition with a specific size. While creating a new partition, it expects the following two inputs:

- Starting cylinder number of the partition to be created (First cylinder).
- Size of the partition, or the last cylinder number (Last cylinder, +cylinders or +size).

Please keep in mind that you should issue the fdisk write command (w) after any modifications.
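The cylinder numbers fdisk asks for translate directly into partition size. As a sanity check of the numbers in this article's listings: one cylinder here is 16065 * 512 = 8225280 bytes, and a block in fdisk's output is 1 KB, so a partition's block count is roughly its cylinder span times 8225280 / 1024 (the small difference from fdisk's figure is head/sector alignment).

```shell
# Verifying the size of the partition created above (cylinders
# 2662-3264) against the block count fdisk reports for it.
start=2662
end=3264
cyl_bytes=$(( 16065 * 512 ))                  # 8225280 bytes per cylinder
kb=$(( (end - start + 1) * cyl_bytes / 1024 ))
echo "$kb KB"   # close to the 4843566 blocks shown in the listing
```
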
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
After the partition is created, format it using the mkfs command as shown below.
# mkfs.ext3 /dev/sda7
Command (m for help): a
Partition number (1-7): 5

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5            1960        2661     5638752   83  Linux
/dev/sda6            3265        5283    16217586    b  W95 FAT32
/dev/sda7            2662        3264     4843566   83  Linux
Partition table entries are not in disk order

Command (m for help):
Partition table entries are not in disk order

Command (m for help): x

Expert command (m for help): f
Done.

Expert command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
Once the partition table order is fixed, you'll not get the "Partition table entries are not in disk order" error message anymore.
# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf6edf6ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1959    15735636    c  W95 FAT32 (LBA)
/dev/sda2            1960        5283    26700030    f  W95 Ext'd (LBA)
/dev/sda3            5284        6528    10000462+   7  HPFS/NTFS
/dev/sda4            6529        9729    25712032+   c  W95 FAT32 (LBA)
/dev/sda5   *        1960        2661     5638752   83  Linux
/dev/sda6            2662        3264     4843566   83  Linux
/dev/sda7            3265        5283    16217586    b  W95 FAT32
48. VMWare fundamentals: At some point every sysadmin has to deal with virtualization. VMware is a very popular choice for virtualizing your server environment. This article provides the fundamental information you need to get a jumpstart on VMware.
We are starting a new series of articles on VMware that will help you install, configure and maintain VMware environments. In this first part of the VMware series, let us discuss the fundamental concepts of virtualization and review the VMware virtualization implementation options. Following are a few reasons why you might want to think about virtualization for your environment.
- Run multiple operating systems on one server. For example, instead of having a development-server and a QA-server, you can run both development and QA on a single server.
- You can have multiple flavors of OS on one server. For example, you can run 2 Linux OSes and 1 Windows OS on a single server.
- Multiple OSes running on the server share the hardware resources among them. For example, CPU, RAM and network devices are shared among the development-server and QA-server running on the same hardware.
- Allocate hardware resources to different applications based on their utilization. For example, if you have 8GB of RAM on the server, you can assign less RAM to one virtual machine (2GB to the development-server) and more RAM (6GB to the QA-server) to another virtual machine running on that server.
- High availability and business continuity. If VMware is implemented properly, you can migrate a virtual machine from one server to another quickly, without any downtime.
- Reduced operational cost and power consumption. For example, instead of buying and running two servers, you will be using only one server and run both development and QA on it.

On a high level, there are two ways for you to get started on virtualization using VMware products. Both of these are available for free from VMware.
1. VMware Server
VMware Server runs on top of an existing host operating system (either Linux or Windows). This is a good option to get started, as you can use any of the existing hardware along with its OS. VMware Server also supports 64-bit host and guest operating systems. You also get the VMware Infrastructure web access management interface and the Virtual Machine console.
2. VMware ESXi
VMware ESXi is based on the hypervisor architecture. VMware ESXi runs directly on the hardware without the need for any host operating system, which makes it extremely effective in terms of performance. This is the best option to implement VMware for production usage.
Fig: Virtual Machine running on top of VMware ESXi

Following are some of the key features of VMware ESXi:

- Memory compression, overcommitment and deduplication
- Built-in high availability with NIC teaming and HBA multipathing
- Intelligent CPU virtualization
- Highly compatible with various server hardware, storage and OSes
- Advanced security with VMsafe, VMkernel protection and encryption
- Easy management using the vSphere client, vCenter server and the command line interface
49. Rotate the logs automatically: Managing log files is an important part of a sysadmin's life. logrotate makes it easy by allowing you to set up automatic log rotation based on several configurations. Using logrotate you can also configure it to execute custom shell scripts immediately after log rotation.
Managing log files effectively is an essential task for a Linux sysadmin. In this article, let us discuss how to perform the following log file operations using the UNIX logrotate utility:

- Rotate the log file when the file size reaches a specific size
- Continue to write the log information to the newly created file after rotating the old log file
- Compress the rotated log files
- Specify compression options for the rotated log files
- Rotate the old log files with the date in the filename
- Execute custom shell scripts immediately after log rotation
- Remove older rotated log files
/etc/logrotate.conf: Log rotation configuration for all the log files is specified in this file.
$ cat /etc/logrotate.conf
weekly
rotate 4
create
include /etc/logrotate.d

/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}
/etc/logrotate.d: When individual packages are installed on the system, they drop their log rotation configuration in this directory. For example, the yum log rotation configuration is shown below.
$ cat /etc/logrotate.d/yum
/var/log/yum.log {
    missingok
    notifempty
    size 30k
    yearly
    create 0600 root root
}
2. Logrotate size option: Rotate the log file when file size reaches a specific limit
If you want to rotate a log file (for example, /tmp/output.log) for every 1KB, create the logrotate.conf as shown below.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    create 700 bala bala
    rotate 4
}
This logrotate configuration has the following three options:

size 1k - logrotate runs only if the file size is equal to (or greater than) this size.
create - rotate the original file and create the new file with the specified permission, user and group.
rotate - limits the number of log file rotations. So, this would keep only the most recent 4 rotated log files.

Before the log rotation, following is the size of the output.log:
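To make the size/rotate mechanism concrete, here is a tiny shell re-implementation of what "size 1k" plus "rotate 4" boil down to. This is only an illustration; logrotate itself does much more (status tracking, compression, scripts, and so on).

```shell
# Minimal sketch of size-based rotation: if the log is at or over the
# limit, shift old rotations up (.3 -> .4, .2 -> .3, .1 -> .2), move
# the live log to .1, and recreate it empty (the 'create' behavior).
rotate_log() {
  log=$1; keep=$2; limit=$3
  [ "$(wc -c < "$log")" -lt "$limit" ] && return 0   # under the size: nothing to do
  i=$keep
  while [ "$i" -gt 1 ]; do
    prev=$((i - 1))
    [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$i"
    i=$prev
  done
  mv "$log" "$log.1"      # rotate the live log to .1
  : > "$log"              # start a fresh, empty log
}
```
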
$ ls -l /tmp/output.log -rw-r--r-- 1 bala bala 25868 2010-06-09 21:19 /tmp/output.log
Now, run the logrotate command as shown below. Option -s specifies the filename to write the logrotate status.
$ logrotate -s /var/log/logstatus logrotate.conf
Note: Whenever you need log rotation for some files, prepare the logrotate configuration and run the logrotate command manually. After the log rotation, following is the size of the output.log:
$ ls -l /tmp/output*
-rw-r--r-- 1 bala bala 25868 2010-06-09 21:20 output.log.1
-rwx------ 1 bala bala     0 2010-06-09 21:20 output.log
Eventually this will keep the following set of rotated log files:

output.log.4
output.log.3
output.log.2
output.log.1
output.log

Please remember that after the log rotation, the log file that corresponds to the service would still point to the rotated file (output.log.1) and keep writing to it. You can use the above method if you want to rotate the apache access_log or error_log every 5 MB. Ideally, you should modify /etc/logrotate.conf to specify the logrotate information for a specific log file. Also, if you are having huge log files, you can use: 10 Awesome Examples for Viewing Huge Log Files in Unix
3. Logrotate copytruncate option: Continue to write the log information in the newly created file after rotating the old log file.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    rotate 4
}
The copytruncate option instructs logrotate to create a copy of the original file (i.e. rotate the original log file) and truncate the original file to zero bytes. This helps the respective service that writes to that log file to keep writing to the proper file. While manipulating log files, you might find the sed substitute and sed delete tips helpful.
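Reduced to its essentials, copytruncate is a copy followed by an in-place truncation: the writing process's open file descriptor stays valid because the inode is truncated, not replaced. (The small window between the two steps is why a few lines written in between can be lost, a trade-off the logrotate manpage itself notes.)

```shell
# copytruncate semantics in two shell steps, demonstrated on a scratch
# file rather than a real service log.
LOG=$(mktemp)
printf 'some log lines\n' > "$LOG"
cp "$LOG" "$LOG.1"    # 1. the "copy": this becomes the rotated file
: > "$LOG"            # 2. the "truncate": same file, now zero bytes
```
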
5. Logrotate dateext option: Rotate the old log file with date in the log filename
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    create 700 bala bala
    dateext
    rotate 4
    compress
}
After the above configuration, youll notice the date in the rotated log file as shown below.
$ ls -lrt /tmp/output*
-rw-r--r-- 1 bala bala 8980 2010-06-09 22:10 output.log-20100609.gz
-rwxrwxrwx 1 bala bala    0 2010-06-09 22:11 output.log
This would work only once a day, because when logrotate tries to rotate again on the same day, the earlier rotated file will have the same filename. So the rotation won't succeed after the first run on the same day. Typically you might use tail -f to view the output of the log file in real time. You can even combine multiple tail -f outputs and display them on a single terminal.
6. Logrotate monthly, daily, weekly option: Rotate the log file weekly/daily/monthly
To do the rotation once a month, add the monthly keyword as shown below.
$ cat logrotate.conf
/tmp/output.log {
    monthly
    copytruncate
    rotate 4
    compress
}
Add the weekly keyword as shown below for weekly log rotation.
$ cat logrotate.conf
/tmp/output.log {
    weekly
    copytruncate
    rotate 4
    compress
}
Add the daily keyword as shown below for every day log rotation. You can also rotate logs hourly.
$ cat logrotate.conf
/tmp/output.log {
    daily
    copytruncate
    rotate 4
    compress
}
7. Logrotate postrotate endscript option: Run custom shell scripts immediately after log rotation
Logrotate allows you to run your own custom shell scripts after it completes the log file rotation. The following configuration indicates that it will execute myscript.sh after the logrotation.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    rotate 4
    compress
    postrotate
        /home/bala/myscript.sh
    endscript
}
9. Logrotate missingok option: Dont return error if the log file is missing
You can ignore the error message when the actual file is not available by using this option as shown below.
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    rotate 4
    compress
    missingok
}
10. Logrotate compresscmd and compressext option: Specify the compression command for the log file rotation
$ cat logrotate.conf
/tmp/output.log {
    size 1k
    copytruncate
    create
    compress
    compresscmd /bin/bzip2
    compressext .bz2
    rotate 4
}
Following compression options are specified above:

compress - Indicates that compression should be done.
compresscmd - Specifies which compression command should be used. For example: /bin/bzip2.
compressext - Specifies the extension of the rotated log file. Without this option, the rotated file would have the default extension .gz. So, if you use the bzip2 compresscmd, specify the extension as .bz2 as shown in the above example.

50. Passwordless SSH login setup: Using ssh-keygen and ssh-copy-id you can set up passwordless login to a remote Linux server. ssh-keygen creates the public and private keys. ssh-copy-id copies the local host's public key to the remote host's authorized_keys file.
3 Steps to Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id
by Ramesh Natarajan on November 20, 2008
You can login to a remote Linux server without entering a password in 3 simple steps using ssh-keygen and ssh-copy-id as explained in this article. ssh-keygen creates the public and private keys. ssh-copy-id copies the local host's public key to the remote host's authorized_keys file. ssh-copy-id also assigns proper permissions to the remote host's home, ~/.ssh, and ~/.ssh/authorized_keys. This article also explains 3 minor annoyances of using ssh-copy-id and how to use ssh-copy-id along with ssh-agent.
The above 3 simple steps should get the job done in most cases. We also discussed earlier in detail about performing SSH and SCP from openSSH to openSSH without entering password. If you are using SSH2, we discussed earlier about performing SSH and SCP without password from SSH2 to SSH2 , from OpenSSH to SSH2 and from SSH2 to OpenSSH.
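The three steps can be condensed into one dry-run sketch; 'remote-host' is a placeholder, and the commands are echoed rather than executed because the real ones modify ~/.ssh and contact a remote machine.

```shell
# Passwordless SSH in three steps, shown as a dry run.
REMOTE=remote-host
echo "+ ssh-keygen"                                  # step 1: create the key pair locally
echo "+ ssh-copy-id -i ~/.ssh/id_rsa.pub $REMOTE"    # step 2: install the public key remotely
echo "+ ssh $REMOTE"                                 # step 3: log in, now without a password
```
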
If you have loaded keys into the ssh-agent using ssh-add, then ssh-copy-id will get the keys from the ssh-agent to copy to the remote host. That is, it copies the keys listed by the ssh-add -L command to the remote host, when you don't pass option -i to ssh-copy-id.
jsmith@local-host$ ssh-agent $SHELL

jsmith@local-host$ ssh-add -L
The agent has no identities.

jsmith@local-host$ ssh-add
Identity added: /home/jsmith/.ssh/id_rsa (/home/jsmith/.ssh/id_rsa)

jsmith@local-host$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsJIEILxftj8aSxMa3d8t6JvM79DyBV
aHrtPhTYpq7kIEMUNzApnyxsHpH1tQ/Ow== /home/jsmith/.ssh/id_rsa

jsmith@local-host$ ssh-copy-id -i remote-host
jsmith@remote-host's password:
Now try logging into the machine, with "ssh 'remote-host'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[Note: This has added the key displayed by ssh-add -L]
When you are using the Linux command line frequently, using the history effectively can be a major productivity boost. In fact, once you have mastered the 15 examples that I've provided here, you'll find using the command line more enjoyable and fun.
Sometimes you want to edit a command from history before executing it. For example, you can search for httpd, which will display service httpd stop from the command history; select this command and change the stop to start and re-execute it as shown below.
# [Press Ctrl+R from the command prompt, which will display the reverse-i-search prompt] (reverse-i-search)`httpd': service httpd stop
[Note: Press either left arrow or right arrow key when you see your command, which will display the command for you to edit, before executing it] # service httpd start
If you have a good reason to change the name of the history file, please share it with me, as I'm interested in finding out how you are using this feature.
I can see a lot of junior sysadmins getting excited about this, as they can hide a command from the history. It is good to understand how ignorespace works. But, as a best practice, don't purposefully hide anything from history.
# export HISTCONTROL=ignorespace
# ls -ltr
# pwd
#  service httpd stop
[Note that there is a space at the beginning of service, to ignore this command from history]
# history | tail -3
67  ls -ltr
68  pwd
69  history | tail -3
In the example below, the !^ next to the vi command gets the first argument from the previous command (i.e. the cp command) to the current command (i.e. the vi command).
# cp anaconda-ks.cfg anaconda-ks.cfg.bak
# vi !^
vi anaconda-ks.cfg
In the example below, !cp:$ searches for the previous command in history that starts with cp and takes the last argument (in this case, which is also the second argument as shown above) of cp and substitutes it for the ls -l command as shown below.
# ls -l !cp:$
ls -l /really/a/very/long/path/long-filename.txt
# history | tail -3
79  export HISTIGNORE="pwd:ls:ls -ltr:"
80  service httpd stop
81  history | tail -3
[Note that history did not record pwd, ls and ls -ltr]
Recommended Reading
Bash 101 Hacks, by Ramesh Natarajan. I spend most of my time on Linux environment. So, naturally Im a huge fan of Bash command line and shell scripting. 15 years back, when I was working on different flavors of *nix, I used to write lot of code on C shell and Korn shell. Later years, when I started working on Linux as system administrator, I pretty much automated every possible task using Bash shell scripting. Based on my Bash experience, Ive written Bash 101 Hacks eBook that contains 101 practical examples on both Bash command line and shell scripting. If youve been thinking about mastering Bash, do yourself a favor and read this book, which will help you take control of your Bash command line and shell scripting.
ls - Unix users and sysadmins cannot live without this two-letter command. Whether you use it 10 times a day or 100 times a day, knowing the power of the ls command can make your command line journey enjoyable. In this article, let us review 15 practical examples of the mighty ls command.
1st Character File Type: First character specifies the type of the file. In the example above the hyphen (-) in the 1st character indicates that this is a normal file.
Following are the possible file type options in the 1st character of the ls -l output:

- : normal file
d : directory
s : socket file
l : link file

Field 1 - File Permissions: The next 9 characters specify the file's permissions. Each group of 3 characters refers to the read, write and execute permissions for user, group and world respectively. In this example, -rw-r----- indicates read-write permission for the user, read permission for the group, and no permission for others.
Field 2 - Number of links: The second field specifies the number of links for the file. In this example, 1 indicates only one link to this file.
Field 3 - Owner: The third field specifies the owner of the file. In this example, this file is owned by username ramesh.
Field 4 - Group: The fourth field specifies the group of the file. In this example, this file belongs to the team-dev group.
Field 5 - Size: The fifth field specifies the size of the file. In this example, 9275204 indicates the file size.
Field 6 - Last modified date & time: The sixth field specifies the date and time of the last modification of the file. In this example, Jun 13 15:27 specifies the last modification time of the file.
Field 7 - File name: The last field is the name of the file. In this example, the file name is mthesaur.txt.gz.
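As a companion sketch, the fields described above can be pulled out of a live ls -l line with awk; the field numbers match the list ($1 permissions, $2 links, $3 owner, $4 group, $5 size, with the name coming last). The scratch file is just something to list.

```shell
# Parse the first five ls -l fields with awk, using a temp file as input.
f=$(mktemp)
ls -l "$f" | awk '{ printf "perms=%s links=%s owner=%s group=%s size=%s\n",
                    $1, $2, $3, $4, $5 }'
rm -f "$f"
```
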
7. Order Files Based on Last Modified Time (In Reverse Order) Using ls -ltr
To sort the file names by last modification time in reverse order, use ls -ltr. This shows the last edited file on the last line, which is handy when the listing goes beyond a page. This is my default ls usage; anytime I do ls, I always use ls -ltr, as I find this very convenient.
$ ls -ltr
total 76
drwxr-xr-x  15 root root  4096 Jul  2  2008 var
drwx------   2 root root 16384 May 17 20:29 lost+found
lrwxrwxrwx   1 root root    11 May 17 20:29 cdrom -> media/cdrom
drwxr-xr-x   2 root root  4096 May 17 21:21 sbin
drwxr-xr-x  12 root root  4096 Jun 18 08:31 home
drwxr-xr-x  13 root root  4096 Jun 20 23:12 root
drwxr-xr-x  13 root root 13780 Jun 22 07:04 dev
drwxr-xr-x 121 root root  4096 Jun 22 07:05 etc
drwxrwxrwt  14 root root  4096 Jun 22 07:36 tmp
Option -a will show all the files including the . (current directory) and .. (parent directory). To show the hidden files, but not the . (current directory) and .. (parent directory) entries, use option -A.
$ ls -A Debian-Info.txt Fedora-Info.txt CentOS-Info.txt Red-Hat-Info.txt .bash_history SUSE-Info.txt .bash_logout .lftp .bash_profile libiconv-1.11.tar.tar .bashrc libssh2-0.12-1.2.el4.rf.i386.rpm [Note: . and .. are not displayed here]
To show all the files recursively, use -R option. When you do this from /, it shows all the unhidden files in the whole file system recursively.