
LINUX

17 Linux Interview Questions and Answers | ArkIT
BY ARK · APRIL 18, 2020
In this article we are going to look at 17 Linux interview questions and answers asked by one of the top MNC companies in an interview for an L1 Linux Engineer position.
1. There are three users: Ravi, Shekhar & Srikanth. You have to give Ravi access to FTP, Telnet & SSH, Shekhar access to SSH only, and Srikanth access to Telnet only. How will you configure this?
2. What is Firewall?
3. What is SELinux, SELinux context?
4. If you have scheduled a task to be run at 2PM but it has not been executed
what will you do?
5. Tell me boot process?
6. How will you install packages?
7. How will you reduce LVM?
8. How will you extend LVM?
9. Which files have SUID set by default? What is SUID?
10. Tell me about log management?
11. If a server is not able to SSH to a client, how will you troubleshoot?
12. States in Processes?
13. Tell me about top & ps
14. How will you check dependencies of packages?
15. What is DNS?
16. What is DHCP?
17. Difference between RPM & YUM?

17 Linux Interview Questions and Answers
Related Articles
Linux self assessment exam
20 Linux SSH-Interview questions and answers 
Nagios Q and A
Play List

Linux Self Assessment Interview Questions and Answers
BY ARK · APRIL 23, 2018
These are very basic Linux self-assessment interview questions and answers. The thing is, do not look anywhere on the internet or in other sources for the answers; answer genuinely and comment your answers.

Linux Self Assessment Interview Questions
A. What happens if I type TAB-TAB?
1. Jumps to the end of the command line
2. Auto completes commands
3. Jumps to the beginning of the command line
4. same as ‘logout’ or ‘exit’
5. Lists available commands
6. Auto completes path
B. Which command(s) is/are used to get help about a command in Linux?
1. info
2. man
3. None of These
4. Both A and B
C.  Which command is used to get the kernel version in Linux?
1. uname -r
2. kernel
3. uname -s
4. uname -n
D. Which command(s) is/are used to remove directory in Linux?
1. rm -r
2. rmdir
3. del
4. Both A and B
5. None of the above
E. Which command is used to list all the files in your current directory
(including hidden)?
1. ls -i
2. ls -a
3. ls -l
4. ls -t
F. Check mark all commands used to create files in Linux?
1. echo > file.txt
2. touch file.txt
3. cat > file.txt
4. tee file.txt
G. In Linux everything is stored as a…
1. file
2. directory
3. executable
4. None of the above
H. Which combination of keys is used to exit from terminal?
1. CTRL + D
2. CTRL + T
3. CTRL + C
4. CTRL + A
I. What is the UID of the root user?
1. 100
2. 0
3. 99
4. 256
J. Explain the following command: dd if=/dev/random of=/dev/sda2 bs=1M 
1. Writes the contents of the second hard drive to a
random device using 1MB chunks
2. Using random data to create a partition on your hard
drive
3. Writes random data to the entire local drive using
1MB chunks
4. Writes random data to the second partition of the
local drive using 1MB chunks
5. I don’t know
6. Blanks the entire hard drive in your system

Related Articles
100 Linux Commands Video Tutorial
ps command in detailed

How You Know You’re Doing ps Command Linux The Right Way – Video
BY ARK · PUBLISHED DECEMBER 22, 2016 · UPDATED JANUARY 26, 2017
In this article we are going to see the ps command in Linux as a video session. The ps command displays a report, a snapshot of the current processes; it shows information about a selection of the active processes. If you want a repetitive update of the selection and the displayed information, use top instead.

ps command Linux
To see every process on the system using standard syntax

[root@ArkITShell ~]# ps -e
[root@ArkITShell ~]# ps -ef

[root@ArkITShell ~]# ps -eF

[root@ArkITShell ~]# ps -ely

See every process on the system using BSD syntax

[root@ArkITShell ~]# ps ax

[root@ArkITShell ~]# ps axu

Print a process tree

[root@ArkITShell ~]# ps -ejH

[root@ArkITShell ~]# ps axjf

Get info about threads

[root@ArkITShell ~]# ps -eLf

[root@ArkITShell ~]# ps axms

Security info
[root@ArkITShell ~]# ps -eo euser,ruser,suser,fuser,f,comm,label

[root@ArkITShell ~]# ps axZ

[root@ArkITShell ~]# ps -eM

To see every process running as root (real & effective ID) in user format:

[root@ArkITShell ~]# ps -U root -u root u

To see every process with a user-defined format

[root@ArkITShell ~]# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm

[root@ArkITShell ~]# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm

[root@ArkITShell ~]# ps -Ao pid,tt,user,fname,tmout,f,wchan

Print only the process IDs of syslogd

[root@ArkITShell ~]# ps -C syslogd -o pid=

Print only the name of PID 42

[root@ArkITShell ~]# ps -q 42 -o comm=


That’s about the ps command in Linux.

Related Articles
analyze Linux system performance
20 ssh interview questions and answers
Linux directory structure changed in RHEL7 – FHS file hierarchy standard

20 SSH Secure Shell Linux Interview Questions and Answers
BY ARK · PUBLISHED DECEMBER 17, 2016 · UPDATED DECEMBER 17, 2016
In most interviews a common question they ask is about SSH (Secure Shell), because regular day-to-day tasks require the use of SSH. Since SSH usage is so high, an employee should know about SSH before using it, which is why an interviewer will ask at least one question from these 20 SSH Secure Shell Linux Interview Questions and Answers.

20 SSH Secure Shell Linux Interview Questions and Answers
Q1: What is the SSH protocol?
Ans: SSH, or Secure Shell, is a secure protocol and the most common way of safely administering remote servers, because it encrypts data while it is transferred from one host to another across the network.
Q2: What is the default port for SSH and the configuration file path?
Ans: The default port number is 22. The configuration file path is /etc/ssh/sshd_config.
Q3: What are the different encryption techniques supported by SSH?
Ans: SSH supports symmetric and asymmetric encryption methods.
Q4: How do you change the default SSH port number?
Ans: We can change the default port by editing the configuration file /etc/ssh/sshd_config and changing Port 22 to another port number. After the change, restart the sshd service for it to take effect. If your firewall and SELinux are enabled, also add a rule for the new port number, otherwise clients can’t connect.
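A rough sketch of the steps on a RHEL/CentOS 7 style system with firewalld and SELinux (port 2222 is just an example, not a value from the article):

vi /etc/ssh/sshd_config                        # change "Port 22" to "Port 2222"
semanage port -a -t ssh_port_t -p tcp 2222     # allow the new port in SELinux policy
firewall-cmd --permanent --add-port=2222/tcp
firewall-cmd --reload
systemctl restart sshd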
Q5: How does port forwarding work in SSH?
Ans: The client connects using a different port number (in this case 2048) instead of the default, while the SSH service on the server still responds on its default port; the firewall redirects the traffic. From the client machine we have to use ssh -p 2048.
ssh server port forwarding using firewall redirection
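As a small illustration (the user name and host are placeholders), the client can either connect on the redirected port or set up its own local forward:

ssh -p 2048 user@server.example.com                  # connect when the firewall redirects port 2048 to sshd
ssh -L 8080:localhost:80 user@server.example.com     # forward local port 8080 to port 80 on the server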

Q6: How do you disable root login for the SSH server?

Ans: Disabling root login over SSH is a very good security measure. We can do that by editing the SSH configuration file /etc/ssh/sshd_config and setting
‘PermitRootLogin no‘

#LoginGraceTime 2m

PermitRootLogin no

#StrictModes yes

#MaxAuthTries 6

#MaxSessions 10

Q7: How do you enable only key-based authentication?

Ans: This feature provides higher security because no user can log in without an SSH key.
Note: We also have to disable password login for SSH.

#RSAAuthentication yes

#PubkeyAuthentication yes
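A minimal sketch of the relevant sshd_config directives, assuming a reasonably recent OpenSSH (restart sshd afterwards):

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no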
Q8: Is it possible to log in to a remote SSH server without a password?
Ans: Yes, we can do so by setting up key-based authentication (passwordless authentication).

Password-less authentication to run scripts on a remote server – Linux
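A quick sketch of setting this up, where user@remote-server is a placeholder for the real account and host:

ssh-keygen -t rsa -b 4096          # generate a key pair (accept the defaults)
ssh-copy-id user@remote-server     # append the public key to ~/.ssh/authorized_keys on the remote host
ssh user@remote-server             # now logs in without a password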

Q9: What is the difference between Telnet & SSH?

Telnet                                               SSH
1. Data goes over the network as plain text.         Data is encrypted using a key pair (public key and private key).
2. Default port 23.                                  Default port 22.
3. Bandwidth usage is lower compared to SSH.         Bandwidth usage is higher compared to Telnet.
Q10. How do you limit SSH access to a specific subnet?
Limiting SSH access to a specific subnet gives a more secure environment; hosts outside the given subnet can’t access the SSH server. One way is to edit the sshd_config file and allow logins only from that subnet (TCP wrappers or firewall rules work as well), for example:

AllowUsers *@192.168.4.*

Q11: How do you restrict the SSH server to use only protocol version 2 (or 1)?
Edit the default configuration file, un-comment the line shown below, and restart the service.

# The default requires explicit activation of protocol 1

#Protocol 2

Q12: What do you mean by an SSH cipher? Name different types of ciphers.
A cipher is an algorithm used to perform encryption and decryption.
Cipher types supported by SSH include:
1. 3des
2. blowfish
3. des
(newer OpenSSH releases also offer AES and ChaCha20-Poly1305 based ciphers)
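On a recent OpenSSH build you can list the ciphers it actually supports with:

ssh -Q cipher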
Q13: How do you access a GUI over an SSH connection?
SSH also supports X11 forwarding; we have to use the -X (or -Y) option to open a server GUI application from the client.
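For example (the host and application are placeholders), assuming X11Forwarding is enabled on the server:

ssh -X user@server.example.com xclock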
Q14: What is the best procedure to troubleshoot an SSH connection error?
To enable debugging in the ssh command, use the -v option, which gives more detailed output for deeper analysis.
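For example (the host is a placeholder); add more v's for more verbosity:

ssh -v user@server.example.com
ssh -vvv user@server.example.com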
Q15: How do you check the SSH version?

[root@ArkIT-Serv ~]# ssh -V

OpenSSH_6.6.1p1, OpenSSL 1.0.1e-fips 11 Feb 2013

Q16: How do you connect to a remote IPv6-enabled server?

It is simple to connect with ssh over IPv6:

ssh -6 [email protected]

Q17: What is the procedure to log SSH errors to a separate file?

Using the -E log_file option, ssh sends its standard error output to the specified file.
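For example (the log path and host are placeholders), on OpenSSH versions that support -E:

ssh -E /tmp/ssh-errors.log user@server.example.com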
Q18: What key types does SSH support?
 RSA
 DSA
 ECDSA
 ED25519
Q19: Where does SSH store its trusted host keys?
By default, whenever you try to connect to a remote SSH host for the first time, it will ask you to confirm Yes/No; as soon as we say yes, it copies the host’s public key to ~/.ssh/known_hosts.
Q20: What is the role of the authorized_keys file?
The ~/.ssh/authorized_keys file stores the public keys of all clients permitted to log in, providing password-less authentication.

Nagios Admin Interview Questions and Answers
BY ANKAM RAVI KUMAR · OCTOBER 17, 2015

Nagios Admin Interview Questions and Answers
1. What is Nagios and how does it work?
Ans: Nagios is an open source system and network monitoring application. Nagios runs on a server, usually as a daemon or service. Nagios periodically runs plugins to monitor clients, and if it finds anything in a warning or critical state it sends alerts via email or SMS as per the configuration.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if these results change.

2. What port numbers does Nagios use to monitor clients?
Ans: The port numbers are 5666, 5667 and 5668.
3. Explain the main configuration files and their locations?
Ans:
1. Resource File: It is used to store sensitive information like usernames and passwords without making them available to the CGIs. Default path: /usr/local/nagios/etc/resource.cfg
2. Object Definition Files: This is the location where you define all you want to monitor and how you want to monitor it. It is used to define hosts, services, hostgroups, contacts, contact groups, commands, etc. Default path: /usr/local/nagios/etc/objects/
3. CGI Configuration File: The CGI configuration file contains a number of directives that affect the operation of the CGIs. It also contains a reference to the main configuration file, so the CGIs know how you’ve configured Nagios and where your object definitions are stored. Default path: /usr/local/nagios/etc/cgi.cfg
4. A Nagios administrator is adding 100+ clients to monitoring, but he doesn’t want to add an entry for every .cfg file in the nagios.cfg file; he wants to enable a directory path. How can he configure a directory for all configuration files?
Ans: He can achieve the above scenario by adding the directory path in the nagios.cfg file; at line number 54 we have to add the line below.
54  cfg_dir=/usr/local/nagios/etc/objects/monitor
5. Explain the Nagios files and their locations?
Ans:
1. The main configuration file is usually named nagios.cfg and located in the /usr/local/nagios/etc/ directory by default.
2. Object Configuration File : This directive is used to specify an object
configuration file containing object definitions that Nagios should use for
monitoring.
cfg_file=/usr/local/nagios/etc/hosts.cfg
cfg_file=/usr/local/nagios/etc/services.cfg
cfg_file=/usr/local/nagios/etc/commands.cfg
3. Object Configuration Directory :This directive is used to specify a directory
which contains object configuration files that Nagios should use for monitoring.
cfg_dir=/usr/local/nagios/etc/commands
cfg_dir=/usr/local/nagios/etc/services
cfg_dir=/usr/local/nagios/etc/hosts
4. Object Cache File: This directive is used to specify a file in which a cached copy of object definitions should be stored.
Line number 66: object_cache_file=/usr/local/nagios/var/objects.cache
5. Precached Object File: Line number 82: precached_object_file=/usr/local/nagios/var/objects.precache
This directive specifies a file in which a pre-processed, pre-cached copy of object definitions is stored, which can speed up startup. (The resource file, by contrast, is the optional file that can contain $USERn$ macro definitions; $USERn$ macros are useful for storing usernames, passwords, and items commonly used in command definitions.)
6. Temp File: temp_path=/tmp
This is a directory that Nagios can use as scratch space for creating temporary files used during the monitoring process. You should run tmpwatch, or a similar utility, on this directory occasionally to delete files older than 24 hours.
7. Status File :  Line Number 105 status_file=/usr/local/nagios/var/status.dat
This is the file that Nagios uses to store the current status, comment, and
downtime information. This file is used by the CGIs so that current monitoring
status can be reported via a web interface. The CGIs must have read access to this
file in order to function properly. This file is deleted every time Nagios stops and
recreated when it starts.
8. Log Archive Path :  Line
Number 245 log_archive_path=/usr/local/nagios/var/archives/
This is the directory where Nagios should place log files that have been rotated.
This option is ignored if you choose to not use the log rotation functionality.
9. External Command
File :  command_file=/usr/local/nagios/var/rw/nagios.cmd
This is the file that Nagios will check for external commands to process. The
command CGI writes commands to this file. The external command file is
implemented as a named pipe (FIFO), which is created when Nagios starts and
removed when it shuts down. If the file exists when Nagios starts, the Nagios
process will terminate with an error message. Always keep read only permission
to submit the commands from authorized users only.
10. Lock File :  lock_file=/tmp/nagios.lock
This option specifies the location of the lock file that Nagios should create when it
runs as a daemon (when started with the -d command line argument). This file
contains the process id (PID) number of the running Nagios process.
11. State Retention File:  state_retention_file=/usr/local/nagios/var/retention.dat
This is the file that Nagios will use for storing status, downtime, and comment
information before it shuts down. When Nagios is restarted it will use the
information stored in this file for setting the initial states of services and hosts
before it starts monitoring anything. In order to make Nagios retain state
information between program restarts, you must enable
the retain_state_information option.
12. Check Result Path :    check_result_path=/var/spool/nagios/checkresults
This options determines which directory Nagios will use to temporarily store host
and service check results before they are processed.
13. Host Performance Data
File :     host_perfdata_file=/usr/local/nagios/var/host-perfdata.da
This option allows you to specify a file to which host performance data will be
written after every host check. Data will be written to the performance file as
specified by the host_perfdata_file_template option. Performance data is only
written to this file if the process_performance_data option is enabled globally and
if the process_perf_data directive in the host definition is enabled.
14. Service Performance Data
File:   service_perfdata_file=/usr/local/nagios/var/service-perfdata.dat
This option allows you to specify a file to which service performance data will be
written after every service check. Data will be written to the performance file as
specified by the service_perfdata_file_template option. Performance data is only
written to this file if the process_performance_data option is enabled globally and
if the process_perf_data directive in the service definition is enabled
15. Debug File :   debug_file=/usr/local/nagios/var/nagios.debug
This option determines where Nagios should write debugging information. What
(if any) information is written is determined by the debug_level and
debug_verbosity options. You can have Nagios automatically rotate the debug file
when it reaches a certain size by using the max_debug_file_size option.
6. Explain Host and Service Check Execution
Option?
Ans: This option determines whether or not Nagios will execute Host/service checks
when it initially (re)starts. If this option is disabled, Nagios will not actively execute
any service checks and will remain in a sort of “sleep” mode. This option is most
often used when configuring backup monitoring servers or when setting up a
distributed monitoring environment.
Note: If you have state retention enabled, Nagios will ignore this setting when it
(re)starts and use the last known setting for this option (as stored in the state retention
file), unless you disable the use_retained_program_state option. If you want to change
this option when state retention is active (and the use_retained_program_state is
enabled), you’ll have to use the appropriate external command or change it via the
web interface.
Values are as follows:
0 = Don’t execute host/service checks
1 = Execute host/service checks (default)
7. Explain active and passive checks in Nagios?
Ans: Nagios monitors hosts and services in two ways: actively and passively. Active checks are the most common method for monitoring hosts and services. The main features of active checks are as follows:
A. Active checks:
1. Active checks are run on a regularly scheduled basis.
2. Active checks are initiated by the Nagios process, i.e. by the check logic in the Nagios daemon.
When Nagios needs to check the status of a host or service it will execute a plugin and
pass it information about what needs to be checked. The plugin will then check the
operational state of the host or service and report the results back to the Nagios
daemon. Nagios will process the results of the host or service check and take
appropriate action as necessary (e.g. send notifications, run event handlers, etc).
Active checks are executed:
 At regular intervals, as defined by the check_interval and retry_interval options in your host and service definitions
 On demand, as needed.
Regularly scheduled checks occur at intervals equaling either the check_interval or the retry_interval in your host or service definitions, depending on what type of state the host or service is in. If a host or service is in a HARD state, it will be actively checked at intervals equal to the check_interval option. If it is in a SOFT state, it will be checked at intervals equal to the retry_interval option.
On-demand checks are performed whenever Nagios sees a need to obtain the latest
status information about a particular host or service. For example, when Nagios is
determining the reach ability of a host, it will often perform on-demand checks of
parent and child hosts to accurately determine the status of a particular network
segment. On-demand checks also occur in the predictive dependency check logic in
order to ensure Nagios has the most accurate status information.
B. Passive checks:
The key features of passive checks are as follows:
1. Passive checks are initiated and performed by external applications/processes
2.Passive check results are submitted to Nagios for processing
The major difference between active and passive checks is that active checks are
initiated and performed by Nagios, while passive checks are performed by external
applications.
Passive checks are useful for monitoring services that are:
 Asynchronous in nature and cannot be monitored effectively by polling their
status on a regularly scheduled basis
 Located behind a firewall and cannot be checked actively from the monitoring
host
Examples of asynchronous services that lend themselves to being monitored passively
include SNMP traps and security alerts. You never know how many (if any) traps or
alerts you’ll receive in a given time frame, so it’s not feasible to just monitor their
status every few minutes. Passive checks are also used when configuring distributed or
redundant monitoring installations.

Here’s how passive checks work in more detail…


1. An external application checks the status of a host or service.
2. The external application writes the results of the check to the external
command file.
3. The next time Nagios reads the external command file it will place the results
of all passive checks into a queue for later processing. The same queue that is
used for storing results from active checks is also used to store the results from
passive checks.
4. Nagios will periodically execute a check result reaper event and scan the check
result queue. Each service check result that is found in the queue is processed in
the same manner – regardless of whether the check was active or passive. Nagios
may send out notifications, log alerts, etc. depending on the check result
information.
8. How do you verify the Nagios configuration?
Ans: In order to verify your configuration, run Nagios with the -v command line option like so:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

If you’ve forgotten to enter some critical data or misconfigured things, Nagios will
spit out a warning or error message that should point you to the location of the
problem. Error messages generally print out the line in the configuration file that
seems to be the source of the problem. On errors, Nagios will often exit the pre-flight
check and return to the command prompt after printing only the first error that it has
encountered.

9. What Are Objects?

Ans: Objects are all the elements that are involved in the monitoring and notification logic.
Types of objects include:
Services are one of the central objects in the monitoring logic. Services are associated with hosts and represent attributes of a host (CPU load, disk usage, uptime, etc.).
Service Groups are groups of one or more services. Service groups can make it easier to (1) view the status of related services in the Nagios web interface and (2) simplify your configuration through the use of object tricks.
Hosts are one of the central objects in the monitoring logic. Hosts are usually physical devices on your network (servers, workstations, routers, switches, printers, etc.).
Host Groups are groups of one or more hosts. Host groups can make it easier to (1) view the status of related hosts in the Nagios web interface and (2) simplify your configuration through the use of object tricks.
Contacts are the contact information of the people involved in the notification process.
Contact Groups are groups of one or more contacts. Contact groups can make it easier to define all the people who get notified when certain host or service problems occur.
Commands are used to tell Nagios what programs, scripts, etc. it should execute to perform host and service checks, when notifications should be sent, and so on.
Time Periods are used to control when hosts and services can be monitored.
Notification Escalations are used for escalating notifications.
10. What Are Plugins?
Ans: Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.
Nagios will execute a plugin whenever there is a need to check the status of a service
or host. The plugin does something (notice the very general term) to perform the
check and then simply returns the results to Nagios. Nagios will process the results
that it receives from the plugin and take any necessary actions (running event
handlers, sending out notifications, etc).
11. How Do I Use Plugin X?
Ans: We have to download the plugins from Nagios Exchange, https://exchange.nagios.org/, and then test the Nagios plugin by running it manually. Almost all plugins will display basic usage information when you execute them with ‘-h’ or ‘--help’ on the command line.
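As an example of running a downloaded plugin manually (the host address and thresholds here are placeholders, not values from the article):

/usr/local/nagios/libexec/check_ping -H 192.168.4.50 -w 100,20% -c 500,60%
/usr/local/nagios/libexec/check_ping --help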
12. How do you generate performance graphs?
Ans: In Nagios Core there is no built-in option to generate performance graphs; we have to install pnp4nagios and add the host and service URLs in the definition files.
13. What is the difference between NagiosXI
and Nagios Core ..?
Ans: NagiosXI is a paid version and Nagios Core is a free version.

NagiosXI includes a lot of features which we can modify using the web interface. Nagios Core by default does not include all of these features; we have to implement them by installing plugins.

14. When Does Nagios Check For External Commands?
Ans:     At regular intervals specified by the command_check_interval option in the
main configuration file
Immediately after event handlers are executed. This is in addition to the regular cycle
of external command checks and is done to provide immediate action if an event
handler submits commands to Nagios.
External commands that are written to the command file have the following format
[time] command_id;command_arguments
where time is the time (in time_t format) that the external application submitted the
external command to the command file. The values for the command_id and
command_arguments arguments will depend on what command is being submitted to
Nagios.
15. Explain Nagios State Types?
Ans:   The current state of monitored services and hosts is determined by two
components:
The status of the service or host (i.e. OK, WARNING, UP, DOWN, etc.)
The type of state the service or host is in
There are two state types in Nagios – SOFT states and HARD states. These state types
are a crucial part of the monitoring logic, as they are used to determine when event
handlers are executed and when notifications are initially sent out.

A.Soft States:
When a service or host check results in a non-OK or non-UP state and the service
check has not yet been (re)checked the number of times specified by the
max_check_attempts directive in the service or host definition. This is called a soft
error.
When a service or host recovers from a soft error. This is considered a soft recovery.
The following things occur when hosts or services experience SOFT state changes:
The SOFT state is logged. Event handlers are executed to handle the SOFT state.
SOFT states are only logged if you enabled the log_service_retries or log_host_retries
options in your main configuration file.
The only important thing that really happens during a soft state is the execution of
event handlers. Using event handlers can be particularly useful if you want to try and
proactively fix a problem before it turns into a HARD state. The
$HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of
“SOFT” when event handlers are executed, which allows your event handler scripts to
know when they should take corrective action.
B. Hard states occur for hosts and services in the following situations:
 When a host or service check results in a non-UP or non-OK state and it has
been (re)checked the number of times specified by the max_check_attempts
 option in the host or service definition. This is a hard error state.
 When a host or service transitions from one hard error state to another error
state (e.g. WARNING to CRITICAL).
 When a service check results in a non-OK state and its corresponding host is
either DOWN or UNREACHABLE.
 When a host or service recovers from a hard error state. This is considered to be
a hard recovery.
 When a passive host check is received. Passive host checks are treated as
HARD unless the passive_host_checks_are_soft option is enabled.
The following things occur when hosts or services experience HARD state changes:
The HARD state is logged.
Event handlers are executed to handle the HARD state.
Contacts are notified of the host or service problem or recovery.
The $HOSTSTATETYPE$ or $SERVICESTATETYPE$ macros will have a value of
“HARD” when event handlers are executed, which allows your event handler scripts
to know when they should take corrective action.

16. What is State Stalking?


Ans: Stalking is purely for logging purposes. When stalking is enabled for a
particular host or service, Nagios will watch that host or service very carefully and log
any changes it sees in the output of check results. As you’ll see, it can be very helpful
to you in later analysis of the log files. Under normal circumstances, the result of a
host or service check is only logged if the host or service has changed state since it
was last checked. There are a few exceptions to this, but for the most part, that’s the
rule.
If you enable stalking for one or more states of a particular host or service, Nagios
will log the results of the host or service check if the output from the check differs
from the output from the previous check.
17. Explain how  Flap Detection works in
Nagios?
Ans:  Nagios supports optional detection of hosts and services that are “flapping”.
Flapping occurs when a service or host changes state too frequently, resulting in a
storm of problem and recovery notifications. Flapping can be indicative of
configuration problems (i.e. thresholds set too low), troublesome services, or real
network problems.
Whenever Nagios checks the status of a host or service, it will check to see if it has
started or stopped flapping. It does this by:

1. Storing the results of the last 21 checks of the host or service
2. Analyzing the historical check results and determine where state
changes/transitions occur
3. Using the state transitions to determine a percent state change value (a measure
of change) for the host or service
4. Comparing the percent state change value against low and high flapping
thresholds
5. A host or service is determined to have started flapping when its percent state
change first exceeds a high flapping threshold.
6. A host or service is determined to have stopped flapping when its percent state
goes below a low flapping threshold (assuming that is was previously flapping).
7. The historical service check results are examined to determine where state changes/transitions occur. State changes occur when an archived state is different from the archived state that immediately precedes it chronologically. Since the results of the last 21 service checks are kept in the array, there is a possibility of having at most 20 state changes.
The flap detection logic uses the state changes to determine an overall percent state
change for the service. This is a measure of volatility/change for the service. Services
that never change state will have a 0% state change value, while services that change
state each time they’re checked will have 100% state change. Most services will have
a percent state change somewhere in between.

18. Explain Distributed Monitoring?
Ans:   Nagios can be configured to support distributed monitoring of network services
and resources.
When setting up a distributed monitoring environment with Nagios, there are
differences in the way the central and distributed servers are configured.
The function of a distributed server is to actively perform checks of all the services you define for a “cluster” of hosts, which basically just means an arbitrary group of hosts on your network. Depending on your network layout, you may have several clusters at one physical location, or each cluster may be separated by a WAN, its own firewall, etc. There is one distributed server that runs Nagios and monitors the services on the
hosts in each cluster. A distributed server is usually a bare-bones installation of
Nagios. It doesn’t have to have the web interface installed, send out notifications, run
event handler scripts, or do anything other than execute service checks if you don’t
want it to.
The purpose of the central server is simply to listen for service check results from one or more distributed servers. Even though services are occasionally actively checked from the central server, those active checks are only performed in dire circumstances.
19. What is NRPE?
Ans:  The Nagios Remote Plugin Executor addon is designed to allow you to execute
Nagios plugins on remote Linux/Unix machines. The main
reason for doing this is to allow Nagios to monitor “local” resources (like CPU load,
memory usage, etc.) on remote machines. Since these public resources are not usually
exposed to external machines, an agent like NRPE must be installed on the remote
Linux/Unix machines.
The NRPE addon consists of two pieces:
– The check_nrpe plugin, which resides on the local monitoring machine
– The NRPE daemon, which runs on the remote Linux/Unix machine
When Nagios needs to monitor a resource or service from a remote Linux/Unix
machine:
– Nagios will execute the check_nrpe plugin and tell it what service needs to be
checked
– The check_nrpe plugin contacts the NRPE daemon on the remote host over an
(optionally) SSL-protected connection
– The NRPE daemon runs the appropriate Nagios plugin to check the service or
resource
– The results from the service check are passed from the NRPE daemon back to
the check_nrpe plugin, which
then returns the check results to the Nagios process.
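As a rough illustration (the client address and the check_load command name are placeholders that must be defined in the client's nrpe.cfg), the monitoring server would run something like:

/usr/local/nagios/libexec/check_nrpe -H 192.168.4.50 -c check_load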
20. What is NDOUTILS?
Ans:  The NDOUTILS addon is designed to store all configuration and event data
from Nagios in a database. Storing information from Nagios in a database will allow
for quicker retrieval and processing of that data and will help serve as a foundation for
the development of a new PHP-based web interface in Nagios 4.1.
MySQL databases are currently supported by the addon and PostgreSQL support is in
development.
The NDOUTILS addon was designed to work for users who have:
– Single Nagios installations
– Multiple standalone or “vanilla” Nagios installations
– Multiple Nagios installations in distributed, redundant, and/or failover
environments.
Each Nagios process, whether it is a standalone monitoring server or part of a
distributed, redundant, or failover monitoring setup, is referred to as an “instance”. In
order to maintain the integrity of stored data, each Nagios instance must be labeled
with a unique identifier or name.

21. What are the components that make up the NDO utilities?
Ans:
There are four main components that make up the NDO utilities:
1. NDOMOD Event Broker Module: The NDO utilities include a Nagios event broker module (NDOMOD.O) that exports data from the Nagios daemon. Once the module has been loaded by the Nagios daemon, it can access all of the data and logic present in the running Nagios process. The NDOMOD module has been designed to export configuration data, as well as information about various runtime events that occur in the monitoring process, from the Nagios daemon. The module can send this data to a standard file, a Unix domain socket, or a TCP socket.
2. LOG2NDO Utility : The LOG2NDO utility has been designed to allow you to
import historical Nagios and NetSaint log files into a database via the NDO2DB
daemon (described later). The utility works by sending historical log file data to a
standard file, a Unix domain socket, or a TCP socket in a format the NDO2DB
daemon understands. The NDO2DB daemon can then be used to process that
output and store the historical log file  information in a database.
3. FILE2SOCK Utility: The FILE2SOCK utility is quite simple. It reads input from a standard file (or STDIN) and writes all of that data to either a Unix domain socket or a TCP socket. The data that is read is not processed in any way before it is sent to the socket.
4. NDO2DB Daemon: The NDO2DB utility is designed to take the data output from the NDOMOD and LOG2NDO components and store it in a MySQL or PostgreSQL database. When it starts, the NDO2DB daemon creates either a TCP or Unix domain socket and waits for clients to connect. NDO2DB can run either as a standalone, multi-process daemon or under INETD (if using a TCP socket).
Multiple clients can connect to the NDO2DB daemon’s socket and transmit data
simultaneously. A separate NDO2DB process is spawned to handle each new
client that connects. Data is read from each client and stored in a user-specified
database for later retrieval and processing.
22. What operating systems can we monitor using Nagios?
Ans: We can monitor any operating system with Nagios, as long as the OS supports installing a Nagios client (agent) or SNMP.
23. What database is used by Nagios to store collected status data?
Ans: By default Nagios Core does not use a database; it stores status data in flat files such as status.dat and retention.dat. With the NDOUTILS addon the data can be stored in a MySQL database.

find command practical examples Can Improve Your Skills
BY ARK · PUBLISHED FEBRUARY 5, 2017 · UPDATED JUNE 1, 2018
Search for files in a directory hierarchy: finding files and directories in Linux is very easy using the find command. Find will search any set of directories you specify for files that match the given search criteria. You can search for files by name, owner, group, type, permissions, date and other criteria. Learn find command practical examples.
Syntax: find <directory path> <search pattern> <action>

1. Find Command without any options

It will list out all the files and folders in the current directory, including hidden ones, along with their paths: $ find  Or  $ find .  Or  $ find -print  Or  $ find . -print

[root@ArkIT-Serv ~]# find

find command

2. Search Files and Directories using name

Looking up files and directories based on their names is simple; we have to use the -name parameter. In the example below we search for the resolvedbugs.txt file. You have to provide the name exactly as the file is named, because the -name option is case sensitive.

[root@ArkIT-Serv ~]# find . -name resolvedbugs.txt

./resolvedbugs.txt

To ignore case we have to use the -iname option


[root@ArkIT-Serv ~]# find . -iname Resolvedbugs.txt

./resolvedbugs.txt

[root@ArkIT-Serv ~]# find . -name Resolvedbugs.txt

find -name
3. Search Only files out of all
To list only files out of all the directories and files, we have the option -type f to fetch only files; here f means file.

[root@ArkIT-Serv ~]# find . -type f -iname file1.txt

find only files


4. Search Only Directories within the Linux Server
To list only directories we use -type d to fetch only directories; here d means directory.

[root@sny-fusion ~]# find . -type d -iname file11


Find directories

5. Find all files which end with the same file extension

Sometimes we do not remember the file name, we only know the file extension, so we find all files with the same extension in a particular path. In this case we use the * wildcard character, which matches any number of characters.

[root@ArkIT-Serv ~]# find . -name "*.txt"

find all text files


6. Locate the files based on their permissions
The ultimate feature of the find command is to search files / directories based on their permissions. Sometimes we remember neither the file name nor the file extension, only its permissions; you can still find the files / directories. This option is very useful, for example to find all full-permission (777) files and directories, because they are vulnerable.

[root@ArkIT-Serv ~]# find . -perm 0777


Find files and directories based on permissions
7. Files without given permissions
As we saw in step 6, we can find all full-permission files; now let's find the ones which do not have the mentioned permissions. Here ! means "not".

[root@ArkIT-Serv ~]# find . ! -perm 0777

exclude provided string

8. Search for SGID files / directories using find
SGID = Set Group ID on execution. Find all files / directories which have the SGID permission set.

[root@ArkIT-Serv ~]# find / -perm 2755

9. Search for SGID files / directories using symbolic permission values

We found files / directories above using numeric permission values; in the same way we can also find them using character-based (symbolic) permissions.

[root@ArkIT-Serv ~]# find / -perm /g+s

find SGID files


10. Find Sticky bit files
The sticky bit is a special permission, usually set on shared directories such as /tmp, so that only a file's owner (or root) can delete or rename files inside that directory.

[root@ArkIT-Serv ~]# find / -perm 1755

Find sticky bit files

11. All SUID set files

SUID (Set User ID) is similar to SGID: an SGID executable runs with the group ID of the file's group owner, while a SUID executable runs with the user ID of the file's owner.

[root@ArkIT-Serv ~]# find / -perm /u=s

12. Search for Executable files on the Server

It is important that unwanted or unneeded files do not have executable permissions. Simply find all executable files and remove those permissions to protect your environment.

 [root@sny-fusion ~]# find . -perm /a+x


13. Find Read only files
Files with the owner's read permission set can be found using the command below

[root@sny-fusion ~]# find . -perm /u=r

find read only files

14. Find files based on permissions and replace their permissions
Wow, amazing, right? With a single command we can change the permissions of an entire set of files / directories.

[root@ArkIT-Serv ~]# find . -perm 777 -exec chmod 700 {} \;

[root@ArkIT-Serv ~]# find . -perm 777 -print0 | xargs -0 chmod 755

15. Search for a file in multiple directories at the same time

The find command accepts multiple paths; simply provide the paths one after another and find will search in all of them.

[root@ArkIT find]#find /root/find/ /root/ -name file1.txt


/root/find/file1.txt

/root/find/file1.txt

16. Delete files which are found by the search criteria

Along with the find command, make use of -exec to execute a follow-up command.

find /root/ -type f -name "*.txt" -exec rm -f {} \;

remove text files

find / -type f -name "*.txt" -print0 | xargs -0 rm -f

The same can be done using xargs as well.

17. Remove Empty files

If you want to delete empty files from multiple paths with a single command, use the command below.

find / -type f -empty -exec rm -f {} \;


delete empty files using find command
18. Delete empty directories from multiple paths
Empty directories are of no use on the system, so we can simply delete them. Deleting empty directories clears clutter and can be achieved using the command below.

find / -type d -empty -exec rmdir {} \;

19. Modified files list based on days (-mtime)

A file's data was last modified n*24 hours ago. This is an awesome option: identify the files modified a given number of days ago, which helps in problem solving when you want to know which files were recently modified by some other user. -mtime means file-modified time, measured in 24-hour periods (days).

find /root/find/ -mtime 10

lists files modified exactly 10 days ago; to give a range, for example between 10 and 20 days ago:

find /root/find/ -mtime +10 -mtime -20

20. Modified Files in Recent Minutes

Option 19 works in units of days, but -mmin works in minutes: a file's data was last modified n minutes ago. For example, -mmin -10 lists files modified within the last 10 minutes.

find /root/find/ -mmin 1


21. Find command – files based on user ownership
Search for files based on user ownership and find out how many files are owned by a particular user.

[root@ArkIT ~]#find / -user admin

/var/spool/mail/admin

22. Find files and directories based on group ownership

Locate files and directories based on their group ownership. In the example below, admin is the group name; the find command is very useful here.

find / -group admin

23. Large files in the system can be found using the -size option

The find command supports searching for files based on their size. Here M = MB and G = GB; a leading + means larger than the given size.

find / -size +100M

24. Specify how deep you want to search

Descend at most the given number of levels (a non-negative integer) of directories below the command line arguments.
-maxdepth 0 means only apply the tests and actions to the command line arguments.
find / -maxdepth 3 -name "*file"

25. Find files and directories owned by no user and no group

Files and directories which do not have a valid user or group assigned can be found using the command below.

find / -nouser -o -nogroup

Conclusion: There are any number of examples; the find command has so many options that they can't all fit in a single article. We will try to elaborate in upcoming posts.

Master The Skills Of Linux File System And Be Successful
BY ARK · PUBLISHED JULY 23, 2016 · UPDATED JULY 23, 2016
The Linux file system is a method to partition the hard disk drive into multiple partitions. Partitions are used to store data by creating a Linux file system in them. Basically, Linux file systems are of two types, as shown below.

Linux File System

Local file systems are used to format partitions into a usable state; without making a file system in a partition we can't store data. Just creating a partition leaves it RAW. Partitions are used to organize users' data on a hard disk.
When you make an extended (ext) file system it creates different types of blocks to segregate the stored data:
1. Master Blocks / Boot Blocks
2. Super Blocks
3. Inode Blocks
4. Data Blocks
Master Blocks / Boot Blocks: Only boot partitions contain master block data. In the remaining partitions the master blocks are empty.
Super Blocks: Just like an index to a book, the super block holds information such as:
 Utilized inode numbers
 Free inode numbers
 Utilized data blocks
 Free data blocks
Inode Blocks: The inode table (index table) holds all the information about files/directories, such as permissions, owner, group name, size and time stamps.
 4096 bytes is the default block size
 Each inode holds 15 data block pointers
If the filesystem is larger than about 100 MB the block size is 4096 bytes; if it is smaller, the block size is 1024 bytes.
Data Blocks: the actual storage of file contents.
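For example, the superblock and inode details of an existing ext filesystem can be inspected as follows (the device and file names are placeholders):

tune2fs -l /dev/sdc1        # dump superblock information: block size, inode count, free blocks
stat file1.txt              # show a file's inode metadata: size, owner, permissions, time stamps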
Below is the file system comparison in brief:

File System | Max File Size | Max Partition Size | Journaling      | Notes
Fat16       | 2 GB          | 2 GB               | No              | Legacy
Fat32       | 4 GB          | 8 TB               | No              | Legacy
NTFS        | 2 TB          | 256 TB             | Yes             | (For Windows compatibility) NTFS-3g is installed by default in Ubuntu, allowing Read/Write support
ext2        | 2 TB          | 32 TB              | No              | Legacy
ext3        | 2 TB          | 32 TB              | Yes             | Standard Linux filesystem for many years. Best choice for super-standard installations
ext4        | 16 TB         | 1 EB               | Yes             | Modern iteration of ext3. Best choice for new installations where super-standard isn't necessary
reiserFS    | 8 TB          | 16 TB              | Yes             | No longer well-maintained
JFS         | 4 PB          | 32 PB              | Yes (metadata)  | Created by IBM – not well maintained
XFS         | 8 EB          | 8 EB               | Yes (metadata)  | Created by SGI. Best choice for a mix of stability and advanced journaling

GB = Gigabyte (1024 MB)    TB = Terabyte (1024 GB)    PB = Petabyte (1024 TB)    EB = Exabyte (1024 PB)

How the Partitions take place

At any point in time there can be at most four partition slots:
Primary = 3 and Extended = 1, OR Primary = 4.
If you create a partition, numbers will be assigned as mentioned below.

All primary partitions are directly assigned numbers 1 – 4, whereas 3 primary + 1 extended will be numbered as shown below. The extended partition (number 4 here) is only a container; we can't make any file system on it directly, and the logical partitions created inside it are numbered from 5 onwards.

To create partitions we have to use the fdisk utility

[root@Techtutorials ~]# fdisk /dev/sdc


Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n


All primary partitions are in use
Adding logical partition 6
First sector (825344-10485759, default 825344):↵
Using default value 825344
Last sector, +sectors or +size{K,M,G} (825344-10485759, default 10485759): +100M
Partition 6 of type Linux and of size 100 MiB is set

Command (m for help): p

Disk /dev/sdc: 5368 MB, 5368709120 bytes, 10485760 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xead8a888

   Device Boot      Start         End      Blocks   Id  System


/dev/sdc1            2048      206847      102400   83  Linux
/dev/sdc2          206848      411647      102400   83  Linux
/dev/sdc3          411648      616447      102400   83  Linux
/dev/sdc4          616448    10485759     4934656    5  Extended
/dev/sdc5          618496      823295      102400   83  Linux
/dev/sdc6          825344     1030143      102400   83  Linux

Command (m for help): wq


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource
busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

By default, when the partition table is changed while the disk is in use, the kernel is not updated automatically; to make the kernel re-read the partition table we have to execute the command below.

# partprobe /dev/sdc

[root@Techtutorials ~]# mkfs.ext4 /dev/sdc1


mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33685504
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done


Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

Mount partition
The partition has been formatted with EXT4; to mount it permanently we have to add an entry to the /etc/fstab file.

[root@Techtutorials ~]# mkdir /data


[root@Techtutorials ~]# vi /etc/fstab
[root@Techtutorials ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Jun 22 11:14:58 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=5b0f4ed0-592e-4114-9a8e-10a7b99d2cd3 /boot                   xfs    
defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/sdc1       /data   ext4 defaults 0 0
[root@Techtutorials ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  3.3G   15G  19% /
devtmpfs               1.2G     0  1.2G   0% /dev
tmpfs                  1.2G   80K  1.2G   1% /dev/shm
tmpfs                  1.2G  8.9M  1.2G   1% /run
tmpfs                  1.2G     0  1.2G   0% /sys/fs/cgroup
/dev/sda1              497M  124M  373M  25% /boot

[root@Techtutorials ~]# mount -a


[root@Techtutorials ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  3.3G   15G  19% /
devtmpfs               1.2G     0  1.2G   0% /dev
tmpfs                  1.2G   80K  1.2G   1% /dev/shm
tmpfs                  1.2G  8.9M  1.2G   1% /run
tmpfs                  1.2G     0  1.2G   0% /sys/fs/cgroup
/dev/sda1              497M  124M  373M  25% /boot
/dev/sdc1               93M  1.6M   85M   2% /data

To mount a partition temporarily we have to use the mount command, and we unmount the partition using umount

[root@Techtutorials ~]# mount /dev/sdc1 /data

[root@Techtutorials ~]# df -h /data

Filesystem      Size  Used Avail Use% Mounted on

/dev/sdc1        93M  1.6M   85M   2% /data

[root@Techtutorials ~]# umount /data

To delete a partition follow the steps below

 Unmount the file system
 Remove the entry from the fstab file
 Delete the partition
 Update the kernel

[root@Techtutorials ~]# umount /data


[root@Techtutorials ~]# vi /etc/fstab
[root@Techtutorials ~]# mount -a
[root@Techtutorials ~]# cat /etc/fstab |grep /dev/sdc
##/dev/sdc1     /data   ext4 defaults 0 0
[root@Techtutorials ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): d


Partition number (1-6, default 6): 1
Partition 1 is deleted

Command (m for help): p

Disk /dev/sdc: 5368 MB, 5368709120 bytes, 10485760 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xead8a888

   Device Boot      Start         End      Blocks   Id  System


/dev/sdc2          206848      411647      102400   83  Linux
/dev/sdc3          411648      616447      102400   83  Linux
/dev/sdc4          616448    10485759     4934656    5  Extended
/dev/sdc5          618496      823295      102400   83  Linux
/dev/sdc6          825344     1030143      102400   83  Linux

Command (m for help): wq


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@Techtutorials ~]# partprobe /dev/sdc

Conclusion
A standard partition can be created using the fdisk utility. A standard Linux file system can't be increased or decreased, which is not flexible for a production environment.
20 Good Linux Interview Questions
and Answers
BY ARK · PUBLISHED SEPTEMBER 1, 2016 · UPDATED SEPTEMBER 2, 2016
In this article we are going to see interview questions and answers which were asked in one of the company interviews. Exclusive interview questions and answers for you: 20 Good Linux interview questions and answers.

20 Good Linux Interview Questions and Answers
1. How do you check memory stats and CPU stats?
Ans: Using the vmstat command we can check memory stats and CPU stats. We can also check memory usage and CPU usage in real time using the top command.
2. How do you change the default run level in Linux?
Ans: In RHEL/CentOS 5/6, by changing the value in the /etc/inittab file as mentioned below

[root@Arkit-RHEL6 ~]# grep id /etc/inittab


# Individual runlevels are started by /etc/init/rc.conf
id:5:initdefault:

3. What are the default ports used for SMTP,DNS,FTP,DHCP, SSH and
HTTP.?
Ans:
 SMTP = 25
 DNS = 53
 FTP = 20 and 21
 DHCP = 67 and 68
 SSH = 22
 HTTP = 80 and HTTPS = 443
4. How do you check which ports are listening on my Linux server?
Ans: Using the nmap, ss, netstat and lsof commands we can check which ports are listening on the local host.
Command examples:

# nmap -sT -O localhost 

# ss -tunlap

# netstat -anp

5. How do you add & change kernel parameters?

Ans: We can change kernel parameters using the /etc/sysctl.conf file and apply them with the sysctl command.
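A short sketch, using net.ipv4.ip_forward as an example parameter:

sysctl -w net.ipv4.ip_forward=1                      # change a kernel parameter at run time
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf   # make the change persistent
sysctl -p                                            # reload settings from /etc/sysctl.conf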
6. What is Puppet server?
Ans: Puppet is an open-source configuration management tool. It supports multiple operating systems, such as Unix-like systems and Microsoft Windows.
7. What are symbolic links and hard links?
Ans: A symbolic link is a separate file that references the actual file or directory by path, under another name (a nickname); symbolic links can point to files or directories and can cross file systems. A hard link is an extra directory entry for the same inode, so it must stay on the same file system and normally cannot be created for directories.
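A small sketch showing both link types (the file names are placeholders):

touch /tmp/file1
ln /tmp/file1 /tmp/file1.hard                        # hard link: same inode, same filesystem only
ln -s /tmp/file1 /tmp/file1.soft                     # symbolic link: a separate file that points to the path
ls -li /tmp/file1 /tmp/file1.hard /tmp/file1.soft    # the first two names share one inode number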
8. How do you execute more than one command or program from a single crontab entry?
Ans: It is entirely possible to run multiple commands from a single crontab schedule by adding a semicolon between the commands.

# crontab -e
* * * * * cat /etc/passwd; ls -l /etc/ >> /tmp/etcfiles

9. Write a command that will look for files with the extension “c” containing the string “apple”.
Ans:

# find / -name "*.c" -print | xargs grep apple

10. What, if anything, is wrong with each of the following commands


 ls -l-s
 cat file1, file2
 ls -s Factdir
Ans: There is no space in the ls -l-s command; the correct command is ls -l -s. In the cat command we do not use a comma to read multiple files; the correct command is cat file1 file2. Nothing is wrong with ls -s Factdir (assuming the Factdir directory exists).
11. What is the difference between cron and anacron.?
Ans: cron assumes the server/machine is running 24/7; if the machine is down at the
scheduled time, the job is simply missed. Anacron does not require the machine to be
online 24/7: missed jobs are run the next time the machine is switched on.
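Anacron jobs are defined in /etc/anacrontab; a sample entry (the script path is just a placeholder):

# period(days)   delay(minutes)   job-identifier    command
7                10               weekly-backup     /usr/local/bin/backup.sh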
12. What are the fields in the /etc/passwd file.? Please explain.?
Ans: 

# cat /etc/passwd

charan:x:1003:1003:Administrator from HYD:/home/charan:/bin/bash

 charan = User Name
 x = Password placeholder (the actual password hash is stored in /etc/shadow)
 1003 = UID (user ID)
 1003 = GID (primary group ID)
 Administrator from HYD = Description of the user (comment/GECOS field)
 /home/charan = Home directory of the charan user
 /bin/bash = Default login shell of the charan user
13. How is the environment set so that default file permissions are assigned to newly
created files.?
Ans: By setting the umask value; newly created files and directories derive their default
permissions from it.
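For example, with the common default umask of 022:

# umask 022
# touch newfile ; mkdir newdir
# ls -ld newfile newdir    # newfile gets 644 (666-022), newdir gets 755 (777-022)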
14. If you have only on IP address, but you want to host two web sites. What
will you do.?
Ans: Create name-based virtual hosts so that both websites share the same IP address
and port (virtual hosts on different ports are an alternative).
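A minimal name-based virtual host sketch for Apache (the host names and document roots are placeholders):

<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot /var/www/site2
</VirtualHost>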
15. How do you check for the httpd.conf consistency..?
Ans: Using the apachectl configtest command (or httpd -t) we can check the httpd.conf
file for consistency and syntax errors.
16. What is ‘.htaccess’ file in Apache web server.?
Ans: The .htaccess file is a Hypertext Access file used for per-directory Apache
configuration, such as URL redirection, access control and rewrite rules.
17. In ‘kill -9’ command, what is the ‘-9’ signal indicates..?
Ans: -9 represent SIGKILL which means Kill signal
18. What are the process states in Unix.?
Ans: 
 Running State
 Stopped State
 Sleeping State
 Uninterrupted sleep state
 Defunct state (zombie state)
19. List out different multi-processing modules in Apache web server
description about it.?
Ans: Apache provides several multi-processing modules (MPMs): mpm_prefork_module
(process-based), mpm_worker_module (hybrid process/thread) and mpm_event_module
(thread-based, optimized for keep-alive connections).
20. What are the different storage engines used in MySQL..?
Ans: 
Below are a few MySQL storage engines:
MyISAM.
InnoDB.
Memory.
CSV.
Merge.
Archive.
Federated.
Blackhole.
Thanks for the read. We hope the above questions and answers are useful for you. 20
Good Linux Interview Questions and Answers

Related Articles
Nagios Admin interview Questions and Answers
Few More Interview Questions and Answers
Linux Interview Questions and
Answers
BY ANKAM RAVI KUMAR · PUBLISHED OCTOBER 11, 2015 · UPDATED OCTOBER 11, 2015

Linux Interview Questions and Answers
1. You are tasked to build a new Linux workstation. User wants to install a word
processor and spreadsheets that offers a similar version for Microsoft Windows
system. Which office suite should you install?
Ans:- You should install Apache OpenOffice. It is a free and open-source office suite
and works on both Windows and Linux systems.

2. A technician uses the ps command to see what processes are running. When the
current running processes are shown, he notices a process that he terminated 10
minutes ago by using the kill command is still running. What command should he use
next to terminate this process?
Ans:- He should use the -9 argument with the kill command, which sends the SIGKILL
signal to the process. This will terminate the specified process immediately.
 

3. A technician quickly notices a kernel error message on the screen during the boot
process. Unfortunately, the error message disappears too quickly for the technician to
read it all. What log directory can the technician use to examine boot-time messages?
Ans:- Linux keeps almost all log files under the /var/log directory. Most boot messages
are kept in the kernel ring buffer, which can be viewed with the dmesg command. He can
also examine the /var/log/dmesg file, and for boot-time messages he can check the
/var/log/boot.log file.

4. A technician wants to view a list of all running processes on the server. How can he
do this?
Ans:- He should use the ps command with the -ef argument. The ps -ef command shows a
list of all running processes.

5. Where inittab file is located?


Ans:- The default location of the inittab file is the /etc directory. This file describes
which processes are started at boot time.

6. A technician want to boot the system in CLI mode on start up. Which runlevel
should he assign and in which file ?
Ans:- He could assign runlevel 3 as the default runlevel in /etc/inittab file.

7. What program a technician can use to analyze program’s core dump files and to
debug the application while it is actually running?
Ans:- He can use gdb program to analyze program’s core dump files and also debug
the application while it is actually running.

 
8. As a technician you want to shutdown the Linux system. What command should
you use?
Ans:- You could use shutdown command.

9.  As a technician you need to perform a scheduled shutdown that will occur in 10
minutes. What should you use to shut down the server in 10 minutes.?
Ans:- You can use the -h argument with the shutdown command and specify the time in
minutes with a + prefix. To shut down the system in 10 minutes you should run
shutdown -h +10.

10. What command will halt the system?


Ans:- halt will halt the system.

11. As a technician you need to restart the Apache Web Server. What command
should you use.?
Ans:- You could use following command to restart the Apache web server.

#service httpd restart

12. Which command will restart the FTP Server?


Ans:- #service vsftpd restart

Above command will restart the FTP server.

13. What line printer control command is used to control the operation of the line
printer system?
Ans:- lpc command is used with various argument to control the operations of line
printer system.

 
14. A technician wants to terminate an active spooling daemon on the local host
immediately and then disables printing for the specified printers. What command
should he use?
Ans:- He should use the lpc command with the abort option. lpc abort terminates an
active spooling daemon on the local host immediately and then disables printing for the
specified printers.

15. What print command stops a spooling daemon after the current job completes and
disables printing?
Ans:- The lpc stop command stops a spooling daemon after the current job completes and
disables printing.

16. What command allows you to directly see what jobs are currently in a printer
queue?
Ans:- The lpq command allows you to directly see what jobs are currently in a printer
queue.

17. A technician wants to halt the Linux server. What command should he use ?
Ans:- He can use init 0 command to halt the Linux server.

18. What line printer command lets you remove print jobs from the printer queue?
Ans:- The lprm command will let you remove print jobs from the printer queue.

19. What is the default text editor of Linux, which is included in almost every version
of Linux?
Ans:- The default editor of Linux is vi, which can be used to edit any ASCII text.
 

20. What command is used for combining a large number of files into one single file
for archival to tape?
Ans:- The tar (tape archive) command is used to combine a large number of files into a
single archive file, which can then be written to tape.

21. Where do all your configurations for your services, programs, and daemons reside
by default?
Ans:- By default, all configurations for your services, programs, and daemons reside in
the /etc directory.

22. What type of backup tape will only back up files that have changed since the
previous backup and clear the archive bit?
Ans:- An Incremental backup will backup only files that have changed since the
previous backup and clear the archive bit.

23. Which argument is used with tar command to create a new archive file?
Ans:- The -c argument is used to create a new archive file.

24. Which argument is used with tar command to extract the files from archive ?
Ans:- The -x argument is used with the tar command to extract the files from an archive.
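For example (backup.tar and the path are just placeholders):

# tar -cvf backup.tar /home/charan     # create an archive
# tar -tvf backup.tar                  # list the archive contents
# tar -xvf backup.tar                  # extract the archive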

25. What is default name of super or administrator account name in Linux?


Ans:- Super or administrator account in Linux is known as root user.

 
26. A technician is going to install Linux on a workstation. The technician wants to
customize the installation. What type of installation will the technician use to
customize the installation?
Ans:- Only a custom installation can be used to customize what is installed during an
installation. A custom installation will allow you to choose what packages you want to
install and what packages you don’t want to install.

27. Where is the password file for Linux located?


Ans:- The password file for Linux is located by default in the /etc/passwd location.

28. Which program is mostly used for remote login securely in Linux?
Ans:- SSH is used for secure remote login. SSH is the replacement for older insecure
services such as telnet.

29. What file contains a list of user names that is not allowed to log in to the FTP
server?
Ans:- The ftpusers file contains a list of usernames that a Linux administrator has
previously set to not allow specific users to login to the FTP server. ftpusers file is
located in /etc/vsftpd directory.

30. Which command can be used to schedule recurring tasks?


Ans:- Cron command can be used to set scheduled recurring tasks.

31. In which directory Linux store crontab files for particular users?
Ans:- /var/spool/cron is the directory where user crontabs are saved, with one crontab
file per user in which all of that user's cron jobs are stored.
 

32. What command should you use to activate a swap partition?


Ans:- swapon command is used to activate the swap partition.
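For example (/dev/sdb1 is a placeholder device):

# mkswap /dev/sdb1     # format the partition as swap
# swapon /dev/sdb1     # activate it
# swapon -s            # list the active swap areas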

33. A technician is verifying the network configuration of a Linux server. Which
command should he use to accomplish this?
Ans:- ifconfig (or the newer ip addr command) is the proper command to examine the
network configuration.

34. A technician wants to assign IP addresses to all the systems that will connect to
the server automatically. What type of server he should set up?
Ans:- He should set up DHCP Server which assigns IP address to client automatically
on start up.

35. A technician wants to add a new user to the current domain. What command will
the technician use to accomplish this?
Ans:- He should use the useradd command followed by the username, which will create a
new user or update default new user information. The password needs to be set
separately with the passwd command.
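For example (charan is just a placeholder user name):

# useradd charan
# passwd charan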

36. What option a technician can use with usermod command to unlock to user’s
password?
Ans:- The -U option is used with usermod command to unlock the user’s password.

37. What option of the mkfs command should you use to check the device for bad
blocks before building the file system?
Ans:- The –c option when used with the mkfs command will check the device for bad
blocks before building the file system.

38. What at command argument will send mail to the user when the job has
completed, even if there was no output?
Ans:- -m argument with at command will send mail to the users when the job has
completed even if there was no output.

39. A user wants to verify the current active shell. Which command will he use?
Ans:- He should check the SHELL environment variable, for example with echo $SHELL or
by inspecting the output of the env command.

40. What command can a technician use to search for a specific file?
Ans:- He can use either find or locate command to search for a specific file.

41. How can you send the output of a file to another file?
Ans:- The > redirection operator is used to send the output of a command to a file
(use >> to append instead of overwrite).

42. What is the -t option with fsck command used for?


Ans:- The –t option used with fsck is used to specify the type of filesystem to be
checked.

43. Which utility should you use to display the CPU processes?
Ans:- The top utility lets you see on one screen how much memory and CPU the system is
currently using, as well as the resource usage of each program and process.

 
44. What command can you use to obtain information about your serial port resource
usage, such as IRQ and IO addresses?
Ans:- setserial is a utility that you can use to obtain information about serial port
resource usage, such as IRQ and IO addresses.

45. A technician wants to delete a user account. Which command should he use?
Ans:- The userdel command is used to delete a user from the system.

46. Which command is used to change from one directory to another?


Ans:- The cd command is used to change from one directory to another and to navigate
the Linux hierarchical file system structure.

47. A user wants to copy a file from the /tmp directory to his home directory. Which
command would he use?
Ans:- He can use the cp command to copy files from one directory to another.

48. What is the file extension of Red Hat Package manager?


Ans:- RPM extension is associated with the Red Hat Package manager

49. What command can you use to mount a CD-ROM drive?


Ans:- mount command will mount the CD-ROM.

50. A technician wants to monitor connections to a Linux server. Which command should
the technician use?
Ans:- He should use the netstat command. netstat is a convenient way to see and monitor
both inbound and outbound connections. This command can also be used to view packet
statistics, so you can see how many packets have been sent and received.

51. Which command can a user use to exit a login shell?


Ans:- The logout or exit command will exit him from a login shell.

52. A technician is having problems connecting to a mail server. What command can
he use to test if the mail server is on the network?
Ans:- He can use the ping command to test connectivity between the local system and the
remote mail server.

Squid Proxy Server Installation RHEL7
BY ARK · PUBLISHED APRIL 17, 2016 · UPDATED JANUARY 9, 2019
Squid proxy server is used to filter web traffic and to reduce and fine-tune internet
bandwidth usage.
Squid was originally developed as the Harvest object cache, part of the Harvest
project at the University of Colorado Boulder. Further work on the program was
completed at the University of California, San Diego and funded via two grants
from the National Science Foundation. Duane Wessels forked the “last pre-
commercial version of Harvest” and renamed it to Squid to avoid confusion with
the commercial fork called Cached 2.0, which became NetCache. Squid version
1.0.0 was released in July 1996.
Squid is now developed almost exclusively through volunteer efforts.

Squid Proxy Server


 Packages : squid*
 Service Name: squid
 Default port : 3128
 Config File : /etc/squid/squid.conf
 Log file Path: /var/log/squid
 Environment : RHEL 7, Centos 7 and RHEL 6
Installation Required packages

# yum install squid*

Enable and start the Service


# systemctl enable squid

# systemctl start squid

Allow firewall port of squid

[root@server ~]# firewall-cmd --permanent --add-port=3128/tcp

success

[root@server ~]# firewall-cmd --reload

success

The default port of the squid proxy is 3128, which is why we have to allow port 3128.

Access Control List 


Open the configuration file and write ACLs as per your requirements. With ACLs we can
do many things:
1. Restricting un-wanted (BAD) URL’s
2. Restrict access to internet based on time period
3. Control Downloads
4. Restrict file type downloads
5. Allow Networks to enable Internet access
6. Download speed control

[root@server ~]# vim /etc/squid/squid.conf

To allow a network we have to write the below ACL lines:

acl localnet src 192.168.4.0/24 

http_access allow localnet

To allow ports using ACL

acl Safe_ports port 80 # http

acl Safe_ports port 21 # ftp

acl Safe_ports port 443 # https

acl Safe_ports port 70 # gopher

acl Safe_ports port 210 # wais


acl Safe_ports port 1025-65535 # unregistered ports

acl Safe_ports port 280 # http-mgmt

acl Safe_ports port 488 # gss-http

acl Safe_ports port 591 # filemaker

acl Safe_ports port 777 # multiling http

http_access deny !Safe_ports

Block bad sites

acl badsites url_regex "/etc/squid/badsites"


http_access deny badsites

Write the blocked sites in the file:

# cat /etc/squid/badsites
.facebook.com
.twitter.com
.youtube.com
.linkedin
.msn.com
.myspace.com
.flickr.com
.google

Block File downloads

acl blockfiles urlpath_regex "/etc/squid/blockfiles.acl"


http_access deny blockfiles
Block downloads by file type; below is an example file that denies torrent, mp3, mp4,
3gp, avi, mpg, mpeg and flv files.

# cat /etc/squid/blockfiles.acl
\.torrent$
\.mp3.*$
\.mp4.*$
\.3gp.*$
\.[Aa][Vv][Ii]$
\.[Mm][Pp][Gg]$
\.[Mm][Pp][Ee][Gg]$
\.[Mm][Pp]3$
\.[Ff][Ll][Vv].*$

Time-based access: the following ACL denies internet access during working hours,
from 10:00 to 19:00.

acl work_hours time 10:00-19:00


http_access deny work_hours

Restricting download speed with a delay-pool ACL:

acl speedcontrol src 192.168.4.0/24


delay_pools 1
delay_class 1 2
delay_parameters 1 524288/524288 52428/52428
delay_access 1 allow speedcontrol
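A brief note on the values above (a class 2 delay pool, as selected by delay_class 1 2): each delay_parameters pair is restore-rate/maximum in bytes, so 524288/524288 caps the aggregate bandwidth of the whole pool at roughly 512 KB/s, while 52428/52428 caps each individual client IP at roughly 51 KB/s.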

Go to the client side, change the proxy address in your browser and then try to access a
website. In Internet Explorer: Settings > Internet Options > Connections > LAN Settings >
provide the proxy IP address and port number.
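Alternatively, the proxy can be tested from a Linux client with curl (the proxy IP below is a placeholder on the 192.168.4.0/24 example network):

$ curl -x http://192.168.4.1:3128 http://example.com/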
Now watch the squid logs; /var/log/squid/ is the log file directory.
The logs are a valuable source of information about Squid workloads and performance.
The logs record not only access information, but also system configuration errors and
resource consumption (e.g. memory, disk space). There are several log files maintained
by Squid. Some have to be explicitly activated at compile time, while others can safely
be deactivated at run time.
 /var/log/squid/access.log : Most log file analysis programs are based on the
entries in access.log. You can use this file to find out who is using the squid server
and what they are doing.
 /var/log/squid/cache.log : The cache.log file contains the debug and error
messages that Squid generates. If you start your Squid using the default
RunCache script, or start it with the -s command line option, a copy of certain
messages will go into your syslog facilities. It is a matter of personal
preference whether to use a separate file for the squid log data.
 /var/log/squid/store.log : The store.log file covers the objects currently kept
on disk or removed ones. As a kind of transaction log it is usually used for
debugging purposes. A definitive statement about whether an object resides on your
disks is only possible after analysing the complete log file. The release
(deletion) of an object may be logged at a later time than the swap out (save to
disk).
HOW DO I VIEW SQUID LOG FILES /
LOGS?
You can use standard UNIX/Linux commands such as grep and tail to view log files. You
must log in as root or use the sudo command to view them.
Display log files in real time
Use tail command as follows:

~]# tail -f /var/log/squid/access.log

OR

~]$ sudo tail -f /var/log/squid/access.log

Search log files


Use grep command as follows:

~]#grep 'string-to-search' /var/log/squid/access.log

That’s all about squid proxy server installation and configuration.

Related Articles
 Installation and configuration of FTP server in RHEL7
 Collect system information using shell script in second
 Time server installation and configuration
 Audit Linux Machine Extremely helpfull
 Network File system shares configuration NFS
 Maria DB installation alternate to MySQL
 Firewalld Installation and Configuration RHEL7
 Analyse server performance RHEL7
Search Strings
squid proxy server installation and configuration
squid proxy in rhel7
restricted internet access using proxy
control internet download speed
URL Filtering using Linux proxy server
