Day 4


Welcome to Cyber Aces, Module 3!

This module provides an introduction to Bash scripting.
This training material was originally developed to help students, teachers, and
mentors prepare for the Cyber Aces Online Competition. This module focuses on the
basics of system administration and scripting. This session is part of Module 3,
System Administration. This module is split into three sections: Bash, PowerShell, and
Python. In this session, we will continue our examination of Bash.
The three modules of Cyber Aces Online are Operating Systems, Networking, and
System Administration.
For more information about the Cyber Aces program, please visit the Cyber Aces
website at https://CyberAces.org/.
In this section, you will be introduced to Bash and scripts, and we will quickly review
a few key topics from the Linux section in Module 1.
Individuals who need to do a single task on a single machine will often use the GUI
because of its ease of use. With just a few clicks of a mouse, you can easily
accomplish complex tasks. However, those same clicks become tedious when the
same task must be performed several times a day or performed on multiple
computers. In those cases, resourceful IT professionals will often resort to
developing command line scripts. A command line script is written once and can
easily be run repeatedly, or be scheduled to run automatically by the computer.
When you are supporting hundreds or thousands of computers, manually interacting
with the programs becomes impractical and scripting is the only option. This course
is intended to provide you with the tools you need to perform common
administrative functions in some of the most popular scripting environments. We will
examine using GNU Bash, Microsoft PowerShell, and Python scripting from the
command line to complete everyday administrative functions.
A BASH script is a script that is run through the BASH (Bourne Again Shell) Shell. In its
simplest form, a BASH script is nothing more than a list of commands that would
otherwise be typed interactively at a BASH shell prompt. Scripts make it easier to
create complex combinations of commands and reuse the series of commands. BASH
has the ability to capture, manipulate and branch execution depending on the results
of those commands. BASH can read and modify files in the file system. BASH can also
manipulate processes and automatically perform many routine tasks for you.
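The example script discussed below did not survive this transcript; here is a runnable sketch of what it might have looked like. The directory and file names are hypothetical stand-ins (a temporary directory is used instead of /var/log so the sketch is safe to try):

```shell
#!/bin/sh
# Sketch of the example script: change directory, then rename files.
# A temporary directory stands in for /var/log; file names are made up.
mkdir -p /tmp/ca_logs
cd /tmp/ca_logs
touch app.log
# Rename (technically move) the file to a new name
mv app.log app.log.old
```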
The #!/bin/sh on the first line specifies the interpreter to use. In this case, it specifies the
Bourne Shell, but could have been the Bourne Again Shell (/bin/bash), C shell
(/bin/csh) or any other shell or interpreter installed on the system. The code in
the file must be written for the specific interpreter. For example, if
#!/usr/bin/python was used in the first line, the remaining code would have to
be understood by the Python language interpreter.
In the example script, the second line starts with a "#". The leading hash or pound
sign (#) designates the line as a comment and the interpreter will ignore the line. This
is commonly used to add extra information at the top of a script, such as the author,
file version, and similar information. It is also used to add an additional description of
what is happening in the script so it is easier for someone else to understand.
The remaining lines are executed in order. The first command changes the current
directory to /var/log. The remaining lines rename files (or technically move them
to the new name).
Pipes are used to connect the STDOUT of one program to the STDIN of another,
creating a pipeline for data to flow through a series of programs. To use pipes, place
the pipe character ("|", or shift-\) between commands. Pipes are often used to filter
the output of commands, such as to search for a particular string, or to sort a set of
data. The programs commonly used for these tasks (such as "grep", "sort", and
"uniq") are often referred to as filters.
For example, the following command will read in the list of users on the system,
search them for the string ":0:" (identifying users with UID or GID 0), and then sort
them alphabetically:
$ cat /etc/passwd | grep :0: | sort
Note that STDERR is still sent to the display unless it is redirected elsewhere. This
allows you to see any errors or warnings that may occur without them becoming part
of the pipeline.
Redirection allows you to redirect the standard I/O streams to different locations,
such as to a file or a pipe. For example, you can redirect STDIN to read data from a
file instead of from the keyboard, redirect STDOUT to write to a file instead of the
screen, and redirect STDERR to hide its output (such as by sending it to /dev/null, a
black hole that discards any data it receives). Here are some examples:
Redirect STDIN from a file:
$ command < file
Redirect STDOUT to a file:
$ command > file
Append STDOUT to a file:
$ command >> file
Redirect STDERR to a file (note the file descriptor "2"):
$ command 2> file
Redirect STDOUT and STDERR to a file (the "2>&1" sends 2 to the same location as 1):
$ command > file 2>&1
These operators can be combined, as in:
$ command < infile > outfile 2>> errlog
We have ">" and ">>" for output and "<" for input, but what about "<<"? "<<" is used
for appended input. Using "<<", you can continuously append input until an End of
File marker is reached. Consider the following commands typed at a BASH prompt.
$ cat << MyMarker > myfile.txt
> This is the first line of input.
> This is also appended to the input.
> All lines are appended until the marker is received
> MyMarker
This will cause "cat" to continuously accept input until a line containing nothing
except the end of file marker is received; because of the redirection, the input is
written to myfile.txt rather than the screen. In the example, we used "MyMarker" as
the marker. You will often see "EOF" used as the end of file marker.
Another important syntax to learn is the use of the "`" (pronounced backtick or back
quote). This is not a single quote; it is the character that shares a key with the tilde
("~"). This character tells BASH to execute the command inside the backticks and
substitute the result of that command on the command line. For example, the
following command is equivalent to typing "ls -la":
$ ls `echo '-la'`
This is a very powerful feature of BASH and is used quite often in BASH
programming. For example, if you want to execute the "ifconfig" command, grab your
IP address from its output and then ping it, you can use command substitution to
accomplish that. First, we should figure out how to grab our IP address. One option is
to use the "ifconfig eth0" command, use the "grep" command to isolate the line that
contains the IP address, and then use the "awk" command to grab the second field
(containing the IP). Like this:
$ ifconfig eth0 | grep 'inet ' | awk '{print $2}' | sed
's/addr://'
Note: Your ethernet adapter may have a different name,
such as 'ens33'. You can verify your ethernet adapter
name by running "ifconfig" with no options.
The output of that string of commands is the IP address that is assigned to "eth0" (the
first network adapter). Now, if you want to PING the IP address returned by that
command, you could put those commands inside backticks like this:
$ ping `ifconfig eth0 | grep 'inet ' | awk '{print $2}' |
sed 's/addr://'`
The exclamation point, commonly referred to as bang for brevity, can be used to
repeat the previous command or reuse parameters passed to previous commands.
It is bad practice to use the root account for day-to-day tasks, but you will sometimes
need to access sensitive files or commands. When you try to access those resources
without the proper permissions, you will get an access denied error. Instead of
retyping the entire command (or hitting the up arrow and moving to the beginning of
the line) you can simply type "sudo !!". The bang bang will be replaced with the
previous command. Of course, this isn't just limited for use with sudo, but it is the
most frequent use of this shortcut.
Two similar shortcuts include !$ and !*. The !$ will be replaced with the last
parameter of the previous command and the !* will be replaced with all the
parameters of the previous command.
Example of searching a file and then deleting it:
$ grep somesearchstring myfile.txt
$ rm !$
rm myfile.txt
Example of fixing a typo with !*
$ vi cd /home/tm
$ !*
cd /home/tm
The caret (also called hat or circumflex) can be used to rerun the previous command
but replace a string within the command first. This is a very handy and efficient way
to fix typos or to repeat a previous command but without a lot of retyping or editing.
The syntax is:
^search^replacement
The following mistyped command can be quickly fixed without retyping (there is a
misspelling of the word install).
# yum isntall firefox
No such command isntall
# ^sn^ns
yum install firefox
Notice in this case, and with the bang commands on the previous page, that the
updated command is echoed on the command line for you.
This is also very handy to modify a previous command without a lot of retyping or
editing.
$ cat /var/log/myservice/logs/2010/10/24.php | grep error
| cut -d: -f 4
$ ^24^25
cat /var/log/myservice/logs/2010/10/25.php | grep error |
cut -d: -f 4
Here are three challenges for you to complete using the concepts learned in this
tutorial:
1. Write a command to write "line 1" to a new file. Then write a command that
appends "line 2" to the file.
2. Use cat to copy a file using redirection (Hint: you will need both the < and >
operators).
3. Using command substitution, write a command that finds the "man" command
and gets the permissions on the file (Hint: the "which" command is useful here).
There are multiple answers for these challenges. If your answer differs from ours,
great! Here are our solutions for the challenges.
1. Write a command to write "line 1" to a new file. Then write a command that
appends "line 2" to the file.
$ echo line 1 > myfile.txt
$ echo line 2 >> myfile.txt
The first command will create the file and overwrite the file if it already exists.
The second command will append the text to the file and will not overwrite the
file.
Another, more advanced solution using substitution is below. Note: the text
"line 1" would be appended to an already existing file; it would not overwrite
the file.
$ echo line 1 >> myfile.txt
$ ^1^2
2. Use cat to copy a file using redirection (Hint: you will need both the < and >
operators).
$ cat < inputfile > outputfile
The text in the input file is fed into cat and is sent to STDOUT. The > is then
used to write the file. The following command would accomplish the same
task, but it doesn't demonstrate the use of an input file.
$ cat inputfile > outputfile
3. Using command substitution, write a command that finds the "man" command
and gets the permissions on the file (Hint: the "which" command is useful here).
$ ls -l `which man`
The -l (lowercase L) option will give us the long format which includes the
permissions.
Consider the following script
#!/bin/bash
echo 'file redirection is easy' > file1
cat < file1 > file2
echo "as easy as pie" >> file2
cat file2 | sort
What is the output from the last line of the script?
a. as easy as pie
file redirection is easy
b. file redirection is easy
c. as easy as pie
d. file redirection is easy
as easy as pie
What is the output from the last line of the script?
The answer is A
as easy as pie
file redirection is easy
• Create file1 with the contents "file redirection is easy"
• Take file1 as input and write it to file2 (essentially a copy)
• Append "as easy as pie" to file2
• Output the sorted contents of file2
Congratulations! You have completed the session on Linux review and introduction to
Bash scripting.
In the next session, we will discuss variables and script parameters.
Welcome to Cyber Aces, Module 3! This module provides an introduction to Bash
scripting. The discussion in this session will be about variables and parameters.

In this section, you will be introduced to variables, syntax, and script parameters.

Variables are representations of memory locations in which we can store arbitrary
values. For example, at a Bash prompt we can create a variable called "MESSAGE" and
assign it the value "Hello World" by typing the following:
$ MESSAGE="Hello World"
We can then retrieve the contents of that variable with the "echo" command:
$ echo $MESSAGE
Note that when assigning a variable, you do not prefix its name with a dollar sign, but
when accessing it you do. Also note that there is no whitespace between the variable
name, the equals sign, and the value.
We can also store the results of executed programs in variables by using inline
process execution (aka Command Substitution). For example:
$ RESULT=`ping -c 1 192.168.100.1`
$ echo $RESULT
PING 192.168.100.1 (192.168.100.1): 56 data bytes
64 bytes from 192.168.100.1: icmp_seq=0 ttl=64
...trimmed for brevity...

As an alternative to inline commands, you could place your commands inside of
another Bash script and pass the output of one command to your new script.
Parameters in a BASH script are referred to by their position on the command line.
"$1" is the first parameter, "$2" is the second parameter, etc. "$@" is all of the
parameters that were passed on the command line.
The contents of our sample script, test.sh:
#!/bin/sh
echo "all parameters are: $@"
echo "number of parameters: $#"
echo "the first parameter is: $1"
echo "the second parameter is: $2"
A sample execution of our script:
$ sh test.sh aaa bbb ccc
all parameters are: aaa bbb ccc
number of parameters: 3
the first parameter is: aaa
the second parameter is: bbb

Environment variables are variables used by the system and applications to read
configuration settings from the system. The environment variables communicate
information such as the user's home directory, the current shell, the username, and
many other settings. This makes it easier for applications to interact with the system
in a consistent fashion.
To set an environment variable we use the command "export". The variable will then
be set for all processes launched from the parent process, i.e., all processes
launched from the shell where the "export" command was used.
Some environment variables are predefined.
PWD: The current directory
HOME: The path to the user's home directory
SHELL: The path to the current shell (e.g. /bin/bash)
PATH: The directories the shell will search to look for executables
USER & USERNAME: The name of the current user
You can look at all the environment variables set in your shell by typing "env".
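A short sketch of both ideas; the variable name MYSETTING is made up for illustration:

```shell
# Print a couple of the predefined environment variables
echo "Home directory: $HOME"
echo "Current shell: $SHELL"
# Set a new environment variable; processes launched from this shell inherit it
export MYSETTING="debug"
# A child shell (a new process) can see the exported variable
sh -c 'echo "Child sees: $MYSETTING"'
```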

The "let" command allows you to do mathematical expressions in BASH. Without it,
variables are treated as text. For example:
$ A=44
$ echo $A
44
$ B=$A+1
$ echo $B
44+1
If you want to evaluate a mathematical expression, use the "let" command.
$ A=44
$ echo $A
44
$ let B=$A+1
$ echo $B
45

The left column is pretty straightforward, but the right column is probably new to
you. These operators modify the variable itself: "let A+=5" is shorthand for
"let A=A+5". Here are examples of the different operators in the right column:
$ A=20
$ let A+=5
$ echo $A
25
$ A=20
$ let A-=5
$ echo $A
15
$ A=20
$ let A/=5
$ echo $A
4
$ A=20
$ let A%=6
$ echo $A
2

What is the output from the following commands?
$ A=5
$ B=A+5
$ echo $B
a. 10
b. 5
c. A+5
d. 55
What is the output of the following commands?
$ DATA='some stuff'
$ /bin/sh
$ echo $DATA
a. some
b. stuff
c. some stuff
d. A blank line
What is the output from the following commands?
$ A=5
$ B=A+5
$ echo $B
Answer is C: A+5
The let command was not used so the input is treated as text "A" instead of
the value of the variable A
What is the output of the following commands?
$ DATA='some stuff'
$ /bin/sh
$ echo $DATA
Answer is D: A blank line
Variables are only available in the current process unless the export command
is used

Congratulations! You have completed the session on Bash variables and parameters.

In the next session we will examine flow control in Bash.

Welcome to Cyber Aces, Module 3! This module provides an introduction to Bash
scripting. In this session we will be discussing flow control.
These operators are usually used in a branching mechanism, so that code is executed
only if certain conditions are true.
Not only can you compare numbers, but strings (text) can be compared as well.

Description        Number Operator   String Operator
Greater Than       -gt               >
Less Than          -lt               <
Equal To           -eq               ==
Not Equal To       -ne               !=
Less or Equal      -le
Greater or Equal   -ge
The map of output for a logical AND given inputs of A and B is:
A B Output
True True True
True False False
False True False
False False False
The map of output for a logical OR given inputs of A and B is:
A B Output
True True True
True False True
False True True
False False False
The conditional operators "&&" and "||" allow you to evaluate multiple logic tests
surrounded by square brackets. "&&" is a logical "AND" while "||" is a logical "OR".
For example, the following section of pseudo-code will only evaluate if both tests are
true:
if [ $today == "Sunday" ] && [ $month == "April" ]
then
echo "It is a Sunday in April!!"
fi
This section of pseudo-code will execute if either test evaluates to true:
if [ $today == "Sunday" ] || [ $month == "April" ]
then
echo "It is a Sunday in any month OR it is
ANY day in the Month of April!!"
fi
Consider the following script named "addnums.sh"
#!/bin/bash
a=$1; b=$2
let c=$a+$b
if [ $c -eq 10 ]
then
let c=500
fi
if [ $c -gt 400 ] && [ $c -lt 600 ]
then
let c=1000
fi
echo $c
1) What is the output of addnums 4 9
2) What is the output of addnums 400 90
Consider the following script named "addnums.sh"
#!/bin/bash
a=$1; b=$2
let c=$a+$b
if [ $c -eq 10 ]
then
let c=500
fi
if [ $c -gt 400 ] && [ $c -lt 600 ]
then
let c=1000
fi
echo $c
1) What is the output of addnums 4 9
Answer: 13
2) What is the output of addnums 400 90
Answer: 1000
The example For loop will read each word of command output or a series/sequence
of numbers. In the example above, the file names are consumed by the For loop. The
variable $FILE will contain each filename, one at a time. This loop simply prints the
file names.
The curly braces ({}) are used to create a sequence. The second loop will count from 1
to 5 by 1. The sequence operator will also take a "step" option, so to count from 0 to
10 by 2 the proper input would be: {0..10..2}.
The two For loops are represented in two different ways. The first has each command
on a separate line. The second option allows all the commands to be entered on the
same line, where each command is separated by a semicolon (;).
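The example loops themselves did not survive this transcript; a sketch of what they might have looked like (printing file names, then counting with a sequence):

```shell
#!/bin/bash
# First form: each command of the loop on its own line.
# $FILE holds one file name from the directory listing on each pass.
for FILE in `ls`
do
    echo $FILE
done
# Second form: the same structure on one line, commands separated by semicolons
for i in {1..5}; do echo $i; done
# The sequence operator with a "step": count from 0 to 10 by 2
for n in {0..10..2}; do echo $n; done
```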
The example While loop will read a number from a file and increment it until it equals
20. In each iteration of the loop it prints the number on the screen. If the initial
number read from the file is greater than 20 then the loop will never be entered and
there will be no output.
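The While loop described above is also missing from this transcript; here is a sketch under those assumptions (the file name /tmp/number.txt is hypothetical):

```shell
#!/bin/bash
# Sketch of the While loop described above: read a number from a file,
# then increment and print it until it reaches 20.
echo 15 > /tmp/number.txt
# Read the starting number from the file
read NUM < /tmp/number.txt
# If the number read is 20 or more, the loop body never runs
while [ $NUM -lt 20 ]
do
    echo $NUM
    let NUM+=1
done
```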
There are several ways to capture and process the output of a command or the
contents of a file. We will talk about two methods here. The first method allows you
to process one word at a time separated by spaces. The second method will process
an entire line at a time marked by a line feed. Before we can process the output, we
need to capture the output. We already talked about one of these methods when we
discussed the use of variables. We can capture the output of one command in a
variable with inline process execution. Let's suppose that we want to capture a listing
of all the files in the current directory and then process that listing repeatedly. We can
capture a directory listing to a variable then process those results several times.
Consider the following bash script that will "cat" every file in the directory, printing its
contents to the screen:
#!/bin/bash
# Above is the 1st line of all bash scripts and specifies the interpreter
# Executes ls using the backtick. Results stored in DIRLIST variable
DIRLIST=`ls`
# The echo command prints the value of DIRLIST for the For loop
# Each time through the loop $i will contain one of the lines from the
# directory listing
for i in `echo $DIRLIST`
# 'do' marks the beginning of the block of code to execute in our FOR loop
do
# Block contains one line that prints the contents of each file
cat $i
#done marks the end of our FOR loop
done

But this processes each word separated by spaces in the output. If you want to parse
an entire line, you can use the while loop with the "read" command, which will assign
a full line to a variable as demonstrated in the second script above.
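That second, line-at-a-time script is not reproduced in this transcript; a sketch of the approach, with a hypothetical file name:

```shell
#!/bin/bash
# Process output one full line at a time with "read".
# Capture a long directory listing to a file (the file name is made up)
ls -l > /tmp/listing.txt
# Each iteration assigns one complete line to $LINE
while read LINE
do
    echo "Line: $LINE"
done < /tmp/listing.txt
```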
You can run several commands on the same line by separating them with a semicolon
(";"). For example:
$ cd ~; ls -la ; cd /var/log; cat messages
This would change to your home directory, list the files, change to the "/var/log"
directory and print the contents of the "messages" file to the screen.
As mentioned earlier, conditional operators can also be used between commands on
the command line. With "&&", the second command will only be run if the first
command succeeds, and with "||" the second command will only be run if the first
command fails. For example, the following line will print an error if there is a problem
reading the "messages" log:
$ cat /var/log/messages || echo "Error reading messages"
This functionality is due to the "short-circuit" nature of logical evaluation. For
example, with the Logical AND the result will be False if any input is False, so it will stop
evaluating input once it reaches the first False result. Similarly, a Logical OR will return
True if any input evaluates to True. Once it encounters the first True input it can stop
evaluating the inputs.
$ echo test write > /etc/testfile && rm /etc/testfile &&
echo Everything Worked
This command has three parts and each piece will execute if the previous was
successful. If the creation of /etc/testfile works then, and only then, will the file be
deleted. Only if the deletion was successful (the first command would have to have
been successful as well), would the words "Everything Worked" be output.
Congratulations! You have completed the session on flow control in Bash.
In the next session, we will discuss parsing and searching using Bash.
Welcome to Cyber Aces, Module 3! This module provides an introduction to Bash
scripting. In this session we will be discussing parsing and searching with Bash.
In this section, you will be introduced to ways to parse input files or output from
other commands. We will also cover ways to search for files based on their attributes,
searching within files, and a very powerful way of searching using regular expressions.
You can grab portions of text out of a string using the "cut" command. The
"-d" parameter specifies the delimiter to use to separate the string, and the "-f"
parameter specifies the field number to take from the delimited string. Consider
the following portion of code:
$ A='THIS$IS^A$TEST$STRING$EXAMPLE'
$ Fld1=`echo $A | cut -d$ -f1`
$ Fld2=`echo $A | cut -d$ -f2`
$ Fld3=`echo $A | cut -d$ -f3`
$ echo "The first field separated by a dollar sign is $Fld1"
$ echo "The second field separated by a dollar sign is $Fld2"
$ echo "The third field separated by a dollar sign is $Fld3"
$ Fld4=`echo $A | cut -d^ -f1`
$ echo "The first field separated by a carat (^) is $Fld4"
This will result in the following output:
The first field separated by a dollar sign is THIS
The second field separated by a dollar sign is IS^A
The third field separated by a dollar sign is TEST
The first field separated by a carat (^) is THIS$IS
This is very useful when you want to grab a piece of text from the output of a
command or a file.
"AWK" is a very powerful command that has a scripting language all its own. The full
use of AWK is beyond the scope of this introduction, but we will give examples of a
few common uses of AWK in scripting. A frequent use of the AWK command in scripts
is to isolate portions of text. Consider this code executed at a Bash prompt:
$ echo "this is a test" | awk '{print $2,$4};'
is test
The AWK script contained inside the braces prints the second field and the
fourth field of AWK's input, resulting in "is test" being printed to the screen.
Check out these other creative ways to use AWK to manipulate text.
https://www.redsiege.com/ca/awk
The "grep" command is used to find all instances of a specific string in files. Grep will
accept a regular expression and a list of files as its parameters.
Imagine the following directories store logs that might have recorded an attacker's
IP address:
• /var/log/httpd/accept/log
• /var/log/ossec/alerts/log
You want to find all the files that contain the IP address 192.168.1.1. You could use
the following grep statement:
$ grep -iE "192\.168\.1\.1" /var/log/*/*/log/*
It is worth mentioning that if you're going to search for IP addresses in logs, you
should be aware of the format of the IP addresses in your logs. For example, an
application may record your IP address as 192.168.1.1, or it might record it as
192.168.001.001. Take a few minutes to review your data source and make sure you
build your regular expressions properly.
There are a number of special characters used in Regular Expressions that
allow for very granular searches.
^ The beginning of the line
$ The end of line
| An Or
[] A set of characters; it can contain multiple characters or ranges, e.g. to match hex digits: [0-9a-f]
- A range in a set (see above)
^ Inverts a set. For example, to match anything that isn't a number use [^0-9]
. Any single character
* The preceding item will be matched 0 or more times
+ The preceding item will be matched 1 or more times
\ Used to "escape" another character so it will not be interpreted as a special
character, e.g. an IP address: 10\.10\.10\.10
{,} Used to match the preceding item a specific number (or range) of times
Regular Expressions are extremely useful and powerful. Getting a perfect match takes
some time, so we often settle for a "good enough" search for the sake of speed and
efficiency. Each example is explained below.
Social Security Number is in the format of ###-##-#### but the dashes can be
removed or be replaced with spaces. So we need 3 numbers, an optional dash or
space, 2 more numbers, an optional dash or space, and another 4 numbers.
[0-9]{X} will match exactly X digits
[- ]? will accept a dash or a space if one exists
This will match input in any of the following formats:
123-45-6789
123 45 6789
123456789
123 45-6789
12345 6789
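Putting those pieces together gives a pattern like [0-9]{3}[- ]?[0-9]{2}[- ]?[0-9]{4}; a quick check with grep -E (the sample numbers are made up):

```shell
# Each of these formats matches the Social Security Number pattern
echo "123-45-6789" | grep -E '[0-9]{3}[- ]?[0-9]{2}[- ]?[0-9]{4}'
echo "123 45 6789" | grep -E '[0-9]{3}[- ]?[0-9]{2}[- ]?[0-9]{4}'
echo "123456789"   | grep -E '[0-9]{3}[- ]?[0-9]{2}[- ]?[0-9]{4}'
```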
The credit card example is very similar. We need a number in the form of ####-####-
####-#### but the dash may be omitted or replaced with spaces.
([0-9]{4}[- ]?) looks for a set of 4 numbers followed by an optional dash or
space
([0-9]{4}[- ]?){3} matches the set above three times. It would match the first
12 (4*3) digits in our credit card number. We need a final four digits ([0-9]{4})
to complete our credit card number
The IPv4 address is quite similar to the Credit Card in that we look for the "###."
three times before looking for the final ###. Of course, each octet can contain
between 1 and 3 numbers. Also, the dot is a special character and we need to match
the dot exactly (not any character) so we have to escape it with the backslash (\).
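A sketch of that IPv4 pattern in use with grep -E (the escaped dot keeps "." literal):

```shell
# ([0-9]{1,3}\.){3} matches "###." three times, then the final octet follows
echo "10.10.10.10" | grep -E '([0-9]{1,3}\.){3}[0-9]{1,3}'
# A line with no IP address produces no match (grep exits non-zero)
echo "no address here" | grep -E '([0-9]{1,3}\.){3}[0-9]{1,3}' || echo "no match"
```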
Another routine task is to perform some action on a set of files that match some
criteria. For example, it may be necessary to move all files that are bigger than a
specific size or older than a specific date to an archived location.
An easy way to do this is with the "-exec" parameter of the find command. For
example, the following command finds all files in your home directory that end in
".txt" (case insensitive because it is "-iname" instead of "-name") and print the
contents of those files to the screen with cat. The brackets '{}' are replaced with
results of the 'find' command.
$ find ~ -iname '*.txt' -exec cat '{}' \;
Read this article on the use of the find command:
http://dsl.org/cookbook/cookbook_10.html
This works fine in many circumstances. However, a very long list of file names from
the find command will cause find to produce an error rather than execute the
commands. Find is also limited to executing the commands one at a time. If you run
into either of these problems, you can use "xargs" to execute your code. "xargs" adds
the output of the previous command to the end of a second command as its
parameters. So we could rewrite our find command above like this:
$ find ~ -iname '*.txt' | xargs cat
Which of the following regular expressions would find Orville or Wilbur Wright on a
line by itself?
a. $(Orville|Wilbur) Wright^
b. ^(Orville or Wilbur) Wright$
c. ^(Orville|Wilbur) Wright$
d. $[Orville|Wilbur] Wright$^
Which regular expression would match everything between double quotes in this
example text?
His nicknames are "Matt" and "Dawg"
a. "[^"]+"
b. "[A-Z]*"
c. ".*"
d. "*"
Which of the following regular expressions would find Orville or Wilbur Wright on a
line by itself?
Answer C
^(Orville|Wilbur) Wright$
The ^ matches the beginning of the line and the $ matches the end
The parenthesis used with the pipe (OR) will match either full word
Which regular expression would match everything between double quotes?
Answer A
"[^"]+"
This will match a double quote, any text that isn't a double quote, and then the
final double quote and find both Matt and Dawg
The ".*" will work to a point, but it wouldn't work properly on this example, as it
would grab all of this text (including the "and"):
"Matt" and "Dawg"
Regular Expressions are, by default, greedy, meaning they will grab as many
characters as possible. This is why ".*" also grabs the "and": the match starts with
the double quote before the M and continues until the double quote after the g.
Congratulations! You have completed the session on parsing and searching using
Bash.
In the next session we will discuss practical uses for Bash and Bash scripts.
Welcome to Cyber Aces, Module 3! This module provides an introduction to Bash
scripting. In this session we'll use the knowledge we've gained in some practical
examples.
This training material was originally developed to help students, teachers, and
mentors prepare for the Cyber Aces Online Competition. This module focuses on the
basics of system administration and scripting. This session is part of Module 3,
System Administration. This module is split into three sections, Bash, PowerShell, and
Python. In this session, we will continue our examination of Bash.
The three modules of Cyber Aces Online are Operating Systems, Networking, and
System Administration.
For more information about the Cyber Aces program, please visit the Cyber Aces
website at https://CyberAces.org/.
In this section we will go through some practical scripts.
One of the cardinal rules of information security is to "know thy system". This means
that you know the processes and files that are executing or stored on your system in
a normal state. Then, when an abnormal condition arises, you will recognize the
anomaly. Knowing thy system also means that you monitor the logs on your system.
In this section, we will look at useful techniques when processing large numbers of
log files.
One of the common ways to discover attacks and compromises is to look for things
that are out of the ordinary. Looking at SSH logs and seeing an increased number of
failed login attempts can show that the system is under attack and could potentially
lead you to compromised systems if the login attempts are coming from internal
systems.
$ cat /var/log/secure | grep -E "Failed password for [a-z]+" | wc -l
This command will output the file /var/log/secure and pipe it into grep. The grep
command will search for "Failed password for " followed by one or more lowercase
letters. The results are piped into wc with the -l option to count the number of
lines of output.
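To see the pipeline at work without a real /var/log/secure, you can point it at a few hypothetical sshd log lines (the hostnames, PIDs, and addresses below are made up for illustration):

```shell
# Build a small sample log (contents are invented for illustration)
log=$(mktemp)
cat > "$log" <<'EOF'
Apr  1 10:00:01 host sshd[100]: Failed password for root from 10.0.0.5 port 22 ssh2
Apr  1 10:00:03 host sshd[101]: Failed password for tim from 10.0.0.5 port 22 ssh2
Apr  1 10:00:05 host sshd[102]: Accepted password for tim from 10.0.0.9 port 22 ssh2
EOF

# Same pipeline as above, pointed at the sample file; prints 2
cat "$log" | grep -E "Failed password for [a-z]+" | wc -l
```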
The script explained:
#!/bin/bash
# Above is the standard first line of a script
# The file we will be working on
file=/var/log/secure
# The for loop will operate on each line of output using the variable $failed.
# The output is from:
# - cat the file
# - search the file using a regex search (-E) for "Failed password for "
#   followed by alpha characters and only return the match, not the full line
#   of text (-o)
# - cut out the fourth space-separated field (the username)
# - sort the output and get unique lines
# Each time through the loop the $failed variable will contain a username,
# such as "root"
for failed in `cat $file | grep -oE "Failed password for [a-z]+" | cut -f4 -d' ' | sort | uniq`
do
    # Output the username, then the output of the command in backticks.
    # The grep command searches the file again for lines matching the
    # current failed user and returns the number of lines matched (wc -l). The
    # space is added after $failed so that user tim won't also match timmy
    echo $failed `grep "Failed password for $failed " $file | wc -l`
# all done with the for loop
done
This example uses nested loops, or one loop inside another loop. The third octet is a
number between 0 and 6 and the fourth is between 1 and 254 (Note: we are
intentionally skipping 0 and 255).
The ping command will ping the address twice (-c 2) and all output is discarded
(redirected to /dev/null). If the ping is successful (at least one reply is received),
the echo command will execute and display the IP address that is up.
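The script being described can be sketched as follows. The network prefix 10.10 is an assumption for illustration (any prefix would do), and note that sweeping 7 x 254 addresses takes a while:

```shell
#!/bin/bash
# Simple nested counting loops: sweep 10.10.0.1 through 10.10.6.254
# (the 10.10 prefix is an assumption for illustration)
for third in {0..6}
do
    for fourth in {1..254}
    do
        # ping twice (-c 2) and discard all output; on success, report the host
        if ping -c 2 10.10.$third.$fourth &> /dev/null
        then
            echo "10.10.$third.$fourth is up"
        fi
    done
done
```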
SED is a handy command used to modify files or output. It is an extremely powerful
tool and even has its own scripting language.
One of the nicest features of SED is its ability to modify a file and create a backup.
This is a useful safety tool in case the search and replace is typed incorrectly. If you do
not want to create a backup you can use -i'' (that's two single quotes not a single
double quote) to specify no backup extension.
If we want to make a webserver listen on another port we can use sed to find the
"Listen" line in the configuration file and replace it. If we use a regular expression we
don't need to know the original listening port.
Command:
sed -i'.bak' -E 's/^Listen [0-9]+/Listen 8080/' httpd.conf
Command Explained:
-i'.bak' create a backup file with the extension .bak
-E use extended regular expressions in the search
s perform a search and replace
/^Listen [0-9]+/ Look for lines starting with the word Listen,
a space, followed by one or more digits
/Listen 8080/ Replace with the literal string "Listen 8080"
httpd.conf The file to modify
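You can try this safely on a throwaway copy of the file. The configuration content below is invented for illustration:

```shell
# Work on a scratch copy, not a real Apache config
cd "$(mktemp -d)"
printf 'Listen 80\nServerName www.example.com\n' > httpd.conf

# Replace the port and keep a backup of the original
sed -i'.bak' -E 's/^Listen [0-9]+/Listen 8080/' httpd.conf

cat httpd.conf      # now contains: Listen 8080
cat httpd.conf.bak  # the original file, still: Listen 80
```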
We can execute this remotely on a machine with SSH. If you pass a command to SSH,
it will execute that command on the remote host after it logs in. For example:
$ ssh username@host 'ls -al'
This command will log in to 'host' with the username 'username' and will execute the
command 'ls -al' on the remote host. But what about entering the password? You
really don't want to put your password in clear text into a script file. Instead, you
would configure SSH private keys so that you can login remotely using the key pairs
rather than a password.
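Setting that up looks roughly like the following. The key file name is arbitrary, and the ssh-copy-id step is shown commented out because it needs a reachable remote host:

```shell
# Generate a passwordless RSA key pair; timpriv.key is an arbitrary name
ssh-keygen -t rsa -b 2048 -f timpriv.key -N '' -q

# One-time step per host: install the public key on the remote account.
# After this, 'ssh -i timpriv.key username@host' logs in without a password.
# ssh-copy-id -i timpriv.key.pub username@host
```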
Now, combining this with our script above that goes through all of the computers, we
come up with the following:
#!/bin/bash
# Simple counting loops
for thirdoctet in {0..6}
do
    for lastoctet in {1..254}
    do
        ssh -i timpriv.key [email protected].$thirdoctet.$lastoctet "sed -i'.bak' -E 's/Listen [0-9]+/Listen 8080/g' /etc/httpd.conf"
    done
done
As a system administrator, you may want to restart a service on multiple computers
at one time. Our sample script above can easily be modified to accomplish this. The
init.d system is used to start and stop services on many Linux distributions. To restart
the HTTPD service on all of the computers in our IP range, we could modify our script
as follows:
#!/bin/bash
# Simple counting loops
for third in {0..6}
do
    for fourth in {1..254}
    do
        ssh -i timpriv.key [email protected].$third.$fourth "/etc/init.d/apache2 restart"
    done
done
We could even combine the two commands with an && and have it modify the config
file and then restart the service if the modification is successful.
#!/bin/bash
# Simple counting loops
for third in {0..6}
do
    for fourth in {1..254}
    do
        ssh [email protected].$third.$fourth "sed -i'.bak' -E 's/Listen [0-9]+/Listen 8080/g' httpd.conf && /etc/init.d/apache2 restart"
    done
done
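The && operator runs its right-hand command only when the left-hand command exits successfully (exit status 0), which is what makes this safe: the service is restarted only if sed succeeded. A minimal demonstration:

```shell
# The right side runs only after a successful left side
true && echo "ran after success"

# Nothing is printed here, because false fails (the list's status is 1)
false && echo "ran after failure"

# || is the mirror image: the right side runs only after a failure
false || echo "fallback after failure"
```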
Now we will look at how to do something a little more complex. Imagine that we
want to run multiple commands and process the results. SSH only allows for the
execution of a single command. This time, we want to execute multiple commands
on the remote machine. To complicate things even further, this time imagine that we
have several different custom installations of Linux with the KILLALL and PS
commands in different directories. We will add some code to find "ps", kill our
process, and verify that our process was killed. The commands we want to execute on
the remote system look like this:
#!/bin/bash
# find our ps and killall commands and store their locations
PATH_TO_PS=`which ps`; PATH_TO_KILLALL=`which killall`
if [ "$PATH_TO_PS" == '' ] || [ "$PATH_TO_KILLALL" == '' ]; then
    # if we didn't find one of them, record an error
    echo "One of our commands is missing on $HOSTNAME"
else
    # both commands were found, so use killall to kill the nc process
    $PATH_TO_KILLALL nc
    # count the number of nc processes that are still running
    PROCCOUNT=`$PATH_TO_PS a | grep nc | wc -l`
    # if we have more than 1 (the grep itself will be one), echo a warning
    if [ $PROCCOUNT -gt 1 ]
    then
        echo "Still running on $HOSTNAME"
    fi
fi
Our last topic in discussing Bash is interacting with a remote computer. Sometimes it
is necessary to interact with the remote computer, rather than simply sending a
single command. For example, if you are trying to log in to a remote computer, you
need your script to be able to "spawn" ssh, then have it wait for a login prompt
before it transmits the username. After it enters the username, it needs to wait until
the password prompt appears before it transmits the password. This problem can be
solved using "expect".
For additional reading on this subject read this page: https://redsiege.com/ca/expect
Enable the OpenSSH server on your system with the following commands then
answer the questions below.
You will then connect to your own system via SSH to simulate running the commands
on a remote server.
$ sudo systemctl start sshd
$ passwd
<enter a new password for yourself>
1) Write a command that will login to your system via SSH and display the contents of
the "/etc" directory.
2) Create a script named "remoteclean.sh" in /home/cyberaces/ containing the
command:
echo Clean Complete
Then write a command that will execute the local script
"/home/cyberaces/remoteclean.sh" using SSH.
1) Write a command that will login to your system via SSH and display the contents
of the "/etc" directory.
ssh [email protected] 'ls /etc'
The command is very similar to logging in to an SSH server; the only difference is the
command, wrapped in quotes, at the end.
2) Write a command that will execute the local script
"/home/cyberaces/remoteclean.sh" on a remote machine using SSH.
ssh [email protected] "bash -s" < /home/cyberaces/remoteclean.sh
Similar to the command above, except the command we want to execute is "bash -s".
The script will be fed into the shell and executed on the remote system.
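You can see the same mechanism locally, without SSH, by feeding the script to bash on standard input:

```shell
# Create the script in a scratch directory (a stand-in for remoteclean.sh)
cd "$(mktemp -d)"
echo 'echo Clean Complete' > remoteclean.sh

# bash -s reads commands from stdin, exactly as the remote shell would
# when invoked through ssh; prints "Clean Complete"
bash -s < remoteclean.sh
```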
This concludes BASH Scripting. We've learned about BASH and how to automate tasks
in Linux by writing scripts. Automation makes tasks faster, more accurate, and less
mundane.
The next module is on PowerShell.
This is the conclusion of the Bash Scripting Module.