VAPT – Unit 2
SEMESTER - 5
In this chapter we begin the information-gathering phase of penetration testing. The goal of
this phase is to learn as much about our clients as we can.
We’ll also start to interact with our target systems, learning as much as we can about them
without actively attacking them. We’ll use the knowledge gained in this phase to move on to
the threat-modeling phase where we think like attackers and develop plans of attack based
on the information we’ve gathered. Based on the information we uncover, we’ll actively
search for and verify vulnerabilities using vulnerability-scanning techniques, which are
covered in the next chapter.
Open Source Intelligence Gathering:
We can learn a good deal about our client’s organization and infrastructure before we send
a single packet their way, but information gathering can still be a bit of a moving target. It
isn’t feasible to study the online life of every employee, and given a large amount of
gathered information, it can be difficult to discern important data from noise. If the CEO
tweets frequently about a favorite sports team, that team’s name may be the basis for her
webmail password, but it could just as easily be entirely irrelevant. Other times it will be
easier to pick up on something crucial. For instance, if your client has online job postings for
a system administrator who is an expert in certain software, chances are those platforms
are deployed in the client’s infrastructure.
As opposed to intelligence gained from covert sources such as dumpster diving, dumping
website databases, and social engineering, open source intelligence (or OSINT) is gathered
from legal sources like public records and social media. The success of a pentest often
depends on the results of the information-gathering phase, so in this section, we will look at
a few tools to obtain interesting information from these public sources.
Netcraft
Sometimes the information that web servers and web-hosting companies gather and make
publicly available can tell you a lot about a website. For instance, a company called Netcraft
logs the uptime and makes queries about the underlying software. (This information is
made publicly available at http://www.netcraft.com/.) Netcraft also provides other services,
and their antiphishing offerings are of particular interest to information security.
For example, querying http://www.netcraft.com/ for http://www.bulbsecurity.com returns the following details.
As you can see, bulbsecurity.com was first seen in March 2012. It was registered through
GoDaddy, has an IP address of 50.63.212.1, and is running Linux with an Apache web
server.
Whois Lookups:
All domain registrars keep records of the domains they host, including registrant and technical contact information, which can be queried with the whois service. The following excerpt shows part of the Whois output for bulbsecurity.com.
Registrant: ❶
Domains By Proxy, LLC DomainsByProxy.com
14747 N Northsight Blvd Suite 111, PMB 309
Scottsdale, Arizona 85260 United States
Technical Contact: ❷
Private, Registration [email protected] Domains By Proxy, LLC
DomainsByProxy.com
14747 N Northsight Blvd Suite 111, PMB 309
Scottsdale, Arizona 85260 United States
(480) 624-2599 Fax -- (480) 624-2598
This site has private registration, so both the registrant ❶ and technical contact ❷ are Domains By Proxy. Domains By Proxy offers private registration, hiding your personal details in the Whois information for the domains you own. However, we do see the domain servers ❸ for bulbsecurity.com.
Running Whois queries against other domains will show more interesting results. For example, if
you do a Whois lookup on georgiaweidman.com, you might get an interesting blast from the past,
including an old college phone number.
We can also use Domain Name System (DNS) servers to learn more about a domain. DNS servers
translate the human-readable URL www.bulbsecurity.com into an IP address.
Nslookup:
For example, we could use a command line tool such as Nslookup, as shown below:
root@kali:~# nslookup www.bulbsecurity.com
Server: 75.75.75.75
Address: 75.75.75.75#53
Non-authoritative answer:
www.bulbsecurity.com canonical name = bulbsecurity.com.
Name: bulbsecurity.com
Address: 50.63.212.1 ❶
We can also tell Nslookup to find the mail servers for the same website by looking for MX records
(DNS speak for email), as shown below:
root@kali:~# nslookup
> set type=mx
> bulbsecurity.com
Server: 75.75.75.75
Address: 75.75.75.75#53
Non-authoritative answer:
bulbsecurity.com mail exchanger = 40 ASPMX2.GOOGLEMAIL.com.
bulbsecurity.com mail exchanger = 20 ALT1.ASPMX.L.GOOGLE.com.
bulbsecurity.com mail exchanger = 50 ASPMX3.GOOGLEMAIL.com.
bulbsecurity.com mail exchanger = 30 ALT2.ASPMX.L.GOOGLE.com.
bulbsecurity.com mail exchanger = 10 ASPMX.L.GOOGLE.com
Nslookup says bulbsecurity.com is using Google Mail for its email servers.
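The notes jump here to a different domain, zoneedit.com, and to an NS-record lookup whose output is not reproduced. As a sketch, that query would be run interactively much like the MX lookup above, with the record type set to ns:
root@kali:~# nslookup
> set type=ns
> zoneedit.com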
This output shows us all the DNS servers for zoneedit.com. Naturally, because this domain was set
up to demonstrate zone transfers, that’s what we are going to do next.
Zone Transfers
DNS zone transfers allow name servers to replicate all the entries about a domain. When setting up
DNS servers, you typically have a primary name server and a backup server. What better way to
populate all the entries in the secondary DNS server than to query the primary server for all of its
entries?
Unfortunately, many system administrators set up DNS zone transfers insecurely, so that anyone
can transfer the DNS records for a domain. zoneedit.com is an example of such a domain, and we
can use the host command to download all of its DNS records. Use the -l option to specify the
domain to transfer, and choose one of the name servers from the previous command, as shown in
the listing below:
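The transfer listing itself is omitted from these notes. As a sketch, the command takes this form (ns2.zoneedit.com stands in for whichever name server the NS lookup returned):
root@kali:~# host -l zoneedit.com ns2.zoneedit.com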
There are pages and pages of DNS entries for zoneedit.com, which gives us a good idea of where
to start in looking for vulnerabilities for our pen test. For example, mail2.zoneedit.com is probably a
mail server, so we should look for potentially vulnerable software running on typical email ports such
as 25 (Simple Mail Transfer Protocol) and 110 (POP3). If we can find a web mail server, any
usernames we find may lead us in the right direction so that we can guess passwords and gain
access to sensitive company emails.
External penetration tests often find fewer services exposed than internal ones do. A good security
practice is to expose only those services that must be accessed remotely, like web servers, mail
servers, VPN servers, and maybe SSH or FTP, and only those services that are mission critical.
Services like these are common attack surfaces, and unless employees use two-factor
authentication, accessing company webmail can be simple if an attacker can guess valid
credentials.
One excellent way to find usernames is by looking for email addresses on the Internet. You might be
surprised to find corporate email addresses publicly listed on parent-teacher association contact
info, sports team rosters, and, of course, social media.
theHarvester
You can use a Python tool called “the Harvester” to quickly scour thousands of search engine
results for possible email addresses. theHarvester can automate searching Google, Bing, PGP,
LinkedIn, and others for email addresses. For example, below we look at the first 500 results from all search engines for bulbsecurity.com.
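The theHarvester run itself is not reproduced in these notes. As a sketch (binary name and flags match the older Kali version of the tool; newer releases differ slightly):
root@kali:~# theharvester -d bulbsecurity.com -l 500 -b all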
There’s not too much to be found for bulbsecurity.com, but theHarvester does find an email address,
[email protected], and the website, www.bulbsecurity.com, as well as other websites that
share virtual hosting with it. You may find more results if you run theHarvester against your
organization.
Maltego
Paterva’s Maltego is a data-mining tool designed to visualize open source intelligence gathering.
Maltego has both a commercial and a free community edition. The free Kali Linux version, which we
use, limits the results it returns, but we can still use it to gather a good deal of interesting information
very quickly.
(The paid version offers more results and functionality. To use Maltego on your pentests, you will
need a paid license.)
To run Maltego, enter maltego at the command line. The Maltego GUI should launch. You will be
prompted to create a free account at the Paterva website and log in. Once logged in, choose Open
a blank graph and let me play around, and then click Finish.
From the entity palette, drag a Domain entity onto the new graph, set it to bulbsecurity.com, and run transforms against it to start gathering data. Maltego correctly finds www.bulbsecurity.com. Attacking the Google Mail servers will likely be out of the scope of any pentest, but more information on the www.bulbsecurity.com website would certainly be useful.
We can run transforms on any entity on the graph, so select the website www.bulbsecurity.com to
gather data on it. For instance, we can run the transform ToServerTechnologiesWebsite to see
what software www.bulbsecurity.com is running.
Maltego finds that www.bulbsecurity.com is an Apache web server with PHP, Flash, and so on,
along with a WordPress install. WordPress, a commonly used blogging platform, has a long history
of security issues (like a lot of software). We’ll look at exploiting website vulnerabilities later.
You can find additional information and tutorials about Maltego at http://www.paterva.com/. Spend
some time using Maltego transforms to find interesting information about your organization in
Lab IX. In skilled hands, Maltego can turn hours of reconnaissance work into minutes with the same
quality results.
Port Scanning:
When you start a pentest, the potential scope is practically limitless. The client could be running any
number of programs with security issues: They could have misconfiguration issues in their
infrastructure that could lead to compromise; weak or default passwords could give up the keys to
the kingdom on otherwise secure systems; and so on. Pentests often narrow your scope to a
particular IP range and nothing more, and you won’t help your client by developing a working exploit
for the latest and greatest server-side vulnerability if they don’t use the vulnerable software. We
need to find out which systems are active and which software we can talk to.
Everything we have done so far is completely legal. But once we start actively querying systems, we
are moving into murky legal territory. Attempting to break into computers without permission is, of
course, illegal in many countries. Though stealthy scan traffic may go unnoticed, you should practice
the skills we study in the rest of this chapter (and the rest of this book) only on your target virtual
machines or other systems you own or have written permission to test (known in the trade as a get-
out-of-jail-free card).
Nmap is an industry standard for port scanning. Entire books have been written just about using
Nmap, and the manual page may seem a bit daunting. We will cover the basics of port scanning
here and come back to the tool in later chapters.
Firewalls with intrusion-detection and prevention systems have made great strides in detecting and
blocking scan traffic, so you might run an Nmap scan and receive no results at all. Though you
could be hired to perform an external pentest against a network range with no live hosts, it’s more
likely that you’re being blocked by a firewall. On the other hand, your Nmap results might instead
say that every host is alive, and will be listening on every port if your scan is detected.
We'll begin with a SYN scan (the -sS option), also called a stealth or half-open scan, in which Nmap sends a SYN packet and waits for the SYN-ACK reply but never completes the TCP handshake. Next, we specify the IP address(es) or range to scan, as shown in the command below. Finally, we use the -oA option to output our Nmap results in all formats: .nmap, .gnmap (greppable Nmap), and XML. Nmap format, like the output Nmap prints to the screen, is nicely formatted and easy to read. Greppable Nmap (as the name implies) is formatted to be used with the grep utility to search for specific information. XML format is a standard used to import Nmap results into other tools. The results of the SYN scan are discussed below.
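The scan listing itself is omitted from these notes. As a sketch (the IP range covers the three lab targets and the output filename is just an example):
root@kali:~# nmap -sS 192.168.20.10-12 -oA booksynscan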
Note: It is always a good idea to take good notes of everything we do with our pentesting tools.
As you can see, Nmap returns a handful of ports on the Windows XP and Linux boxes. We will see
as we move through the next few chapters that nearly all of these ports contain vulnerabilities.
Hopefully, that won’t be the case on your pentests, but in an attempt to introduce you to many types
of vulnerabilities you will encounter in the field, our pentesting lab has been condensed into these
three machines.
That said, just because a port is open does not mean that vulnerabilities are present. Rather it
leaves us with the possibility that vulnerable software might be running on these ports. Our Windows
7 machine is listening only on port 80 ❶, the traditional port for HTTP web servers, and port 139, used by the
NetBIOS session service (SMB over NetBIOS). There may be exploitable software listening on ports that are not
allowed through the Windows firewall, and there may be vulnerable software running locally on
the machine, but at the moment we can’t attempt to exploit anything directly over the network
except the web server.
This basic Nmap scan has already helped us focus our pentesting efforts.
Both the Windows XP and Linux targets are running FTP servers ❷, web servers ❸, and SMB
servers ❹. The Windows XP machine is also running a mail server that has opened several ports
❺ and a MySQL server ❻.
Our SYN scan was stealthy, but it didn’t tell us much about the software that is actually running on
the listening ports. Compared to the detailed version information we got by connecting to port 25
with Netcat, the SYN scan’s results are a bit lackluster. We can use a full TCP scan (nmap -sT) or
go a step further and use Nmap’s version scan (nmap -sV) to get more data.
With the Version Scan, Nmap completes the connection and then attempts to determine what
software is running and, if possible, the version, using techniques such as banner grabbing.
The version scan command is shown below:
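As a sketch (the IP range and output filename are illustrative, not taken from the original lab listing):
root@kali:~# nmap -sV 192.168.20.10-12 -oA bookversionscan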
This time we gained much more information about our Windows XP and Linux targets. For example,
we knew there was an FTP server on the Linux box, but now we have reasonable assurance that
the FTP server is Very Secure FTP version 2.3.4 ❶. We’ll use this output to search for potential
vulnerabilities. As for our Windows 7 system, we found out only that it’s running Microsoft IIS 7.5, a
fairly up-to-date version. It’s possible to install IIS 8 on Windows 7, but it’s not officially supported.
The version itself would not raise any red flags to me. We will find that the application installed on
this IIS server is the real issue.
Both Nmap’s SYN and version scans are TCP scans that do not query UDP ports. Because UDP is
connectionless, the scanning logic is a bit different.
In a UDP scan (-sU), Nmap sends a UDP packet to a port. Depending on the port, the packet sent is
protocol specific. If it receives a response, the port is considered open. If the port is closed, Nmap
will receive an ICMP Port Unreachable message. If Nmap receives no response whatsoever, then
either the port is open and the program listening does not respond to Nmap’s query, or the traffic is
being filtered. Thus, Nmap is not always able to distinguish between an open UDP port and one that
is filtered by a firewall.
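As a sketch, a UDP scan of the same lab range could be started as follows (the output filename is again just an example):
root@kali:~# nmap -sU 192.168.20.10-12 -oA bookudpscan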
Scanning Specific Ports:
By default, Nmap scans only about 1,000 commonly used ports, so a service listening on an unusual port, such as port 3232 on our Windows XP target, will be missed unless we tell Nmap to scan that port explicitly with the -p option. The result of a SYN scan against port 3232 is shown below:
Starting Nmap 6.40 ( http://nmap.org ) at 2015-12-18 09:03 EST
Nmap scan report for 192.168.20.10
Host is up (0.00031s latency).
PORT     STATE SERVICE
3232/tcp open  unknown
MAC Address: 00:0C:29:A5:C1:24 (VMware)
Sure enough, when we tell Nmap to scan 3232, it returns open, which shows that this port is worth
checking out in addition to the ports Nmap scans by default. However, if we try to probe the port a
bit more aggressively with a version scan, the service listening on the port crashes, as shown below:
root@kali:~# nmap -p 3232 -sV 192.168.20.10
Starting Nmap 6.40 ( http://nmap.org ) at 2015-04-28 10:19 EDT
Nmap scan report for 192.168.20.10
Host is up (0.00031s latency).
PORT     STATE SERVICE VERSION
3232/tcp open  unknown
1 service unrecognized despite returning data❶. If you know the service/version, please submit the following fingerprint at http://www.insecure.org/cgi-bin/servicefp-submit.cgi : ❷
SF-Port3232-TCP:V=6.25%I=7%D=4/28%Time=517D2FFC%P=i686-pc-linux-gnu%r(GetR
SF:equest,B8,"HTTP/1\.1\x20200\x20OK\r\nServer:\x20Zervit\x200\.4\r\n❸X-Pow
SF:ered-By:\x20Carbono\r\nConnection:\x20close\r\nAccept-Ranges:\x20bytes\
SF:r\nContent-Type:\x20text/html\r\nContent-Length:\x2036\r\n\r\n<html>\r\
SF:n<body>\r\nhi\r\n</body>\r\n</html>");
MAC Address: 00:0C:29:13:FA:E3 (VMware)
In the process of crashing the listening service, Nmap can't figure out what software is running, as
noted at ❶, but it does manage to get a fingerprint of the service. Based on the HTML tags in the
fingerprint at ❷, this service appears to be a web server. According to the Server: field, it is
something called Zervit 0.4 ❸.
At this point, we have crashed the service, and we may never see it again on our pentest, so any
potential vulnerabilities may be a moot point. Of course, in our lab we can just switch over to our
Windows XP target and restart the Zervit server.
Red Team Reconnaissance:
For Red Team campaigns, it is often about the opportunity of attack. Not only do you need to have your attack infrastructure
ready at a whim, but you also need to be constantly looking for vulnerabilities. This could be done through various tools
that scan the environments, looking for services, cloud misconfigurations, and more. These activities allow you to gather
more information about the victim’s infrastructure and find immediate avenues of attack.
For client networks that are generally not too large, we can set up a simple cron job to perform external port diffing. For
example, we could create a quick Linux bash script to do the hard work (remember to replace the IP range):
#!/bin/bash
# Daily external port-diffing script (remember to replace the IP range)
mkdir -p /opt/nmap_diff
d=$(date +%Y-%m-%d)
y=$(date -d yesterday +%Y-%m-%d)
# Scan the range and save today's results as XML
/usr/bin/nmap -T4 -oX /opt/nmap_diff/scan_$d.xml 10.100.100.0/24 > /dev/null 2>&1
# If yesterday's scan exists, diff the two and record any changes
if [ -e /opt/nmap_diff/scan_$y.xml ]; then
    /usr/bin/ndiff /opt/nmap_diff/scan_$y.xml /opt/nmap_diff/scan_$d.xml > /opt/nmap_diff/diff.txt
fi
This is a very basic script that scans the default ports with nmap and then uses ndiff to compare
today's results against yesterday's. We can then take the output of this script and use it to notify our team of new
ports discovered daily.
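As a sketch, a crontab entry to run the script nightly could look like this (the script path is an assumption; point it at wherever you saved the script above):
0 2 * * * /opt/nmap_diff/nmap_diff.sh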
Another tool worth setting up is HTTPScreenshot. The reason HTTPScreenshot is so powerful is that it uses Masscan to scan large networks quickly and uses PhantomJS
to take screen captures of any websites it detects. This is a great way to get a quick layout of a large internal or external
network.
Please remember that all tool references in this book are run from the modified Kali Virtual Machine.
cd /opt/httpscreenshot/
Edit the networks.txt file to pick the network you want to scan: gedit networks.txt
./masshttp.sh
firefox clusters.html
The other tool to check out is EyeWitness (https://github.com/ChrisTruncer/EyeWitness). EyeWitness is another great
tool that takes an XML file from nmap output and screenshots webpages, RDP servers, and VNC servers.
The commands used for EyeWitness are:
cd /opt/EyeWitness
nmap [IP Range]/24 --open -p 80,443 -oX scan.xml
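The notes stop at the nmap step; a sketch of the follow-on EyeWitness invocation (flags based on the tool's typical usage) is:
./EyeWitness.py -x scan.xml --web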
As more and more companies switch over to using different cloud infrastructures, a lot of new and old attacks come to
light. This is usually due to misconfigurations and a lack of knowledge on what exactly is publicly facing on their cloud
infrastructure. Regardless of whether it is Amazon EC2, Azure, Google Cloud, or some other provider, this has become a global trend.
For Red Teamers, one problem is how to search these different cloud environments. Since many tenants use dynamic
IPs, their servers might not only change rapidly, but they also aren’t listed in a certain block on the cloud provider. For
example, if you use AWS, they own huge ranges all over the world. Based on which region you pick, your server will
randomly be dropped into a /13 CIDR range. For an outsider, finding and monitoring these servers isn't easy.
To find cloud servers, there are many great resources freely available on the internet to perform reconnaissance on our
targets. We can use everything from Google all the way to third party scanning services. Using these resources will allow
us to dig into a company and find information about servers, open services, banners, and other details passively. The
company will never know that you queried for this type of information. Let’s see how we use some of these resources as
Red Teamers.
Shodan
Shodan (https://www.shodan.io) is a great service that regularly scans the internet, grabbing banners, ports, information
about networks, and more. They even have vulnerability information like Heartbleed. One of the most fun uses for
Shodan is looking through open web cams and playing around with them. From a Red Team perspective, we want to find
information about our victims.
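As a sketch, a few Shodan search filters useful for this kind of recon (the hostname and organization strings below refer to the chapter's fictional target and are assumptions about how it would be registered):
hostname:cyberspacekittens.com
org:"Cyberspace Kittens"
port:3389 org:"Cyberspace Kittens"
The same filters can also be used from the shodan command line client after initializing it with shodan init <API key>.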
Censys
Censys continually monitors every reachable server and device on the Internet, so you can search for and analyze them
in real time. You will be able to understand your network attack surface, discover new threats, and assess their global
impact [https://censys.io/]. One of the best features of Censys is that it scrapes information from SSL certificates.
Typically, one of the major difficulties for Red Teamers is finding where our victim's servers are located on cloud servers.
Luckily, we can use Censys.io to find this information as they already parse this data.
The one issue we have with these scans is that they can sometimes be days or weeks behind. In this case, it took one
day for the title information to be scanned. Additionally, after creating an SSL certificate on my site, it took four days for the
information to show up on the Censys.io site. In terms of data accuracy, Censys.io was decently reliable.
Below, we ran scans to find info about our target cyberspacekittens.com. By parsing the server's SSL certificate, we
were able to identify that our victim's server was hosted on AWS.
We commonly find that companies do not realize what they have available on the internet. Especially with the increase of
cloud usage, many companies do not have ACLs properly implemented. They believe that their servers are protected,
but we discover that they are publicly facing. These include Redis databases, Jenkins servers, Tomcat management,
NoSQL databases, and more – many of which led to remote code execution or loss of PII.
The cheap and dirty way to find these cloud servers is by manually scanning SSL certificates on the internet in an
automated fashion. We can take the list of IP ranges for our cloud providers and scan all of them regularly to pull down
SSL certificates. Looking at the SSL certs, we can learn a great deal about an organization. From the scan below of the
cyberspacekittens range, we can see hostnames in certificates with .int. for internal servers, .dev. for development, vpn.
for VPN servers, and more. Many times you can gain internal hostnames that might not have public IPs or whitelisted IPs
for their internal networks.
To assist in scanning for hostnames in certificates, sslScrape was developed for THP3. This tool utilizes Masscan to
quickly scan large networks. Once it identifies services on port 443, it then strips the hostnames in the certificates.
sslScrape (https://github.com/cheetz/sslScrape):
cd /opt/sslScrape
python ./sslScrape.py [IP Address CIDR Range]
In terms of identifying IP ranges, we can normally look up the company from public sources like the
American Registry for Internet Numbers (ARIN) at https://www.arin.net/. We can map IP address
space to owners, search networks owned by companies, look up Autonomous System Numbers by
organization, and more. If we are looking outside North America, we can query AFRINIC
(Africa), APNIC (Asia), LACNIC (Latin America), and RIPE NCC (Europe). These are all publicly
available and listed on their servers.
You can look up any hostname or FQDN to find the owner of that domain through many available
public sources. What you can't find listed anywhere are subdomains. Subdomain information is
stored on the target's DNS server versus registered on some central public registration system. You
have to know what to search for to find a valid subdomain.
Some servers do not respond by IP. They could be on shared infrastructure and only respond by
fully qualified domains. This is very common to find on cloud infrastructure. So you can nmap all
day, but if you can’t find the subdomain, you won't really know what applications are behind that IP.
Subdomains can provide information about where the target is hosting their servers. This is done by
finding all of a company's subdomains, performing reverse lookups, and finding where the IPs are
hosted. A company could be using multiple cloud providers and datacenters.
We must get a good idea of all the servers and domains a company might use. Although there isn’t a central place where
subdomains are stored, we can bruteforce different subdomains with a tool, such as Knock, to identify what servers or
hosts might be available for attack.
Knockpy is a python tool designed to enumerate subdomains on a target domain through a wordlist.
Knock is a great subdomain scan tool that takes a list of subdomains and checks it to see if it resolves. So if you have
cyberspacekittens.com, Knock will take this wordlist (http://bit.ly/2JOkUyj), and see if there are any subdomains for
[subdomain].cyberspacekittens.com. Now, the one caveat here is that it is only as good as your word list. Therefore,
having a better wordlist increases your chances of finding subdomains.
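As a sketch, a Knockpy run against the target (the install path and wordlist name are illustrative, and flags vary between Knockpy versions):
cd /opt/knock/knockpy
python knockpy.py cyberspacekittens.com -w wordlist.txt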
This is where we can use a tool like Sublist3r. Note that a tool like this relies on "Google dork"-style search
queries that can make your traffic look like a bot's. This could get you temporarily blacklisted and require you to fill out a captcha with every
request, which may limit the results from your scan. To run Sublister:
cd /opt/Sublist3r
python sublist3r.py -d cyberspacekittens.com -o cyberspacekittens.com
*There is a forked version of Sublist3r that also performs subdomain checking: https://github.com/Plazmaz/Sublist3r.
The last subdomain tool is called SubBrute. SubBrute is a community-driven project with the goal of creating the fastest,
and most accurate subdomain enumeration tool. Some of the magic behind SubBrute is that it uses open resolvers as a
kind of proxy to circumvent DNS rate-limiting (https://www.us-cert.gov/ncas/alerts/TA13-088A). This design also
provides a layer of anonymity, as SubBrute does not send traffic directly to the target's name servers.
[https://github.com/TheRook/subbrute]
Not only is SubBrute extremely fast, it also has a DNS spider feature that crawls enumerated DNS records.
To run SubBrute:
cd /opt/subbrute
./subbrute.py cyberspacekittens.com
Github
Github is a treasure trove of amazing data. There have been a number of penetration tests and Red Team assessments
where we were able to get passwords, API keys, old source code, internal hostnames/IPs, and more. These either led to
a direct compromise or assisted in another attack. What we see is that many developers either push code to the wrong
repo (sending it to their public repository instead of their company’s private repository), or accidentally push sensitive
material (like passwords) and then try to remove it. One good thing with Github is that it tracks every time code is
modified or deleted. That means if sensitive code at one time was pushed to a repository and that sensitive file is
deleted, it is still tracked in the code changes. As long as the repository is public, you will be able to view all of these
changes.
We can either use Github search to identify certain hostnames/organizational names or even just use simple Google
Dork search, for example:
site:github.com "cyberspacekittens"
The Truffle Hog tool scans different commit histories and branches for high-entropy keys and prints them. This is great for
finding secrets, passwords, keys, and more. Let's see if we can find any secrets on cyberspacekittens' Github repository.
Lab:
cd /opt/trufflehog/truffleHog
python truffleHog.py https://github.com/cyberspacekittens/dnscat2
As we can see in the commit history, AWS keys and SSH keys
were removed from server/controller/csk.config, but if you look at
the current repo, you won't find this file:
https://github.com/cheetz/dnscat2/tree/master/server/controller.
Another tool, git-all-secrets, combines several secret-scanning tools and runs from a Docker container:
cd /opt/git-all-secrets
docker run -it abhartiya/tools_gitallsecrets:v3 -repoURL=https://github.com/cyberspacekittens/dnscat2 -token=[API Key] -output=results.txt
This will clone the repo and start scanning. You can even run through whole organizations in Github with the
-org flag. After the container finishes running, retrieve the container ID by typing:
docker ps -a
Once you have the container ID, get the results file from the container to the host by typing:
docker cp <container-id>:/data/results.txt .
Cloud Recon:
Cloud is one area where we see a lot of companies improperly securing their environment. The most common issues we
generally see, covered in the sections below, are publicly exposed storage buckets, overly permissive bucket and object permissions, and forgotten DNS entries that allow subdomain takeovers.
Before we can start testing misconfigurations on different AWS buckets, we need to first identify them. We are going to
try a couple different tools to see what we can discover on our victim’s AWS infrastructure.
There are many tools that can perform S3 bucket enumeration for AWS. These tools generally take keywords or lists,
apply multiple permutations, and then try to identify different buckets.
For example, we can use a tool called Slurp (https://github.com/bbb31/slurp) to find information about our target
CyberSpaceKittens:
cd /opt/slurp
./slurp domain -t cyberspacekittens.com
./slurp keyword -t cyberspacekittens
Another tool, Bucket Finder, will not only attempt to find different buckets, but also download all the content from those
buckets for analysis:
wget https://digi.ninja/files/bucket_finder_1.1.tar.bz2 -O bucket_finder_1.1.tar.bz2
cd /opt/bucket_finder
./bucket_finder.rb --region us my_words --download
When analyzing AWS security, we need to review the controls around permissions on objects and buckets. Objects are
the individual files and buckets are logical units of storage. Both of these permissions can potentially be modified by any
user if provisioned incorrectly.
First, we can look at each object to see if these permissions are configured correctly:
aws s3api get-object-acl --bucket cyberspacekittens --key ignore.txt
We will see that the file is only writeable by a user named “secure”.
It is not open to everyone. If we did have write access, we could use the put-object in s3api to modify that file.
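As a sketch, if the ACL had instead granted us write access, we could overwrite the object with s3api's put-object (the local file name here is simply whatever content we want to push):
aws s3api put-object --bucket cyberspacekittens --key ignore.txt --body ignore.txt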
Next, we look to see if we can modify the buckets themselves. This can be accomplished with:
aws s3api get-bucket-acl --bucket cyberspacekittens
A related issue is the subdomain takeover, which happens when a DNS record is left pointing at a cloud resource that has since been released. For example, you register an S3 Amazon bucket with the name testlab.s3.amazonaws.com. You then have your
company's subdomain testlab.company.com point to testlab.s3.amazonaws.com. A year later, you no longer need the
S3 bucket testlab.s3.amazonaws.com and deregister it, but forget the CNAME redirect for testlab.company.com.
Someone can now go to AWS, set up testlab.s3.amazonaws.com themselves, and have a valid S3 bucket serving content on the victim's domain.
One tool to check for vulnerable subdomains is called tko-subs. We can use this tool to check whether any of the
subdomains we have found pointing to a CMS provider (Heroku, Github, Shopify, Amazon S3, Amazon CloudFront, etc.)
can be taken over.
Emails:
A huge part of any social engineering attack is to find email addresses and names of employees. We used Discover
Script in the previous chapters, which is great for collecting much of this data. I usually start with Discover scripts and
begin digging into the other tools. Every tool does things slightly differently and it is beneficial to use as many automated
processes as you can.
Once you get a small list of emails, it is good to understand their email format. Is it
[email protected], or is it [email protected]? Once you can figure out their format, we
can use tools like LinkedIn to find more employees and try to identify their email addresses.
SimplyEmail
We all know that spear phishing is still one of the more successful avenues of attack. If we don’t have any vulnerabilities
from the outside, attacking users is the next step. To build a good list of email addresses, we can use a tool like
SimplyEmail. The output of this tool will provide the email address format of the company and a list of valid users.
Lab:
Find all email accounts for cyberspacekittens.com:
cd /opt/SimplyEmail
./SimplyEmail.py -all -v -e cyberspacekittens.com
firefox cyberspacekittens.com<date_time>/Email_List.html
This may take a long time to run as it checks Bing, Yahoo, Google, Ask Search, PGP Repos, files, and much more. This
may also make your network look like a bot to search engines and may require captchas if you produce too many search
requests.
One of the best ways to get email accounts is to continually monitor and capture past breaches. I don't want to link
directly to the breach files, but I will reference some of the ones that I have found useful: