Recon is Everywhere

Today's topic is "Recon". If you are interested in pentesting, bug bounty, or red team work, I recommend reading this article. I cover some points that interested me while creating this content, and I will try to look at the subject from outside the box. If you have experience with this topic, please excuse my mistakes. :)

Rule 1: Recon is a Philosophy of Life

In penetration tests for large companies, you are usually given a large scope but only a short time for testing, and the results are still expected to be very impressive. We all face this problem. Attention deficit during the work causes a kind of partial blindness, and unless we make a deliberate effort, it is not easy to get rid of it.

A well-known real-life experiment illustrates this. While a group of students is busy with an ordinary task, a man in a gorilla costume walks through the scene, and half of the students never notice him. This experiment gives us a clue: instead of merely looking, we should try to actually see.

Figure - 1. The Invisible Gorilla

So what can we do? Let's keep thinking about the recon concept. The relationship between the digital and the physical approach can give us insight, and doors may open for us :)

Figure - 2. Physical-Recon

Social media activity is especially significant during the recon phase. Both the company's staff and the company's own accounts must be examined in detail. With a simple script, you can collect all of a company's Twitter mentions and then search them for specific findings, for example broken links.
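As a minimal sketch of that idea, the snippet below pulls URLs out of a blob of collected mentions and flags the ones that no longer answer. The tweet text, hostnames, and the `recon-check` user agent are all illustrative; you would feed in real collected mentions from whatever source you scrape.

```python
import re
import urllib.error
import urllib.request

URL_RE = re.compile(r"https?://[^\s\"'<>)\]]+")

def extract_links(text):
    """Pull every http(s) URL out of a blob of text (e.g. collected tweets)."""
    return URL_RE.findall(text)

def is_dead(url, timeout=5):
    """Return True if the link no longer resolves or answers with >= 400."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "recon-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except (urllib.error.URLError, ValueError):
        return True  # DNS failure, refused connection, malformed URL, ...

# Illustrative mention text; a dead link here may point at an expired domain.
mentions = "Check our promo at http://old-campaign.example.com and https://example.com"
for link in extract_links(mentions):
    print(link)
```

A dead link in an old tweet can mean an expired domain, which is exactly the kind of "smallest thing" worth chasing.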

The link below tells the story of "Belan". I would especially draw your attention to the heading "Use of LinkedIn to Target Peripheral Systems". TL;DR

Rule 2: Start -> Learn -> Re-Start

Also, performing recon only once will not be enough. Recon needs to be carried out continuously, at every phase of the work.

You should be suspicious of even the "smallest" thing when you search for something. For example, if you constantly monitor a company's subdomains, your chances of spotting a "subdomain takeover" increase.
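One cheap way to keep an eye on a subdomain list is to re-resolve it on a schedule and flag entries that stop answering. This is only a crude first filter, a real takeover hunt would also inspect the CNAME target, and the watchlist hostnames below are made up:

```python
import socket

def resolves(host):
    """True if the hostname currently resolves to an address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Hypothetical watchlist; in practice feed this from your subdomain enumeration.
watchlist = ["shop.example.com", "old-cdn.example.com"]
for sub in watchlist:
    if not resolves(sub):
        print(f"[!] {sub} no longer resolves - check for a dangling CNAME / takeover")
```

Run it from cron and diff the output over time; what changed is usually more interesting than what is there.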


First, we start with Google dorks like "site: -www" etc., but the more specific your Google searches are, the higher the chance of finding specific vulnerabilities. You can also try searching for "." instead of "*".

Alternatively, there is a project that brings together the popular search engines. With a single query you can search Google, Yahoo, Bing, and others at once. This can really help.

Cross-site requests made to a different web application can also give you an idea about subdomains. For example, a separate domain may be registered by company X; companies' micro-campaign sites usually live on different domains like this. It is possible to learn about them from within the websites themselves, but cross-domain requests can give us an idea as well.

Such information is included in crossdomain.xml files. Apart from that, I recommend reading the source of the JS files; sometimes you can find subdomains like dev, test, etc. there as well.
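A crossdomain.xml policy file is plain XML, so extracting the allowed domains takes only a few lines. A minimal sketch, with a made-up sample policy:

```python
import xml.etree.ElementTree as ET

def allowed_domains(crossdomain_xml):
    """Extract the domain attribute of every allow-access-from entry."""
    root = ET.fromstring(crossdomain_xml)
    return [e.get("domain") for e in root.iter("allow-access-from")]

# Illustrative policy file, as fetched from https://target/crossdomain.xml
sample = """<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="*.example.com"/>
  <allow-access-from domain="campaign-site.example.net"/>
</cross-domain-policy>"""

print(allowed_domains(sample))  # ['*.example.com', 'campaign-site.example.net']
```

Wildcard entries like `*.example.com` widen your subdomain search, and explicit entries often reveal those micro-campaign domains directly.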


First, I looked at the footer information on the site I found. For example, when I saw "© 2014 Company Name" in the footer, I also searched Google for "© 2015 Company Name". Pivoting on the copyright notice allowed me to find many [redacted] sites belonging to this company.
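The copyright pivot is easy to automate: generate one quoted query per year and feed them to your search tooling. A trivial sketch, with a placeholder company name:

```python
def copyright_dorks(company, start, end):
    """Build one quoted Google query per year from the footer's copyright pattern."""
    return [f'"© {year} {company}"' for year in range(start, end + 1)]

for dork in copyright_dorks("Company Name", 2014, 2016):
    print(dork)
```

Old years are especially useful, since forgotten sites tend to keep their original footer.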

Google Images

The official logos were very informative. The domain names were different, even though the company's logo was the same on the sites I discovered. In Google image search, you can retrieve the company's logos after selecting the "Small" size filter. From there, you can get domain information about the [redacted] company.

Keeping up with the user interface of web applications is also very important in the recon phase. After UI changes on a site, I test the same website again, assuming that the back end has changed as well. If any part changes, your attack surface may have grown.

You can use Visualping for this. It is a service that notifies you via email when there are minor or major changes to the current state of your websites. I use this service in my bug bounty research.
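If you prefer to roll your own monitoring, a whitespace-insensitive hash of the page is enough to detect that something changed between two visits. A minimal sketch with made-up page snapshots:

```python
import hashlib

def fingerprint(html):
    """Hash the page body, ignoring whitespace-only differences."""
    normalized = " ".join(html.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two hypothetical snapshots of the same page, taken days apart.
old = "<html><body><h1>Login</h1></body></html>"
new = "<html><body><h1>Login v2</h1></body></html>"

if fingerprint(old) != fingerprint(new):
    print("UI changed - re-test the target, the back end may have changed too")
```

Store the fingerprint per URL and compare on each crawl; a changed hash is your trigger to re-test.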

Developers, Humans.txt

Apart from that, we can find important information by searching for certain keywords on companies' GitHub pages: go to the search field and try "org:company" and similar queries. When we crawl a web application, we fetch robots.txt. We can also retrieve information from the past by searching for archived copies of the robots.txt file. Let's brainstorm about the humans.txt file with the same method. -> We are people, not machines.

From the comment lines in the source code we can get information about the developers, or we can access the site's humans.txt file. From there we can form a broad idea of the web application. For example, in humans.txt a developer might share their own files (login.php, core.js, etc.) on their GitHub page. We can also search for these files and find out where else they are used.
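Both robots.txt and humans.txt are simple line-oriented files, so mining them is a one-function job. Below is a sketch of a robots.txt parser that collects the Disallow paths, which are often the most interesting directories; the sample file is made up:

```python
def disallowed_paths(robots_txt):
    """Collect the paths a site asks crawlers to stay away from."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.strip()
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths

# Illustrative robots.txt content.
robots = """User-agent: *
Disallow: /admin/
Disallow: /old-backup/
"""

print(disallowed_paths(robots))  # ['/admin/', '/old-backup/']
```

Run the same loop over archived versions of the file and diff the results; paths that were removed over the years are worth a visit.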

Installation Instructions

Some of the files, such as .pdf and .doc, that you find via Google may contain internal information. For example, the phrase "Installation Guide" was life-saving for me. Such a document explains the installation step by step. When I opened these documents, the admin panel interface and the admin panel's requests appeared. You can also get the same kind of results from the company's SlideShare account.

Reverse Analytics

Google Analytics asks you to add a JS snippet to your web pages to collect visitor data. In this snippet there is a key starting with "UA" that identifies your website. With this ID you can discover other sites using the same "UA" value. There are online services available that can do this "reverse analytics" lookup for you.
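Extracting the UA ID from a page you already fetched is a simple regex job; the reverse lookup across the web then goes through one of those online services. A minimal sketch, with a made-up page snippet:

```python
import re

# Classic Universal Analytics property IDs look like UA-12345678-1.
UA_RE = re.compile(r"\bUA-\d{4,10}-\d{1,4}\b")

def tracking_ids(html):
    """Find every Google Analytics UA-... property ID in a page."""
    return sorted(set(UA_RE.findall(html)))

# Illustrative page source.
page = """<script>
  ga('create', 'UA-12345678-1', 'auto');
</script>"""

print(tracking_ids(page))  # ['UA-12345678-1']
```

Sites sharing the same property ID are usually run by the same owner, which ties otherwise unrelated domains back to your target.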


Special Thanks to Zinnur Yeşilyurt and Mert Taşçı