Vulnerability scanning with Nikto

Nikto is a popular and easy-to-use webserver assessment tool for finding potential problems and vulnerabilities quickly. This tutorial shows you how to scan webservers for vulnerabilities using Nikto in Kali Linux. Nikto ships as a standard tool with Kali Linux and should be among your first choices when pen testing webservers and web applications. According to the official Nikto website, Nikto scans for over 6,700 potentially dangerous files/programs, checks for outdated versions of over 1,250 servers, and checks for version-specific problems on over 270 servers. You should know that Nikto is not designed as a stealthy tool: it scans the target as fast as possible, which makes the scanning process very obvious in the log files of an intrusion detection system (IDS).

These are some of the major features in the current version of Nikto:

  • SSL support (Unix with OpenSSL or maybe Windows with ActiveState’s Perl/OpenSSL)
  • Full HTTP proxy support
  • Checks for outdated server components
  • Save reports in plain text, XML, HTML, NBE or CSV
  • Template engine to easily customize reports
  • Scan multiple ports on a server, or multiple servers via input file (including nmap output)
  • LibWhisker’s IDS encoding techniques
  • Easily updated via command line
  • Identifies installed software via headers, favicons and files
  • Host authentication with Basic and NTLM
  • Subdomain guessing
  • Apache and cgiwrap username enumeration
  • Mutation techniques to “fish” for content on web servers
  • Scan tuning to include or exclude entire classes of vulnerability
  • Guess credentials for authorization realms (including many default id/pw combos)
  • Authorization guessing handles any directory, not just the root
  • Enhanced false positive reduction via multiple methods: headers,
    page content, and content hashing
  • Reports “unusual” headers seen
  • Interactive status, pause and changes to verbosity settings
  • Save full request/response for positive tests
  • Replay saved positive requests
  • Maximum execution time per target
  • Auto-pause at a specified time
  • Checks for common “parking” sites
  • Logging to Metasploit
  • Thorough documentation

Another nice feature in Nikto is the ability to select which tests to run using the -Tuning parameter. This lets you run only the tests you need, which can save you a lot of time:

0 – File Upload
1 – Interesting File / Seen in logs
2 – Misconfiguration / Default File
3 – Information Disclosure
4 – Injection (XSS/Script/HTML)
5 – Remote File Retrieval – Inside Web Root
6 – Denial of Service
7 – Remote File Retrieval – Server Wide
8 – Command Execution / Remote Shell
9 – SQL Injection

a – Authentication Bypass
b – Software Identification
c – Remote Source Inclusion
x – Reverse Tuning Options (i.e., include all except specified)
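Tuning codes can also be combined by concatenating them into a single -Tuning argument. A small sketch (the hostname is a placeholder, as elsewhere in this tutorial):

```shell
# 2 = misconfiguration, 3 = information disclosure, 9 = SQL injection;
# the codes are simply concatenated into one -Tuning string.
tuning="239"
echo "nikto -host [hostname or IP] -Tuning $tuning"
```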

Nikto has its own updating mechanism. We encourage you to check for updates before using Nikto. Nikto can be updated using the following command:

nikto -update

Scanning webservers with Nikto

Let’s start Nikto to scan for interesting files with option 1 using the following command:
nikto -host [hostname or IP] -Tuning 1


Nikto will now display the Apache, OpenSSL and PHP versions of the targeted webserver. It will also give you an overview of possible vulnerabilities, including Open Sourced Vulnerability Database (OSVDB) references. When you search the OSVDB website for a reference code, it explains the possible vulnerability in more detail. The OSVDB project currently covers more than 120,980 vulnerabilities, spanning 198,973 products from 4,735 researchers, over 113 years.

Running all Nikto scans against a host

To run all scans against a particular host you can use the following command:

nikto -host [hostname or IP]

Running all scans will take a lot of time to complete.

Running Nikto against multiple hosts

Nikto offers several options to test multiple hosts:

  • By using a valid hosts file containing one host per line
  • Piping Nmap output to Nikto.

A valid hosts file is a text file containing the target hosts, one per line. Instead of passing a hostname as the argument to the -h option, pass the path to this hosts file.
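A minimal sketch of building such a file and pointing Nikto at it (the hosts below are made-up examples):

```shell
# Create a hosts file with one target per line (example hosts are placeholders).
printf '%s\n' 192.168.1.10 192.168.1.11 testserver.local > hosts.txt
cat hosts.txt

# Then point Nikto at the file instead of a single host:
# nikto -h hosts.txt
```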

Another solution is to pipe the Nmap output to Nikto. Nmap will output the valid hosts to Nikto and Nikto will run the selected scans against these hosts. The following command runs an Nmap scan for port 80, writing grepable output to stdout, which is what the -oG - flag does:

nmap -p80 [hostname or IP] -oG - | nikto -h -

Please note that you should use a dash (-) for Nikto’s host option to use the hosts supplied by Nmap.
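If you save the grepable output to a file first, you can filter it down to hosts that actually have port 80 open before handing it to Nikto. The file contents below are a canned example of Nmap's -oG line format, not real scan output:

```shell
# Canned example of Nmap's grepable (-oG) output; a real file would come
# from something like: nmap -p80 [network range] -oG scan.gnmap
cat > scan.gnmap <<'EOF'
Host: 192.168.1.10 ()  Ports: 80/open/tcp//http///
Host: 192.168.1.11 ()  Ports: 80/closed/tcp//http///
EOF

# Keep only hosts where port 80 is open; field 2 is the IP address.
awk '/80\/open/ { print $2 }' scan.gnmap > hosts.txt
cat hosts.txt   # -> 192.168.1.10

# nikto -h hosts.txt   # scan just the hosts with a listening webserver
```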

In your Nikto scan options, use -F htm to set the output format to HTML.

Below is an example command:

nikto -h [hostname or IP] -Display V -F htm -output niktoscan.html


Eliminating too many TIME_WAIT sockets

Some time in your life you’ll run across an Apache server that always has tons of TIME_WAIT connections just seeming to hang out. While these don’t take up as many resources as an ESTABLISHED connection, why keep them around so long? This short article will show you how to identify how many you have, and how to tell your server to reduce them, reuse and recycle them (see, recycling IS a good thing).

First, SSH into your server and become root.

Next, let’s see how many TIME_WAITs you have hanging out:

netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n

You should see something like this:

      1 established)
      1 Foreign
      3 FIN_WAIT2
      5 LAST_ACK
      6 CLOSING
      9 SYN_RECV
     22 FIN_WAIT1
     26 LISTEN
    466 TIME_WAIT
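The pipeline simply tallies the state column, which you can verify by feeding it canned input (the netstat lines below are fabricated for illustration):

```shell
# Fabricated netstat lines; the real input comes from `netstat -nat`.
sample='tcp 0 0 10.0.0.1:80 10.0.0.2:5000 TIME_WAIT
tcp 0 0 10.0.0.1:80 10.0.0.3:5001 TIME_WAIT
tcp 0 0 10.0.0.1:80 10.0.0.4:5002 ESTABLISHED'

# Same pipeline as above: take column 6 (the state), count each unique value.
echo "$sample" | awk '{print $6}' | sort | uniq -c | sort -n
```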

So – let’s get that number smaller.

See what your current values are in these files by catting them to the screen:

cat /proc/sys/net/ipv4/tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_tw_recycle
cat /proc/sys/net/ipv4/tcp_tw_reuse

If you have default settings, you’ll probably see values of 60, 0 and 0. Let’s change those values to 30, 1, 1. (A word of warning: tcp_tw_recycle is known to break connections from clients behind NAT, and it was removed entirely in Linux 4.12; on modern kernels, stick to tcp_fin_timeout and tcp_tw_reuse.)

echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

Now, let’s make the change persistent by adding them to the sysctl.conf file. First, however, let’s make sure there aren’t already entries in there for these settings. cat the file and grep for the changes we’re about to make:

cat /etc/sysctl.conf | grep "net.ipv4.tcp_fin_timeout"
cat /etc/sysctl.conf | grep "net.ipv4.tcp_tw_recycle"
cat /etc/sysctl.conf | grep "net.ipv4.tcp_tw_reuse"

Make notes of what your settings are if you had any results.

Now, edit the /etc/sysctl.conf with your favorite editor and add these lines to the end of it (or edit the values you have in yours if they exist already):

# Decrease TIME_WAIT seconds
net.ipv4.tcp_fin_timeout = 30

# Recycle and Reuse TIME_WAIT sockets faster
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

Now, let’s rerun that command from before and see where your TIME_WAITs are at:

netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n

(You may need to wait at least a minute or so, depending on what your old values were, to see a change here.)



How to set up WebDAV hosting on Apache – TechRepublic

Original: How to set up WebDAV hosting on Apache – TechRepublic.

WebDAV (Web-based Distributed Authoring and Versioning) is a way to share files over HTTP, much like you would use Samba or NFS. It has more limitations, and less speed, than filesystems like Samba or NFS, but with the proliferation of web servers and the ability to reach websites from multiple clients in various locations, WebDAV certainly has its appeal. Unlike Samba or NFS, which are best suited for local area networks, you can use an HTTP server anywhere in the world and likewise access it from anywhere.

WebDAV support is also baked right into most modern operating systems, making it extremely easy to access as a client. Setting it up on the server, however, may be more of a challenge. Certainly setting it up correctly can be.

Using Apache on Red Hat Enterprise Linux 5 (or CentOS 5) as an example, let’s look at setting up a WebDAV server. First and foremost, you will need mod_dav and mod_dav_fs support, which can be found in the httpd package; if you have Apache installed, you will have support for WebDAV already available (other distributions may package WebDAV support modules separately, such as apache-mod_dav). The first step is to create /etc/httpd/conf.d/webdav.conf which will be where we configure WebDAV. The reason we are putting our configuration file there is due to this gem in /etc/httpd/conf/httpd.conf:

Include conf.d/*.conf

This tells Apache to automatically pick up all configuration files (*.conf) in /etc/httpd/conf.d/. The contents of /etc/httpd/conf.d/webdav.conf will look similar to this:

<IfModule mod_dav.c>
    LimitXMLRequestBody 131072
    DavLockDB /var/dav/DavLock
    Alias /dav "/srv/www/dav"
    <Directory /srv/www/dav>
        Dav On
        Options +Indexes
        IndexOptions FancyIndexing
        AddDefaultCharset UTF-8
        AuthType Basic
        AuthName "WebDAV"
        AuthUserFile /etc/httpd/conf/dav.passwd
        Require valid-user
    </Directory>
</IfModule>

This sets up the required WebDAV settings necessary to make it work properly. Here we have defined a number of things; one that is important to note is the location of the DavLockDB file (this must be writable by the user running Apache — usually apache or nobody). The directory storing the lock file needs to be writable, so create a new directory specifically for this purpose:

# mkdir -p /var/dav
# chown nobody:nobody /var/dav

You will also want to ensure that /srv/www/dav is writable by the user running Apache as well:

# mkdir -p /srv/www/dav
# chown nobody:nobody /srv/www/dav
# chmod 755 /srv/www/dav

Finally, you need to create the password file for authentication. In the above example the password file was specified as /etc/httpd/conf/dav.passwd, so use htpasswd to create it:

# htpasswd -c /etc/httpd/conf/dav.passwd [user]

You will be prompted for [user]’s password and then htpasswd will create the file. At this point you can restart Apache:

# service httpd restart

You can now point a web browser at the /dav path on your server and it should prompt you for a login. You won’t be able to do anything special in the web browser, but you can use another WebDAV client to try uploading and downloading files, such as cadaver:

# cadaver http://[hostname]/dav/
Authentication required for Private on server `[hostname]':
Username: user
dav:/dav/> ls
Listing collection `/dav/': succeeded.
Coll:   omnifocus                              0  Aug  8 14:30
        somefile.txt                         115  Jul 17 15:03

For more security, wrap WebDAV up in SSL by adding it to an appropriate SSL-based virtual host. This will encrypt your password and data-in-transit.
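A sketch of what such an SSL virtual host might look like; the port and certificate paths are assumptions that will vary per system:

```apache
<VirtualHost *:443>
    SSLEngine on
    # Certificate paths are placeholders; point these at your real cert/key.
    SSLCertificateFile /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key

    # Reuse the same Alias and <Directory /srv/www/dav> block from
    # webdav.conf here, so /dav is only reachable over HTTPS.
    Alias /dav "/srv/www/dav"
</VirtualHost>
```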

This should also work with most other Linux distributions using Apache, possibly changing some paths to configuration files or package names. All in all, setting up WebDAV doesn’t have to be difficult, but all of these steps are required, otherwise some WebDAV clients will fail with inexplicably weird errors. This also provides a quick and easy way to store files in a remote location, securely, with the ability to obtain them from anywhere.




Negotiation: Discovered File(s) Matching Request: None Could Be Negotiated

Original: Negotiation: Discovered File(s) Matching Request: None Could Be Negotiated.

Posted June 24, 2011 at 9:30 AM by Ben Nadel

Yesterday, I lost at least two hours trying to figure out why my local copy of a website was throwing 404 (File Not Found) errors. We had just implemented some URL rewriting on the production site and things appeared to be working fine. On my local computer, however, the URL rewriting would only work for some URLs. After a lot of Googling, a little bit of a meltdown, and walk around the block to clear my head, I finally figured out what was going wrong: MultiViews.

For this particular website, we were using Apache to create a more resource-oriented URL scheme. So, where we used to have URLs that looked like this:


… we were going to be moving to a URL scheme that looked more like this:


For the majority of URLs in the application, this was working fine. But for a few URLs, this came back as a 404 File Not Found error. When I looked in the Apache error logs, I kept seeing log items like this:

[Thu Jun 23 16:55:06 2011] [error] [client] Negotiation: discovered file(s) matching request: /Sites/ (None could be negotiated).

My experience with the Apache HTTP server is somewhat limited; as such, this error didn’t really mean anything coherent to me at the time. And much of my Googling wasn’t yielding anything too valuable. Finally, however, I did come across a threaded discussion that mentioned “MultiViews”. On the Apache website, the effect of MultiViews, as documented under Content Negotiation, is as follows:

If the server receives a request for /some/dir/foo, if /some/dir has MultiViews enabled, and /some/dir/foo does not exist, then the server reads the directory looking for files named foo.*, and effectively fakes up a type map which names all those files, assigning them the same media types and content-encodings it would have if the client had asked for one of them by name. It then chooses the best match to the client’s requirements.

This is exactly what was happening! When the user requested the non-existent URL:


… Apache was finding this physical file on a per-directory basis:


… and then using Content Negotiation to try to figure out which file it should serve up based on the request headers. And, since I didn’t have any file types configured for content negotiation, Apache didn’t know how to respond and just returned a 404.

When I checked in my Virtual Host configuration, sure enough, MultiViews was enabled:

Options Indexes FollowSymLinks MultiViews

The moment I removed the MultiViews option:

Options Indexes FollowSymLinks

… everything started working fine.
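If you can't touch the virtual host configuration itself, the same fix can be applied per directory. Assuming AllowOverride permits the Options directive, a .htaccess sketch would be:

```apache
# Disable MultiViews for this directory only; the leading "-" removes just
# this option without disturbing the others inherited from the vhost.
Options -MultiViews
```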

This goes to show you how bad it is to simply enable settings when you are not sure what they do. I’ve grown to love my Apache server; but, clearly, there is so much more that I need to learn about it. When it works, it works; but, when it doesn’t, I have no idea what’s going on.


