Tuesday, March 18, 2014

Moving from Ntop to Ntopng

I used to start ntop this way, so my first attempt at ntopng simply swapped in the new binary:
screen -d -m ntopng -u ntop -m my.subnets,myothersubnets -i eth2,eth3 -W 4443 -w 40000 -M &

But this failed because the redis server was not running (even though it had been installed as one of the dependencies):

18/Mar/2014 16:50:33 [Redis.cpp:43] ERROR: ntopng requires redis server to be up and running
18/Mar/2014 16:50:33 [Redis.cpp:44] ERROR: Please start it and try again or use -r
18/Mar/2014 16:50:33 [Redis.cpp:45] ERROR: to specify a redis server other than the default

So redis now needs to be running. I modified /etc/redis.conf to point its "dir" variable at /opt/redisdb, changed the owner of that directory to redis, and chmodded it to 700. In redis.conf:

#dir /var/lib/redis/
dir /opt/redisdb/
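The directory prep itself is just the usual three steps; a minimal sketch of what I described above (run as root; /opt/redisdb and the redis user are specific to this setup):

```shell
# create the new dump directory and hand it to the redis user
mkdir -p /opt/redisdb
chmod 700 /opt/redisdb          # keep the dump private to redis
if id redis >/dev/null 2>&1; then
  chown redis /opt/redisdb      # redis must be able to write its dump here
fi
```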

If you're running selinux, you'll probably also want to copy the selinux context info onto the new directory. You can see the existing context with:
ls -laZ /var/run/redis/
drwxr-xr-x. redis root  system_u:object_r:var_run_t:s0   .
drwxr-xr-x. root  root  system_u:object_r:var_run_t:s0   ..
-rw-r--r--. redis redis unconfined_u:object_r:initrc_var_run_t:s0 redis.pid

chcon --reference /var/run/redis /opt/redisdb
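One caveat: chcon changes don't survive a filesystem relabel. If you want the context to persist, record a rule with semanage and apply it with restorecon instead (a sketch, assuming the semanage tooling is installed and that var_run_t, the type on the reference directory above, is what you want):

```shell
# record a persistent file-context rule for the new directory...
semanage fcontext -a -t var_run_t '/opt/redisdb(/.*)?'
# ...then apply it (restorecon re-reads the policy, so this survives relabels)
restorecon -Rv /opt/redisdb
```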

I started redis (sudo service redis start); it listens on port 6379, on localhost only.
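Whether it actually came up is easy to probe without redis-cli, using bash's /dev/tcp pseudo-device (just a quick connectivity check, not part of the setup proper):

```shell
# try to open a TCP connection to localhost:6379
if (exec 3<>/dev/tcp/127.0.0.1/6379) 2>/dev/null; then
  echo "redis is up on 6379"
else
  echo "nothing listening on 6379"
fi
```

With redis-cli on hand, `redis-cli ping` answering PONG tells you the same thing.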

Ntopng also likes to have a data directory, so I created /opt/ntopng:

sudo mkdir /opt/ntopng
sudo chown ntop /opt/ntopng
sudo chmod 700 /opt/ntopng

sudo screen -d -m ntopng -u ntop  -r localhost:6379 -m my.subnets,myothersubnets -i eth2,eth3 -W 4443 -w 40000 -M &

But now it was listening on eth0: ntopng didn't like the ordering of the arguments and fell back to the first interface. I saw this warning:

18/Mar/2014 16:45:58 [NetworkInterface.cpp:79] WARNING: No capture interface specified
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1438] Available interfaces (-i ):
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 1. eth0 (eth0)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 2. eth1 (eth1)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 3. usbmon1 (USB bus number 1)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 4. eth2 (eth2)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 5. usbmon2 (USB bus number 2)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 6. usbmon3 (USB bus number 3)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 7. usbmon4 (USB bus number 4)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 8. any (Pseudo-device that captures on all interfaces)
18/Mar/2014 16:45:58 [NetworkInterface.cpp:1459] 9. lo (lo)
18/Mar/2014 16:49:52 [PcapInterface.cpp:68] Reading packets from interface eth0...
18/Mar/2014 16:49:52 [Ntop.cpp:573] Registered interface eth0 [id: 0]

Not desirable, so the command becomes the one below. I removed -M (I'm not sure what, if anything, replaces "don't merge interfaces"), changed -u to -U, pointed -d at the new data directory, and added -n 0 to resolve only the local IP addresses listed in -m:

sudo screen -d -m ntopng -i eth1 -i eth2 -d /opt/ntopng -n 0 -W 4443 -w 40000 -m mysubnets -r localhost:6379 -U ntop &

One last thing: you now need to set the password for the admin user, either via a file, via the gui (after logging in as admin/admin), or via the redis-cli client. I chose the last option.

redis-cli SET user.admin.password `echo -n "mylousypassword" | md5sum | cut -f 1 -d " "`
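One portability note on the hashing half of that command: `echo -n` isn't honored by every shell (some print a literal "-n"), so `printf` is the safer way to feed the password to md5sum:

```shell
# same digest as the echo -n version, but portable across shells
HASH=$(printf '%s' "mylousypassword" | md5sum | cut -d ' ' -f 1)
echo "$HASH"    # 32 hex characters
```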

You can see the users in the gui or here:

redis-cli KEYS 'user*'

You can add a new user either through the gui or directly via redis-cli like so:

redis-cli SET user.mynewuser.password `echo -n "mylousypassword" | md5sum | cut -f 1 -d " "`

Wednesday, March 12, 2014

Splunk: Importing Oneshot Files with a Source Rename

I had to import some old gzipped log files, so I simply ran:

splunk add oneshot /var/log/mylogfile.1.gz

The problem was that the source was /var/log/mylogfile.1.gz and not /var/log/mylogfile - breaking some of the field extractions I use. I found that I could not use wildcards in the source to drive the field extraction, and I couldn't key off the sourcetype as there were multiple sourcetypes involved.

1. I figured out the time ranges of the affected data and deleted it using a search

2. I re-added the data using a oneshot with a rename-source

splunk add oneshot /var/log/mylogfile.1.gz -rename-source /var/log/mylogfile

(repeat multiple times for each compressed logfile of the same name)
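The repetition is loop-friendly; a sketch, assuming all the rotated copies follow the /var/log/mylogfile.N.gz naming:

```shell
# re-add every rotated copy under the original source name
for f in /var/log/mylogfile.*.gz; do
  [ -e "$f" ] || continue   # glob matched nothing; skip
  splunk add oneshot "$f" -rename-source /var/log/mylogfile
done
```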

Problem solved - though this will go against your quota as the data is being re-indexed.