How to install Logstash on Windows Server 2012 with Kibana in IIS.

This post is currently outdated; please have a look here to see an up-to-date version:
https://community.ulyaoth.net/threads/how-to-install-logstash-on-a-windows-server-with-kibana-in-iis.17/
This guide will be updated as soon as possible.

In this guide I will show that it is also possible to run Logstash on a Windows Server 2012 machine and use IIS as the web server. This guide probably requires some improvements and optimizations, but it should give you a good example of how to set everything up.

Please be aware that you will probably have to configure Kibana in a different way than I did to make everything look shiny, and you will probably have to use a different kind of Logstash configuration to make things show as you would like. I am also aware that Logstash provides all-in-one packages that have ElasticSearch and Kibana built in, however I still feel setting things up separately is more appropriate.

The config below is just meant to be an example to show that everything works just as fine on Windows as it does on Linux.

If you are interested in Linux then please have a look at my other guide at:

Now let's start with the guide!

Step 1: Download Logstash, Kibana and ElasticSearch.
Simply go to "http://www.elasticsearch.org/overview/elkdownloads/" and download the following packages:

Logstash: https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.zip
Kibana: https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.zip
Elasticsearch: https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.2.1.zip

Step 2: Extract all packages
I created a folder called "basefarm" (c:\basefarm\) and extracted everything there to make it easier.

So, for me it looks like this now:
c:\basefarm\elasticsearch
c:\basefarm\kibana
c:\basefarm\logstash

Step 3: Download the JDK version of Java and install it.
Go to the Java website: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Accept the license and then download the "Windows x64 (jdk-8u5-windows-x64.exe)" package.
Now install it!

Step 4: Add the JAVA_HOME variable to the server
Now right-click on "This PC" and choose "Properties". In the window that opens, next to your computer name and full computer name, click on "Change settings".
On the window that opens go to the Advanced tab and click on “Environment Variables”.
In the bottom box called "System Variables", click on "New" and add the following:
Variable Name: JAVA_HOME
Variable value: C:\Program Files\Java\jdk1.8.0_05

It should look like this:
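If you prefer the command line, the same variable can be set from an elevated Command Prompt; a quick sketch, assuming the same JDK path as above (the /M switch makes it a system-wide variable, and you need to open a new Command Prompt afterwards to pick it up):

setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_05" /M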

Step 5: Download the required configuration files
Logstash.conf: https://github.com/sbagmeijer/ulyaoth/blob/master/guides/logstash/windows/logstash.conf

Place this file in:
C:\basefarm\logstash\bin

ulyaoth.json:
https://raw.githubusercontent.com/sbagmeijer/ulyaoth/master/guides/logstash/kibana/dashboard/ulyaoth.json

Place this file in:
C:\basefarm\kibana\app\dashboards

Rename "ulyaoth.json" to "basefarm.json" so you end up with "C:\basefarm\kibana\app\dashboards\basefarm.json".
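If you prefer to fetch these files from PowerShell instead of a browser, here is a minimal sketch (PowerShell 3.0 ships with Server 2012; the first URL is my guess at the raw equivalent of the GitHub page linked above, so verify it, and the second download is saved straight under the renamed file name):

Invoke-WebRequest -Uri "https://raw.githubusercontent.com/sbagmeijer/ulyaoth/master/guides/logstash/windows/logstash.conf" -OutFile "C:\basefarm\logstash\bin\logstash.conf"
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/sbagmeijer/ulyaoth/master/guides/logstash/kibana/dashboard/ulyaoth.json" -OutFile "C:\basefarm\kibana\app\dashboards\basefarm.json"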

Step 6: Configure Kibana & Logstash
Open the file: C:\basefarm\kibana\config.js

and change the following line:
default_route : '/dashboard/file/default.json',

to:
default_route : '/dashboard/file/basefarm.json',

Now open the file: C:\basefarm\kibana\app\dashboards\basefarm.json

and change the following line:
"title": "Ulyaoth: Logstash Search",

to:
"title": "Basefarm: Logstash Search",

Step 7: Install IIS
Go to "Server Manager" and choose "Add Roles and Features". From the role list, choose "Web Server (IIS)", then continue through the wizard and let it install.

Step 8: Open IIS Manager and stop the “Default Web Site”
Just press the stop button like you see below in the picture:

Step 9: Create a new website for Kibana as shown below
Right click on “sites” in the left part of IIS Manager and click “Add Website”.

Fill it in something like this:

It should automatically start.

Step 10: Start Elasticsearch and put it on auto-start
Open a console and go to “c:\basefarm\elasticsearch\bin\”
Now type the following command:
service install

You should see something like:

Now type the following:
service manager

You should see the elasticsearch service manager:

On the first tab, change the "Startup type" from Manual to Automatic and then press "Apply". This should make Elasticsearch start automatically on server boot.

This window contains some more options, such as how much memory Elasticsearch will use; you can find this under the "Java" tab. I would suggest making this fit your server: if it will handle a huge amount of logs, I would increase the "Maximum memory pool" from 1024 to a higher amount.

Before you close the window make sure to press “Start” so it actually will run right now 🙂

This is everything to start ElasticSearch automatically on boot. To test that it is working, open a browser and go to this url: http://127.0.0.1:9200/

If you see a json string something like what you see below in the picture then it means it is running:
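For reference, the response is a small JSON document roughly of this shape (the name, version and build values will differ on your install):

{
  "status" : 200,
  "name" : "some-node-name",
  "version" : {
    "number" : "1.2.1",
    ...
  },
  "tagline" : "You Know, for Search"
}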

Step 11: Start Logstash & Autostart it
For this step we need another small program to create a proper Windows service, so please go ahead and download “NSSM” (the Non-Sucking Service Manager) from: http://nssm.cc/
http://nssm.cc/release/nssm-2.23.zip

Once you have the zip file, simply unzip it and copy "nssm.exe" from the "nssm-2.23\win64" folder to "C:\basefarm\logstash\bin", so you end up with "C:\basefarm\logstash\bin\nssm.exe".

Technically you do not have to copy this file, but it keeps things clean and you have it available for any future use; you never know. 🙂

Now open a Command Prompt and type:
cd C:\basefarm\logstash\bin

And then type the following:
nssm install logstash

You will now see a GUI to create a service; fill in the following:
Path: C:\basefarm\logstash\bin\logstash.bat
Startup directory: C:\basefarm\logstash\bin
Arguments: agent -f C:/basefarm/logstash/bin/logstash.conf

It should look like this:

If all looks okay, double-check on the "Details" tab that "Startup type" is set to "Automatic" and then press "Install service". That is all it takes for Logstash to start automatically on server boot.
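If you prefer the command line over the GUI, roughly the same service can be created with nssm directly; this is a sketch assuming nssm 2.23 (which supports the "set" command), so double-check the parameter names against the nssm documentation:

nssm install logstash C:\basefarm\logstash\bin\logstash.bat agent -f C:/basefarm/logstash/bin/logstash.conf
nssm set logstash AppDirectory C:\basefarm\logstash\bin
nssm set logstash Start SERVICE_AUTO_START
nssm start logstash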

If you wish to adjust the memory Logstash uses, simply open the file "C:\basefarm\logstash\bin\logstash.bat" and change the following two lines according to the amount of memory you want it to use:
[code]
set LS_MIN_MEM=256m
set LS_MAX_MEM=1g
[/code]

Step 12: Edit your host file (optional)
I only do this step because I run everything on a test server with no internet connection.

Open: C:\Windows\System32\drivers\etc\hosts

Now add:
127.0.0.1 loghost.basefarm.com

And save the file.

Now reboot your server so you can test that everything is automatically coming online.

This is all you should have to do. Once the server is back online you will have Logstash up and running, so just go to:
http://loghost.basefarm.com/

And you should see:

As you can see, your Kibana IIS logs are now shipped to the Logstash instance.

Just remember: if you run this website over the internet, you probably need to make sure port 9200 is accessible, but I would restrict it to internal use only so Kibana can reach it but not the outside world.
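One hedged way to do that on the loghost itself is a Windows Firewall rule that only allows internal addresses to reach Elasticsearch; a sketch using netsh, assuming 192.168.0.0/16 is your internal range (adjust it to your own network):

netsh advfirewall firewall add rule name="Elasticsearch 9200 internal only" dir=in action=allow protocol=TCP localport=9200 remoteip=127.0.0.1,192.168.0.0/16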

If you want to ship logs from another server to your loghost server, I would suggest having a look at a program called "nxlog" (http://nxlog-ce.sourceforge.net/); it is a fairly simple way of shipping logs to Logstash and works perfectly on Windows.
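To give a rough idea, a minimal, hypothetical nxlog.conf route for shipping IIS logs might look like the sketch below; the log file path, host name and port 5544 are assumptions here, so match them to your own IIS site and to whatever tcp input your logstash.conf actually listens on:

<Input iis>
    Module  im_file
    File    "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex*.log"
</Input>

<Output logstash>
    Module  om_tcp
    Host    loghost.basefarm.com
    Port    5544
</Output>

<Route iis_to_logstash>
    Path    iis => logstash
</Route>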

If you have any suggestions to improve this guide, please feel free to update the configs on GitHub or to provide me with the information so I can update the guide and help others!

I also would like to thank “Milo Bofacher” for pointing to “nssm” and “nxlog”!

How to install MongoDB on Windows Server 2012 with a replication set.

This post is currently outdated; please have a look here to see an up-to-date version:
https://community.ulyaoth.net/threads/how-to-install-mongodb-on-windows-with-a-replication-set.18/
This guide will be updated as soon as possible.

In this guide I will show some simple steps for setting up a MongoDB installation with a replica set on Windows Server 2012.

For this setup, I used the following three servers that have Windows 2012 Standard edition installed.

bf-mongodb01: 192.168.1.42 / 4gb ram / 2 cpu
bf-mongodb02: 192.168.1.43 / 4gb ram / 2 cpu
bf-mongodb03: 192.168.1.44 / 4gb ram / 2 cpu

Workgroup: bf-mongodb

They all have to be in the same workgroup or in the same domain. If they are not, you have to add all the servers to your hosts file so MongoDB knows how to connect to them.
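In that case the hosts file (C:\Windows\System32\drivers\etc\hosts) on each server would get entries like these, based on the addresses above:

192.168.1.42 bf-mongodb01
192.168.1.43 bf-mongodb02
192.168.1.44 bf-mongodb03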

Also, for this example I installed everything as "Administrator". Depending on how you are going to use it, I would suggest creating a non-admin user to run all of this.

So let's start!

Step 1: Download MongoDB (on all three servers)
MongoDB provides Windows installer packages, so simply download their msi file from their website http://www.mongodb.org/.
https://fastdl.mongodb.org/win32/mongodb-win32-x86_64-2008plus-2.6.3-signed.msi

Even though it says 'Windows 2008', it works perfectly on Windows 2012!

Step 2: Install MongoDB
Just follow the pictures below to install MongoDB. You have to do this on all three servers.


Just press ‘next’.


Read the license, tick the box to accept it and press 'next'.


Choose to install the "Complete" version.


Just press ‘Install’ to start the installation.


The installation should now be finished, so simply press 'Finish'.

Remember, you have to do this on all three of your servers.

Step 3: Create a database and log directory (on all three servers)
Create the following directories:
C:\basefarm\mongodb\data\db
C:\basefarm\mongodb\data\log
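From a Command Prompt that is simply (mkdir creates the intermediate folders as well):

mkdir C:\basefarm\mongodb\data\db
mkdir C:\basefarm\mongodb\data\log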

Step 4: Fix the Windows firewall (on all three servers).
Go to your “Control Panel” and then click on “Network and Internet”. Once there, click on “Network and Sharing Center” and then at the right side at the bottom click on “Windows Firewall”.

You should now see your Windows firewall like this:

In this window, click on "Allow an app or feature through Windows Firewall". The window will change; now click on "Allow another app..." and fill everything in as shown below.

If everything looks as above, press "OK" to close the firewall configuration window.
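If you prefer not to click through the GUI, a hedged alternative is to open MongoDB's default port 27017 from a Command Prompt with netsh (a sketch; tighten it to your internal subnet where possible):

netsh advfirewall firewall add rule name="MongoDB 27017" dir=in action=allow protocol=TCP localport=27017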

Step 5: Create a MongoDB config file (on all three servers)
Open a notepad and add the following information:
logpath=C:\basefarm\mongodb\data\log\mongod.log
dbpath=C:\basefarm\mongodb\data\db
bind_ip=0.0.0.0
replSet=bf

Once added, save the file as “mongod.cfg” in the directory:
“C:\Program Files\MongoDB 2.6 Standard\”
You should end up with
“C:\Program Files\MongoDB 2.6 Standard\mongod.cfg”

Step 6: Create a service that will automatically start MongoDB (on all three servers)
Open a Command Prompt and type the following:
sc.exe create MongoDB binPath= "\"C:\Program Files\MongoDB 2.6 Standard\bin\mongod.exe\" --service --config=\"C:\Program Files\MongoDB 2.6 Standard\mongod.cfg\"" DisplayName= "MongoDB 2.6 Standard" start= "auto"

This should create a service for MongoDB. If you did it correct, it should look like this:

Step 7: Start MongoDB (on all three servers)
Just restart the server and MongoDB should automatically start, so it is a good test that the previous commands worked.

Step 8: Go into the MongoDB shell (only on server one)
Go to “C:\Program Files\MongoDB 2.6 Standard\bin” and double click on “mongo”. A terminal window should open that looks like this:

Step 9: Create the replica set in the mongo shell. (only on server one)
While being in the mongo shell type the following commands:
rs.initiate()
rs.add("bf-mongodb02:27017")
rs.add("bf-mongodb03:27017")
cfg = rs.conf()
cfg.members[0].priority = 100
cfg.members[1].priority = 50
cfg.members[2].priority = 50
rs.reconfig(cfg)

Step 10: Test if your configuration is working. (only on server one)
Still in the mongo shell, type:
rs.status()

You should see something like this if everything is correct:

Congratulations, you have now successfully installed MongoDB on Windows and set it up in a replica set :)! Now, let's test that the replication works by creating a database on the master (mongodb01).

Step 11: Create a test collection with some data on the master (only on server one)
In the mongo shell type the following:
use basefarm
bf = { name : "basefarmblog" }
db.Data.insert( bf )
show dbs
show collections
db.Data.find()

You should see something like this:

As you can see, "show dbs" shows that you have a database basefarm, "show collections" shows that you have the collection Data, and "db.Data.find()" shows that the Data collection contains the information "basefarmblog".
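For reference, the document returned by "db.Data.find()" should look roughly like this (the ObjectId will differ):

{ "_id" : ObjectId("..."), "name" : "basefarmblog" }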

If everything did work as intended, all this should have been replicated to your servers mongodb02 and mongodb03, so let’s test it!

Step 12: Check if the slaves have data (on server two or three)
Go to your server two or three and open a mongo shell by double clicking on the “mongo” file and run the following commands:
show dbs
rs.slaveOk()
use basefarm
show collections
db.Data.find()

If everything worked you should see the following:

I used the command "rs.slaveOk()" because it allows you to read from the slave; by default this is not enabled.

Well, this was it. Everything worked and you can now use your MongoDB replica set for anything you like!

As always, if you have any improvements or see any mistakes, please let me know. I am always open to hear your ideas or to learn from you!

Resizing mgmt_tablespace in Grid Control 12c and reclaiming space

We noticed the MGMT_TABLESPACE in our 12c Grid Control production database was very big. Almost 380gb of data was stored in this tablespace.

I did not think anything was wrong because of all the targets registered in this Grid Control instance and the frequent usage of this Grid Control.

Until someone looked into this and found out it was a bug in 12c…

Note 1502370.1 describes this bug and also the solution for this. I had to install a patch and truncate the em_job_metrics table. After applying the patch and truncating the table, there was only around 10gb of data left in the MGMT_TABLESPACE.

Because I already experienced something like this five years ago in a 10g OEM database, I knew that it is not easy to reclaim the space in this tablespace. And we had a lot to reclaim: only 10gb of data in a 380gb tablespace…

I decided to check My Oracle Support; maybe someone else had already had the same problem. I found bug 17461366, which makes it impossible to reorg the mgmt_tablespace because of AQ objects: the same problem I faced five years ago. Because I really wanted to reclaim the free space (370gb), I decided to follow my own notes, although those steps were performed on a 10g OEM environment and now I was working with a 12c Grid Control environment.

Just like 5 years ago, I could think of two ways to reclaim the space:

1. export mgmt_tablespace, drop the tablespace and import it again

2. export sysman, drop the repository, run scripts from the oms_home and import sysman again

Export MGMT_TABLESPACE

Unfortunately, dropping the mgmt_tablespace was a mission impossible. The export succeeded without errors, but I was not able to drop the tablespace. After dropping some related objects that were showing in the errors I received, I decided this was not going to work. I restored the database and executed option 2.

Export Sysman user(s)

I decided to follow note 388090.1. This note describes a platform migration for a 10g Grid Control environment, but I could not find any document about a 12c Grid Control environment.

Just to be sure, I also exported (expdp) the other sysman users (sysman_apm, sysman_mds, sysman_opss and sysman). I also set job_queue_processes to zero.
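The job_queue_processes change itself is a one-liner in SQL*Plus; a quick sketch, noting the current value first so you can restore it later:

SQL> show parameter job_queue_processes
SQL> alter system set job_queue_processes=0;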

The next step was to drop the current repository using the RepManager utility ($OMS_HOME/sysman/admin/emdrep/bin).
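A rough sketch of the command form (the exact switches differ per release, so check the RepManager help output for your version before running it):

cd $OMS_HOME/sysman/admin/emdrep/bin
./RepManager <repository_host> <listener_port> <SID> -action drop -dbUser sys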

RepManager also did not drop the mgmt_tablespace; I had to drop the tablespace myself. But after deleting the repository I was able to drop it. The repository drop did drop all the sysman users and the other tablespaces.

According to the note, before the impdp, I had to run some scripts from the $OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin directory.

These scripts create the sysman users and tablespaces. The following SQL scripts were executed:

– admin_create_tablespaces.sql

– admin_create_repos_user.sql

– admin_pre_import.sql

– admin_sys_procs.sql

– admin_profiles.sql

– admin_grants_repos_user.sql

– admin_grants_view_user.sql

Now you should be ready to import the sysman schema again (impdp). But the import showed a lot of errors. The main reason: the sysman_ro user was not available and the mgmt_ad4j_ts tablespace was not created. I decided to drop the sysman user and start over again, but this time, instead of running the scripts before the import, I created a whole new repository using RepManager.

The new repository did have the sysman_ro user and also the missing tablespace.

At this point I started the import again, using the 'table_exists_action=replace' option. The repository creation had already created the sysman objects, so I wanted the impdp to replace the already created tables with the tables from the dump. I noticed one table creation error in the logfile: the em_job_type_creds_info table was not created by impdp, so I had to create this one manually after the import.

Also, the other sysman users were not created by RepManager; I had to create and import these users (sysman_mds, sysman_apm and sysman_opss) as well.

After the import I returned to the note again, to check for post import steps.

Again in $OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin

– admin_recompile_invalid.sql

– admin_create_synonyms.sql

– admin_post_import.sql

Reset the job_queue_processes to its original value and submit the EM dbms jobs:

– admin_submit_dbms_jobs.sql

After compiling, there were still problems with aq objects.

I dropped the AQ queue tables (exec dbms_aqadm.drop_queue_table) and created them again (dbms_aqadm.create_queue_table). This solved the problems and also cleared the invalid objects.
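In SQL*Plus that boils down to something like the hedged sketch below per affected queue table; the queue table and payload type names are placeholders here, so take the real ones from the errors or from a working repository:

SQL> exec dbms_aqadm.drop_queue_table(queue_table => 'SYSMAN.<QUEUE_TABLE>', force => TRUE);
SQL> exec dbms_aqadm.create_queue_table(queue_table => 'SYSMAN.<QUEUE_TABLE>', queue_payload_type => 'SYSMAN.<PAYLOAD_TYPE>');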

At this point, there were no invalid objects anymore, so the oms could be started again to see what happens.

The OMS started fine and I was able to log in to 12c Grid Control.

With 12c Grid Control running, I noticed the oms was bouncing every 12 minutes by itself.

I checked the emctl.msg file in /gc_inst/em/EMGC_OMS1/sysman/log for errors and found the following error:

HealthMonitor Nov 15, 2013 12:57:48 PM PbsAdminMsgListener error: PbsAdminMsgListener thread timed out.
Critical error err=3 detected in module PbsAdminMsgListener
OMS will be restarted. A full thread dump will be generated in the log file

It seemed the em_cntr_queue was missing. I checked and found out some other queues were also missing (not in the dump?). I recreated the missing queues (check the queues in an OTA Grid environment) and that solved the restart error; the OMS was not restarting by itself anymore.

Reclaimable free space

After these steps (which took me two days…) I reclaimed 370gb. The mgmt_tablespace is now 10gb (instead of the 380gb before), and the total size of the entire database shrank from more than 400gb to 22gb!!

How to install Logstash with Kibana interface on RHEL

This post is currently outdated; please have a look here to see an up-to-date version:
https://community.ulyaoth.net/threads/how-to-install-logstash-kibana-on-fedora-using-rsyslog-as-shipper.11/
This guide will be updated as soon as possible.

In this guide I will provide an example of how to set up a Logstash server with a Kibana interface that receives its logs from rsyslog. While there are multiple other ways to get logs into Logstash, in this guide I will focus on rsyslog only.

I am aware that in the new Logstash rpm everything, such as Kibana, is merged into one package, but I personally feel it is better to install things separately, as this gives you the possibility to update certain parts when you want without having to wait for a new rpm.

If you are going to use this in a production environment, please make sure to check the security implications of going the rsyslog way, as you will need to open a port; unless you are on an internal network, everyone will be able to ship logs to your Logstash server.

So what is Logstash!?:
“Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, Logstash comes with a web interface for searching and drilling into all of your logs.”

There are a lot of examples on the official Logstash website, so I definitely recommend having a look there!
Their website: http://www.logstash.net

Now let’s start, for this guide I will be using the following programs:
RHEL (I am using RHEL 6 for this guide)
Logstash
rsyslog
ElasticSearch
Nginx
Kibana

Step 1: Install Logstash
$ sudo yum localinstall https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.1-1_bd507eb.noarch.rpm

Since the RHEL 6 Nginx version is pretty old I will install a more upstream version from the Nginx website.

Step 2: Install the Nginx yum repository
$ sudo yum localinstall http://nginx.org/packages/rhel/6/noarch/RPMS/nginx-release-rhel-6-0.el6.ngx.noarch.rpm

Step 3: Add the official ElasticSearch repository for Version 1.1.x
$ sudo vi /etc/yum.repos.d/elasticsearch.repo

Step 4: Add the following content to this file
[elasticsearch-1.1]
name=Elasticsearch repository for 1.1.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

If you have the supplementary repository of RHEL 6 enabled, please use step 5a, which I recommend as it will use the official Oracle Java; if you do not, please use step 5b.
Step 5a: Install all required packages (with the supplementary repository)
$ sudo yum install java-1.7.0-oracle elasticsearch nginx rsyslog tar wget vim policycoreutils-python zip

Step 5b: Install all required packages (without supplementary repository)
$ sudo yum install java-1.7.0-openjdk elasticsearch nginx rsyslog tar wget vim policycoreutils-python zip

Step 6: Go to the Logstash config directory
$ cd /etc/logstash/conf.d

Step 7: Download the following Logstash config file
$ sudo wget http://trash.ulyaoth.net/trash/logstash/conf/logstash.conf

Step 9: Change the ownership of the Logstash config file
$ sudo chown logstash:logstash logstash.conf

Step 10: Create the following directories:

$ sudo mkdir -p /var/log/nginx/kibana
$ sudo mkdir -p /usr/share/nginx/kibana/public
$ sudo mkdir -p /etc/nginx/sites-available
$ sudo mkdir -p /etc/nginx/sites-enabled

Step 11: Go to the nginx directory
$ cd /etc/nginx/

Step 12: Delete the current nginx.conf
$ sudo rm -rf nginx.conf

Step 13: wget a new nginx.conf
$ sudo wget http://trash.ulyaoth.net/trash/nginx/conf/nginx.conf

Step 14: Open the new nginx.conf
$ sudo vim /etc/nginx/nginx.conf

Step 15: Change the following lines to fit your CPU count
worker_processes 1;
worker_connections 1024;

I have two virtual CPUs so I use:
worker_processes 2;
worker_connections 2048;

Personally I feel there is not much point going above 4 worker processes, however opinions are split about this.
Just save the file after you have added your changes.

Step 16: Go to the nginx vhost directory
$ cd /etc/nginx/sites-available/

In this guide I will use the official Kibana vhost that Kibana provides; you can see it here:
https://github.com/elasticsearch/kibana/blob/master/sample/nginx.conf

However, I added a few small changes, so you can use the official one above or my updated one, whichever fits you best. For this guide I will be using mine, which is based on the official one.

Step 17: wget the kibana vhost file
$ sudo wget http://trash.ulyaoth.net/trash/nginx/vhost/kibana

As you can see in my vhost file, I disabled the password-protected endpoints; you can enable them by removing the # and creating a password file (not part of this guide).

Step 18: Open the kibana vhost file
$ sudo vim /etc/nginx/sites-available/kibana

Step 19: Change the site name
Simply change the “logstash.ulyaoth.net” to whatever your Logstash url will be and save the file.
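If you prefer to do this non-interactively, a one-line sketch with sed (replace logstash.example.com with your own hostname):

$ sudo sed -i 's/logstash\.ulyaoth\.net/logstash.example.com/g' /etc/nginx/sites-available/kibana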

Step 20: Symbolic link the vhost file so nginx will load it
$ sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/kibana

Step 21: Go to the Kibana folder
$ cd /usr/share/nginx/kibana/public

Step 22: Download the latest Kibana version
$ sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-latest.tar.gz

Or if you are like me you can get a newer version directly from their GitHub. (can be experimental)
$ sudo wget https://github.com/elasticsearch/kibana/archive/master.zip

Step 23: Untar Kibana and fix the directory structure

$ sudo tar xzfv kibana-latest.tar.gz
$ sudo mv kibana-latest/* .
$ sudo rm -rf kibana-latest.tar.gz
$ sudo rm -rf kibana-latest

If you did download the “master.zip” file you will need to do the following instead:

$ sudo unzip master
$ sudo mv kibana-master/src/* .
$ sudo rm -rf master
$ sudo rm -rf kibana-master

Step 24: Open the config.js file
$ sudo vi config.js

Step 25: Change the file slightly
Change the following line:
default_route : '/dashboard/file/default.json',

To the following:
default_route : '/dashboard/file/ulyaoth.json',

If you want to use another dashboard, simply change the "ulyaoth.json" part.

I changed it myself to “basefarm.json”.

Step 26: Go to the dashboard directory
$ cd /usr/share/nginx/kibana/public/app/dashboards

Step 27: Download the following version of logstash.json
$ sudo wget http://trash.ulyaoth.net/trash/kibana/dashboard/ulyaoth.json

My version is identical to the official one except I changed the graph size and how the logs show.

Since I changed the name of the file in config.js to basefarm.json I would need to rename the file.

Step 28: Rename the file
$ sudo mv ulyaoth.json basefarm.json

Step 29: open basefarm.json (or whatever name you used)
$ sudo vi basefarm.json

Step 30: Change the site name
Change the following line:
"title": "Ulyaoth: Logstash Search",

Change the "Ulyaoth: Logstash Search" part to whatever you would like to name your Kibana interface site and save the file.
For me it is currently called “Basefarm: Logstash Search”.

Step 31: Create a nologin user called kibana
$ sudo useradd -s /sbin/nologin kibana

Step 32: Chown the web dir to kibana:nginx
$ sudo chown -R kibana:nginx /usr/share/nginx/kibana/

Step 33: Start Logstash, ElasticSearch and Nginx

$ sudo service elasticsearch start
$ sudo service logstash start
$ sudo service nginx start

If you now go to your website, for example for me "http://logstash.basefarm.net", you will see something like this:

Logstash is a product that is always in development, so the screenshot above is probably outdated by now as they keep changing the interface. I would advise taking my version of the interface with a grain of salt and experimenting yourself with how you want it to look.

You can do so by playing around with the dashboard files; everyone has his or her own taste, so I decided not to make this part of my guide but to just focus on how to install it.

Of course there is no data so let us move forward and do the rsyslog configuration that will ship the specific logs to your Logstash server.

Step 34: Create the rsyslog logstash file
$ sudo vi /etc/rsyslog.d/logstash.conf

Step 35: Add the logs you want to ship (nginx example)

$ModLoad imfile

$InputFileName /var/log/nginx/kibana/error.log
$InputFileTag kibana-nginx-errorlog:
$InputFileStateFile state-kibana-nginx-errorlog
$InputRunFileMonitor

$InputFileName /var/log/nginx/kibana/access.log
$InputFileTag kibana-nginx-accesslog:
$InputFileStateFile state-kibana-nginx-accesslog
$InputRunFileMonitor

$InputFilePollInterval 10

if $programname == 'kibana-nginx-errorlog' then @@logstash.basefarm.com:5544
if $programname == 'kibana-nginx-errorlog' then ~
if $programname == 'kibana-nginx-accesslog' then @@logstash.basefarm.com:5544
if $programname == 'kibana-nginx-accesslog' then ~

Step 36: restart rsyslog
$ sudo service rsyslog restart

This is it, everything should be working now 🙂 You should now be seeing something like this if you go to your Logstash website:

Some more information about the rsyslog config:
"$InputFileName" Here you specify the log you want to send to Logstash
"$InputFileTag" This is the name you will see in Logstash

I think by seeing the Nginx example you will get the picture and can change it so it will work for any kind of logs you would like to ship to Logstash. Please remember to add the "if $programname" line two times, and the second time it has to end with "then ~"; if you do not do this, you will spam your "/var/log/messages".

There is another way to ship logs from the Logstash server itself: you can alter the configuration file "/etc/logstash/conf.d/logstash.conf" to read the log files directly. You will need to change the "input" to something like this:
input {
  syslog {
    type => syslog
    port => 5544
    codec => plain { charset => "ISO-8859-1" }
  }

  file {
    type => "syslog"
    path => [ "/var/log/nginx/kibana/*.log", "/var/log/nginx/error.log" ]
  }
}

filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
  }
  dns {
    reverse => [ "host" ]
    action => "replace"
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}

Remember, this part only works on the Logstash server itself; it is just a way to avoid using rsyslog on that server.

*problems that could occur*
There is currently a bug in Logstash where it can only handle UTF-8; if your log uses a different encoding it will crash Logstash. A workaround, as you can see above, is to add the following:

codec => plain { charset => "ISO-8859-1" }

The information below is probably only required if you use SELinux and a firewall. I did not have these enabled in my virtual machines, so you might need to double-check the commands below.
EXTRA INFORMATION: Fix selinux and firewall
$ sudo chcon -R -t httpd_sys_content_t /usr/share/nginx/kibana/public/
$ sudo semanage port -a -t http_port_t -p tcp 9200
$ sudo iptables -A INPUT -p tcp -m tcp --dport 5544 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m tcp --dport 9200 -j ACCEPT
$ sudo /sbin/service iptables save

I hope this guide has helped you. If you see any mistakes or have improvements, please give me a reply and I will update the guide accordingly; I am always happy to hear suggestions.

How to SSH into a VirtualBox Linux guest from your host machine

This small guide will show some easy steps of how to ssh into a VirtualBox guest os from your host machine.

The guide below assumes you already have a machine created; if you have not, skim through the guide and do these steps when you create the machine, and the result will be the same.

This guide is based on a Fedora 18 virtual machine and should work on any other Linux operating system; the locations and commands I use might differ if you use a non-Red Hat-based system.

Step 1: Stop your VirtualBox machine
Not much to explain here – stop your virtual machine as the change would require a restart of the os.

If you use VirtualBox on a Mac OS system, you first have to create a secondary adapter in the main VirtualBox settings; if you use Windows, you can just continue with step 2.

Step 2: Go to the settings of your machine
Right-click your virtual machine and choose "Settings", or alternatively click on the machine and press Ctrl+S.

Step 3: Click on the network tab
Just have a look at the picture below if you are lost.

Step 4: Fill in the “Adapter 2” information
We will now make it so that the OS has a secondary network card that connects to your host only.
All you have to do is make Adapter 2 look like the picture below; if you already use multiple network cards you can simply use Adapter 3 or 4, this is no problem at all.

Step 5: Start your machine
If you did step 4 correctly, you can simply start your machine and wait for your Linux guest to boot.

Step 6: Find your ip information
In order to ssh into your machine we need to find its IP information; please run one of the following commands: "ifconfig" or "ip addr".
$ ip addr

You should then see something like this:

If you look at my example, the IP address you need is the bottom one, "192.168.56.102". Of course this can differ for everyone and there is no way to know your setup, so just try the IP addresses you see and one will work :).

Step 7: Test that you can SSH into your VirtualBox machine (from your real pc)
Go to your own Linux or Windows machine and start a terminal or Putty and do the following:
$ ssh root@192.168.56.102

Congratulations, you can now ssh into your own VirtualBox machine from your pc! 🙂 I told you it was easy.

**extra steps** (not necessarily needed)
It is not much fun having to look up the IP address of your VirtualBox machine; unfortunately, if you use DHCP you have no choice but to look it up every time. However, you could simply give your machine a static IP address and, to make it even easier, give this IP a name in your hosts file.

That all sounds nice, but how would you actually do this? Well, have a look below.
Extra Step 1: Find your ip information and network card name.
If you already forgot your ip information then once again find it with “ifconfig” or “ip addr”.
$ ifconfig

Like I explained above you should see something like this:

In this case my network card is called “p7p1” and my ip address is “192.168.56.102”.
(if you do not have ifconfig you can install it with: “sudo yum install net-tools”)

Extra Step 2: Open your network card configuration file
$ vi /etc/sysconfig/network-scripts/ifcfg-p7p1

Again, have a good look: the file name in the vi command above will of course differ depending on your card's name.

Extra Step 3: Change your network card to static ip
In your configuration file, add or change the following (please use your own IP information, of course).

BOOTPROTO=static
IPADDR=192.168.56.102
NETMASK=255.255.255.0

My file looks as follows:
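For reference, a minimal sketch of what such a file might end up looking like; the device name and any UUID/HWADDR lines will differ on your system, so only add or adjust the three lines from the previous step:

DEVICE=p7p1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.102
NETMASK=255.255.255.0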

Extra Step 4: Restart your network service
$ systemctl restart network.service

So now that you have a static IP address you can always ssh to that machine! Easy enough, but now let's make it a little bit easier and create a name for your IP address.

Extra Step 5: Open your host file (from your real pc not virtualbox)
Fedora 18: (or mac os)
$ vi /etc/hosts
Windows 8:
notepad C:\Windows\System32\Drivers\etc\hosts

Extra Step 6: add your ip and give it a name (from your real pc not virtualbox)

192.168.56.102 vb2

My hosts file, for example, looks as follows:

Right now you could, for example, ssh into your machine like this:
$ ssh root@vb2

I hope you found it interesting to read. If you have some tips or suggestions on how to do this more easily, feel free to give a reply and I'll update the guide with any helpful information.