VMworld 2013 in Barcelona

Basefarm participated as an exhibitor at VMworld 2013 in Barcelona for the second time. In addition to having a booth at the VMware service provider pavilion, we also had the pleasure of taking part in a panel debate about VMware products together with one of our customers. Our business developer in Sweden, Stefan Månsby, represented Basefarm in the panel together with the former CIO of the Norwegian State Educational Loan Fund. VMware increased its focus on service providers like Basefarm at VMworld this year, and even included the Basefarm logo in one of the keynote presentations 🙂

So far, no other Nordic-based companies have been reported as exhibitors or VMware partners at VMworld. We are happy with the exposure and the interesting people we met at the booth this year. Additionally, some Basefarm employees attended VMworld solely to focus on the latest developments in VMware technology.

Thanks to all of you who came by our booth! We had many interesting discussions and hope to meet you again in the future!


Mozilla Vulnerabilities

Mozilla developers identified and fixed several memory safety bugs in the browser engine used in Firefox and other Mozilla-based products. Some of these bugs showed evidence of memory corruption under certain circumstances, and we presume that with enough effort at least some of these could be exploited to run arbitrary code.

Because of the nature of these vulnerabilities, it is recommended to update your software as soon as possible!

More information: http://www.mozilla.org/security/announce/2013/mfsa2013-93.html

BF-SIRT Newsletter 2013-43

Anyone using Apple products should be sure to apply the latest updates, which are now available; see Apple security updates.
If you are using Cisco ASA for VPN, have a look at our post about that here.
WordPress has also been updated to 3.7, and it’s recommended to apply this update.

Top 5 Security links
Group Leveraging Cutwail Spam Botnet Opts For “Magnitude” Over BlackHole Exploit
Hacker Group Claims To Have Looted $100k Via SQL Injection Attack
Doctors Disabled Wireless In Dick Cheney’s Pacemaker To Thwart Hacking
Dropbox Users Hit With Zeus Phishing Trojan
Cisco Says Controversial NIST Crypto ‘Not Invoked’ In Products

Top 5 Business Intelligence links
Universities Schooled By Malware
DARPA Slaps $2m On The Bar For The ULTIMATE Security Bug KILLER
Google Launches Project Shield To Defend Sites Against DDoS Attacks
UN Nuclear Regulator Infected With Malware
India Tops APAC Ransomware Table With $4 BILLION Losses

BF-SIRT Posts
WordPress 3.7 “Basie”
Cisco ASA VPN Denial of Service Vulnerability
Apple security updates

WordPress 3.7 “Basie”

WordPress 3.7 has now been released and it includes quite a few updates that are related to security and maintenance.

More information: http://codex.wordpress.org/Version_3.7

How we went from 40 to over 35 000 services

This is the story of how Basefarm went from handling the operations of 40 services to over 35,000, reaching more than 40 million end users around the world. What’s the secret behind our success?

Born out of the IT bubble ashes
When we founded Basefarm in 2000, we wanted to support companies and organizations that were building their success on the Internet. We had a strong belief that the Internet was here to stay. That might sound strange today, but after the IT bubble burst, no one knew what would happen with the Internet. We also believed that the Internet would become a marketplace for businesses. It turns out we were right. In 2000, only 5% of the world’s population had access to the internet. Today over 40% have access, and more and more services are available online for companies and end users.

How we distinguish ourselves from our competitors
We have had a unique profile from day one: we’ve focused on application management for mission-critical business applications. Most of our competitors had a different focus; they started as server-hosting companies. Those two starting points lead to completely different solutions. We built everything from the principle that everything should work at all times, while you should still be able to make changes without affecting the end-user experience.

Don’t be a coward, dare to be brave
We did something radical on the financial side: we focused on our customers first and the price later. At the time this was new thinking in the IT industry, and something fascinating. We have always been brave, with the approach that you should secure the business and the customers’ needs first, before you invest.

Always looking ahead
Today, over ten years later, we are still specialized in mission-critical business applications and see ourselves as experts within our field. There is always a need for experts, and this is one of our key success factors. We didn’t want to be like the other start-ups of the early 2000s with a wider business focus. We decided to make the best butter, and to accomplish that we had to specialize in order to succeed in our role as the technical expert.

And that’s the secret behind how we went from handling the operations of 40 services to over 35,000. We are still growing, and so are our services, in line with technical developments. Today, over ten years later, new technologies have emerged on different platforms and devices, but we will always keep our original approach. It’s still about passionate people taking pride in our customers’ success.

4 tips to succeed

  • Everything should be recyclable
    Put everything into systems to avoid doing things manually more than once. You should be able to halve the delivery time the next time you do it.
  • Be able to answer why something works
    If you can answer that in a system context, you can also be sure of fixing it if it breaks.
  • One Basefarm
    Use what we call the ”One-thinking”: one platform, one product, one responsible party and one service desk, to stay focused on the right things.
  • Make the best butter
    Be brave, and make sure you know what you have to accomplish to make the best butter.

Cisco ASA VPN Denial of Service Vulnerability

A vulnerability in the VPN authentication code that handles parsing of the username from the certificate on the Cisco ASA firewall could allow an unauthenticated, remote attacker to cause a reload of the affected device.

The vulnerability is due to parallel processing of a large number of Internet Key Exchange (IKE) requests for which username-from-cert is configured. An attacker could exploit this vulnerability by sending a large number of IKE requests when the affected device is configured with the username-from-cert command. An exploit could allow the attacker to cause a reload of the affected device, leading to a denial of service (DoS) condition.

More information: http://tools.cisco.com/security/center/content/CiscoSecurityNotice/CVE-2013-5544

Apple security updates

Apple have released security updates for the following applications:
iTunes 11.1.2
Apple Remote Desktop 3.7
Apple Remote Desktop 3.5.4
Keynote 6.0
Safari 6.1

They have also released the following operating system updates:
OS X Mavericks v10.9
OS X Server 3.0
iOS 7.0.3

These updates fix more than a hundred security vulnerabilities, many of them labeled critical, and it’s highly recommended to apply them as soon as possible!

BF-SIRT Newsletter 2013-42

This week Akamai released their latest “State of the Internet” report, and as always it’s worthwhile reading. A lot of sites have also been attacked via a 0-day vBulletin hole.
If you are using Oracle products, have a look at our blog post regarding the latest Oracle vulnerabilities.

Top 5 Security links
Thousands of Sites Hacked Via vBulletin Hole
FBI Silk Road shutdown will have little impact on Bitcoin cyber rackets
Digital ship pirates: Researchers crack vessel tracking system
New malware enables attackers to take money directly from ATMs
Hackers compromise certs to spread Nemim malware, which hijacks email and browser data

Top 5 Business Intelligence links
Akamai Releases “State of the Internet” Report for Q2 2013
DDoS attack size accelerating rapidly
NORKS cyber mayhem cost South Korea £500 Million
Security Spending Continues to Run a Step Behind the Threats
Breach at PR Newswire Tied to Adobe Hack

BF-SIRT Posts
Oracle fixes vulnerabilities

Oracle fixes vulnerabilities

Oracle has released fixes for vulnerabilities across its product lines, fifty-one of them in Java SE alone and twelve rated critical. The fix counts per product:

Oracle Java SE: 51
Oracle Database Server: 4
Oracle Fusion Middleware: 17
Oracle Enterprise Manager Grid Control: 4
Oracle E-Business Suite: 1
Oracle Supply Chain Products Suite: 2
Oracle PeopleSoft Products: 8
Oracle Siebel CRM: 9
Oracle iLearning: 2
Oracle Industry Applications: 6
Oracle Financial Services Software: 1
Oracle Primavera Products Suite: 2
Oracle and Sun Systems Products Suite: 12
Oracle Virtualization: 2
Oracle MySQL: 12

More information: http://www.oracle.com/technetwork/topics/security/cpuoct2013-1899837.html

How to install Logstash with Kibana interface on RHEL

This post is currently outdated; please have a look here for an up-to-date version:
https://community.ulyaoth.net/threads/how-to-install-logstash-kibana-on-fedora-using-rsyslog-as-shipper.11/
This guide will be updated as soon as possible.

In this guide I will provide an example of how to set up a Logstash server with a Kibana interface that receives its logs from rsyslog. While there are multiple other ways to get logs into Logstash, in this guide I will focus on rsyslog only.

I am aware that in the new Logstash rpm everything, such as Kibana, is merged into one package, but I personally feel it is better to install things separately, as this gives you the possibility to update individual parts when you want, without having to wait for new rpms.

If you are going to use this in a production environment, please make sure to check the security implications of going the rsyslog route, as you will need to open a port. Unless you are on an internal network, everyone will be able to ship logs to your Logstash server.
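One way to limit that exposure is to restrict the shipping port at the firewall. A minimal iptables sketch, assuming the port 5544 used later in this guide and using 10.0.0.0/8 purely as a placeholder for your own internal range:

```shell
# Accept Logstash shipping traffic (port 5544 in this guide) only from an
# internal range; 10.0.0.0/8 is a placeholder, substitute your own network.
sudo iptables -A INPUT -p tcp --dport 5544 -s 10.0.0.0/8 -j ACCEPT
# Drop everything else that tries to reach the shipping port.
sudo iptables -A INPUT -p tcp --dport 5544 -j DROP
```

Adjust or skip this depending on how your network is already segmented.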

So what is Logstash?
“Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, Logstash comes with a web interface for searching and drilling into all of your logs.”

There are a lot of examples on the official Logstash website, so I definitely recommend having a look there!
Their website: http://www.logstash.net

Now let’s start, for this guide I will be using the following programs:
RHEL (I am using RHEL 6 for this guide)
Logstash
rsyslog
ElasticSearch
Nginx
Kibana

Step 1: Install Logstash
$ sudo yum localinstall https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.1-1_bd507eb.noarch.rpm

Since the RHEL 6 Nginx version is pretty old, I will install a more recent version from the Nginx website.

Step 2: Install the Nginx yum repository
$ sudo yum localinstall http://nginx.org/packages/rhel/6/noarch/RPMS/nginx-release-rhel-6-0.el6.ngx.noarch.rpm

Step 3: Add the official ElasticSearch repository for Version 1.1.x
$ sudo vi /etc/yum.repos.d/elasticsearch.repo

Step 4: Add the following content to this file
[elasticsearch-1.1]
name=Elasticsearch repository for 1.1.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

If you have the RHEL 6 supplementary repository enabled, use step 5a, which I recommend as it installs the official Oracle Java; otherwise, use step 5b.
Step 5a: Install all required packages (with the supplementary repository)
$ sudo yum install java-1.7.0-oracle elasticsearch nginx rsyslog tar wget vim policycoreutils-python zip

Step 5b: Install all required packages (without supplementary repository)
$ sudo yum install java-1.7.0-openjdk elasticsearch nginx rsyslog tar wget vim policycoreutils-python zip

Step 6: Go to the Logstash config directory
$ cd /etc/logstash/conf.d

Step 7: Download the following Logstash config file
$ sudo wget http://trash.ulyaoth.net/trash/logstash/conf/logstash.conf

Step 9: Change the ownership of the Logstash config file
$ sudo chown logstash:logstash logstash.conf

Step 10: Create the following directories:

$ sudo mkdir -p /var/log/nginx/kibana
$ sudo mkdir -p /usr/share/nginx/kibana/public
$ sudo mkdir -p /etc/nginx/sites-available
$ sudo mkdir -p /etc/nginx/sites-enabled

Step 11: Go to the nginx directory
$ cd /etc/nginx/

Step 12: Delete the current nginx.conf
$ sudo rm -rf nginx.conf

Step 13: wget a new nginx.conf
$ sudo wget http://trash.ulyaoth.net/trash/nginx/conf/nginx.conf

Step 14: Open the new nginx.conf
$ sudo vim /etc/nginx/nginx.conf

Step 15: Change the following lines to fit your CPU count
worker_processes 1;
worker_connections 1024;

I have two virtual CPUs so I use:
worker_processes 2;
worker_connections 2048;

I personally feel there is not much point going above 4 worker processes, though opinions are split on this.
Just save the file after you have added your changes.
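If you are unsure how many CPUs your machine has, you can derive both values on the command line. A small sketch, assuming the same 1024-connections-per-worker ratio used in the examples above:

```shell
# Print suggested nginx worker settings based on the CPU count
# (nproc comes with coreutils, so it is available on RHEL 6).
cpus=$(nproc)
echo "worker_processes ${cpus};"
echo "worker_connections $((cpus * 1024));"
```

Paste the printed values into nginx.conf, capping worker_processes at 4 if you share my view above.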

Step 16: Go to the nginx vhost directory
$ cd /etc/nginx/sites-available/

In this updated guide I will use the official Kibana vhost that Kibana provides; you can see it here:
https://github.com/elasticsearch/kibana/blob/master/sample/nginx.conf

However, I added a few small changes, so you can use the official one above or my updated one, whichever fits you best. I will be using mine, which is based on the official one, for this guide.

Step 17: wget the kibana vhost file
$ sudo wget http://trash.ulyaoth.net/trash/nginx/vhost/kibana

As you can see in my vhost file, I disabled the password-protected endpoints. You can enable them by removing the # and creating a password file (not part of this guide).
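Although the password file itself is outside the scope of this guide, here is a quick sketch of creating one with openssl, so you do not need the httpd-tools package. The user name "admin" and the file name are examples only; point your vhost's auth_basic_user_file at wherever you put it:

```shell
# Generate an htpasswd-style entry (apr1 is the Apache MD5 scheme) and
# prepend the user name; "admin" and the file name are examples only.
openssl passwd -apr1 'changeme' | sed 's/^/admin:/' > kibana.htpasswd
# Then move it next to your nginx config, e.g.:
# sudo mv kibana.htpasswd /etc/nginx/conf.d/kibana.htpasswd
```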

Step 18: Open the kibana vhost file
$ sudo vim /etc/nginx/sites-available/kibana

Step 19: Change the site name
Simply change “logstash.ulyaoth.net” to whatever your Logstash URL will be, and save the file.

Step 20: Symbolic link the vhost file so nginx will load it
$ sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/kibana

Step 21: Go to the Kibana folder
$ cd /usr/share/nginx/kibana/public

Step 22: Download the latest Kibana version
$ sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-latest.tar.gz

Or, if you are like me, you can get a newer version directly from their GitHub (can be experimental):
$ sudo wget https://github.com/elasticsearch/kibana/archive/master.zip

Step 23: Untar Kibana and fix the directory structure

$ sudo tar xzfv kibana-latest.tar.gz
$ sudo mv kibana-latest/* .
$ sudo rm -rf kibana-latest.tar.gz
$ sudo rm -rf kibana-latest

If you downloaded the “master.zip” file, you will need to do the following instead:

$ sudo unzip master
$ sudo mv kibana-master/src/* .
$ sudo rm -rf master
$ sudo rm -rf kibana-master

Step 24: Open the config.js file
$ sudo vi config.js

Step 25: Change the file slightly
Change the following line:
default_route : '/dashboard/file/default.json',

To the following:
default_route : '/dashboard/file/ulyaoth.json',

If you want to use another dashboard, simply change the “ulyaoth.json” part.

I changed it myself to “basefarm.json”.
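If you prefer not to open an editor, the same change can be made with sed. The sketch below runs against a one-line sample file so it is safe to try anywhere; "basefarm.json" is just the example name used in this guide (drop the .sample suffix to run it against the real config.js):

```shell
# Create a one-line sample matching the line we want to change in config.js.
printf "default_route : '/dashboard/file/default.json',\n" > config.js.sample
# Swap the default dashboard for our own file name, in place.
sed -i "s|/dashboard/file/default.json|/dashboard/file/basefarm.json|" config.js.sample
cat config.js.sample
```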

Step 26: Go to the dashboard directory
$ cd /usr/share/nginx/kibana/public/app/dashboards

Step 27: Download the following version of logstash.json
$ sudo wget http://trash.ulyaoth.net/trash/kibana/dashboard/ulyaoth.json

My version is identical to the official one, except that I changed the graph size and how the logs are shown.

Since I changed the file name in config.js to basefarm.json, I need to rename the file.

Step 28: Rename the file
$ sudo mv ulyaoth.json basefarm.json

Step 29: Open basefarm.json (or whatever name you used)
$ sudo vi basefarm.json

Step 30: Change the site name
Change the following line:
"title": "Ulyaoth: Logstash Search",

Change the “Ulyaoth: Logstash Search” part to whatever you would like to name your Kibana interface site, and save the file.
For me it is currently called “Basefarm: Logstash Search”.

Step 31: Create a nologin user called kibana
$ sudo useradd -s /sbin/nologin kibana

Step 32: Chown the web dir to kibana:nginx
$ sudo chown -R kibana:nginx /usr/share/nginx/kibana/

Step 33: Start Logstash, ElasticSearch and Nginx

$ sudo service elasticsearch start
$ sudo service logstash start
$ sudo service nginx start

If you now go to your website, for example mine at “http://logstash.basefarm.net”, you will see something like this:

Logstash is a product that is always in development, so the screenshot above is probably outdated by now, as they keep changing the interface. I would advise taking my version of the interface with a grain of salt and experimenting with how you want it to look.

You can do so by playing around with the dashboard files. Everyone has their own taste, so I decided not to make this part of the guide, and instead just focus on how to install everything.

Of course there is no data yet, so let us move forward and do the rsyslog configuration that will ship the specified logs to your Logstash server.

Step 34: Create the rsyslog logstash file
$ sudo vi /etc/rsyslog.d/logstash.conf

Step 35: Add the logs you want to ship (Nginx example)

$ModLoad imfile

$InputFileName /var/log/nginx/kibana/error.log
$InputFileTag kibana-nginx-errorlog:
$InputFileStateFile state-kibana-nginx-errorlog
$InputRunFileMonitor

$InputFileName /var/log/nginx/kibana/access.log
$InputFileTag kibana-nginx-accesslog:
$InputFileStateFile state-kibana-nginx-accesslog
$InputRunFileMonitor

$InputFilePollInterval 10

if $programname == 'kibana-nginx-errorlog' then @@logstash.basefarm.com:5544
if $programname == 'kibana-nginx-errorlog' then ~
if $programname == 'kibana-nginx-accesslog' then @@logstash.basefarm.com:5544
if $programname == 'kibana-nginx-accesslog' then ~
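Before restarting rsyslog in the next step, you can ask rsyslogd to validate the new configuration. This is just a configuration check; the -N flag runs it without starting the daemon:

```shell
# Dry-run syntax check of the rsyslog configuration (level 1 checks only).
sudo rsyslogd -N1
```

If it reports errors, fix /etc/rsyslog.d/logstash.conf before restarting.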

Step 36: Restart rsyslog
$ sudo service rsyslog restart

That is it, everything should be working now 🙂 You should now see something like this if you go to your Logstash website:

Some more information about the rsyslog config:
“$InputFileName” is where you specify the log you want to send to Logstash.
“$InputFileTag” is the name you will see in Logstash.

I think the Nginx example gives you the picture, and you can adapt it for any kind of logs you would like to ship to Logstash. Please remember to add the “if $programname” line twice, and the second time it has to end with “then ~”; if you do not do this, you will spam your “/var/log/messages”.

There is another way to ship logs from the Logstash server itself: you can alter the configuration file “/etc/logstash/conf.d/logstash.conf” to read the log files directly. You will need to change the “input” to something like this:
input {
  syslog {
    type => "syslog"
    port => 5544
    codec => plain { charset => "ISO-8859-1" }
  }

  file {
    type => "syslog"
    path => [ "/var/log/nginx/kibana/*.log", "/var/log/nginx/error.log" ]
  }
}

filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
  }
  dns {
    reverse => [ "host" ]
    action => "replace"
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}

Remember, this only works on the Logstash server itself. It is just a way to avoid using rsyslog on the Logstash server.

*problems that could occur*
There is currently a bug in Logstash: it can only handle UTF-8, and if your log is in a different encoding it will crash Logstash. The workaround, as you can see above, is to add the following:

codec => plain { charset => "ISO-8859-1" }

The information below is probably only required if you use SELinux and a firewall. I did not have these enabled in my virtual machines, so you may need to double-check the commands below.
EXTRA INFORMATION: Fix selinux and firewall
$ sudo chcon -R -t httpd_sys_content_t /usr/share/nginx/kibana/public/
$ sudo semanage port -a -t http_port_t -p tcp 9200
$ sudo iptables -A INPUT -p tcp -m tcp --dport 5544 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m tcp --dport 9200 -j ACCEPT
$ sudo /sbin/service iptables save

I hope this guide has helped you. If you see any mistakes or have improvements, please give me a reply and I will update the guide accordingly; I am always happy to hear about improvements.