High Risk Java Vulnerability

There is currently an extremely high-risk Java vulnerability in the wild that can potentially cause havoc for a lot of users and systems. All someone has to do is get you to visit a site with the malicious code, which can then run an exploit kit on your system under the same user as the Java process, which means they’ll most likely be taking over your entire system.

This is relevant not only for sysadmins but for anyone connected to the internet. A website you open could potentially have the code on it, and the attacker would then have access to your PC to install key loggers or whatever else they want, which could be used to breach not only your own PC but your corporate network.

There is currently no fix for this issue, which is why it’s highly recommended to disable the Java plugin in your browsers. If you need to use Java applets, it’s suggested to use NoScript with Firefox, as you can then whitelist the sites you wish to use Java on and block it on the rest, or to use a secondary browser or virtual environment reserved for those sites.

You can find more information here:
https://www.us-cert.gov/cas/techalerts/TA12-240A.html
http://www.kb.cert.org/vuls/id/636312

Generate passwords using PowerShell

The other day I needed to generate some 1400+ new user passwords. Being a lazy person, I figured that PowerShell could rescue me. This is what I did to check that my idea worked:

PS C:\> Add-Type -AssemblyName "System.Web"
PS C:\> [System.Web.Security.Membership]::GeneratePassword(10,2)
35&OjFtM^k

As you can see, this generates a password that is 10 characters long and contains at least 2 non-alphanumeric characters. Now all I needed was to repeat this 1400 times and send the result to the clipboard, easy as pie:

PS C:\> 1..1400 | % { [System.Web.Security.Membership]::GeneratePassword(10,2) } | clip

And that is 1400 new passwords stored in the clipboard. I can now paste them or pipe them into a set-password routine.
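If you also need to keep track of which password belongs to which account, a small variation writes username/password pairs to a CSV file instead of the clipboard. A minimal sketch, assuming a hypothetical users.txt with one username per line (the file names here are made up for the example):

# Pair each username from users.txt with a freshly generated password
# and save the result as passwords.csv.
Add-Type -AssemblyName "System.Web"
Get-Content .\users.txt | ForEach-Object {
    New-Object PSObject -Property @{
        UserName = $_
        Password = [System.Web.Security.Membership]::GeneratePassword(10, 2)
    }
} | Select-Object UserName, Password | Export-Csv .\passwords.csv -NoTypeInformation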

Configuring Windows Server 2008 R2 Features

At Basefarm we frequently need to ensure that many Windows servers are identical in terms of the roles and features they have installed. Adding features can be done in a number of ways: mostly the graphical user interface (Server Manager) is used, or System Center or similar for large operations. I will show you how this can be done more easily from the command line. This method doesn’t require anything beyond Windows Server 2008 R2 (or later) and PowerShell.

The Server Manager module

The Server Manager module (introduced with Windows Server 2008 R2) has three very useful commands:

  • Add-WindowsFeature
  • Get-WindowsFeature
  • Remove-WindowsFeature

Using these is simple. Start a PowerShell session with administrative privileges (Run as administrator). Then check that the Servermanager module is available on your server:

PS C:\> Get-Module -ListAvailable

This shows that the Server Manager module is available on our server but that it is not yet loaded into the PowerShell session. To load it (and make its commands available):

PS C:\> Import-Module Servermanager

Now the commands of the Server Manager module are available to you. Check which commands are exposed by the module:

PS C:\> Get-Command -Module Servermanager

Ok, we’re all set. Let’s use these commands!

HOWTO: Document what is installed

To see what is installed on a server, use:

PS C:\> Get-WindowsFeature

Oops, that’s a lot of text flying by on the screen! As you can probably guess, only the lines with [X] are installed. So we need to filter the list to show only what is actually installed; try this instead:

PS C:\> Get-WindowsFeature | ? { $_.Installed }

A nice, clean list showing which features are installed on the server ;-), perfect for documenting your server(s).

HOWTO: Clone installed features to another server

As shown above, it’s easy to list what is installed. But just having this list on the screen doesn’t accomplish much; we need to store it in a structured way so that we can use the list on another server to install the same features. PowerShell makes this very simple. We use the Export-CliXml cmdlet to save the information in a structured XML file:

PS C:\> Get-WindowsFeature | ? { $_.Installed } | Export-Clixml .\features.xml

The output from the Get-WindowsFeature cmdlet is saved in a structured way in the XML file features.xml. This file can now be shared with other servers and used as input for the Add-WindowsFeature cmdlet!
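As a side note, the same export is handy for spotting configuration drift between servers. A minimal sketch, assuming a features.xml exported on a reference server has been copied to the server you want to check:

# Compare the reference export with what this server actually has installed.
$reference = Import-Clixml .\features.xml
$current   = Get-WindowsFeature | ? { $_.Installed }
# <= means the feature is only in the export; => means it is only on this server.
Compare-Object -ReferenceObject $reference -DifferenceObject $current -Property Name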

HOWTO: Add features from another server (using XML file)

Start PowerShell with administrative privileges.  Now try this:

PS C:\> Import-Module Servermanager
PS C:\> Import-Clixml .\features.xml

Now you have the same list of installed features on the new server. But… this is simply a list in memory and on screen. The features haven’t been added yet. In order to do that we need to pipe the information into the Add-WindowsFeature cmdlet.

Before I show you how to do that, there is one important thing I need to explain. When we exported the list of installed features, we included all features that were marked as installed. As you saw in the output, this resulted in a tree-like structure where “[X] Web Server (IIS)” was at the top, followed by “[X] Web Server” and so on.

That looks fine, but if we use this as input for the Add-WindowsFeature cmdlet we will end up with more than we asked for. The reason is that when a top-level feature such as “Web Server (IIS)” is chosen, everything underneath it will also be installed. And in order to keep our servers as lean as possible, we do not want that! We need to go back and filter the output of Get-WindowsFeature a little more. Try this instead of what I showed you earlier:

PS C:\> Get-WindowsFeature | ? {$_.Installed -AND $_.SubFeatures.Count -eq 0 }

Now the output will only contain the features at the bottom of the tree, so to speak. This works fine as input for the next server we want to make identical. Save the new list to a file:

PS C:\> Get-WindowsFeature | ? {$_.Installed -AND $_.SubFeatures.Count -eq 0 } | Export-Clixml .\features.xml

Now we can finally install these features on the new server:

PS C:\> Import-Clixml .\features.xml | Add-WindowsFeature

Et voilà! The two servers now have the same Windows features installed.

As always with PowerShell, if your environment enables PowerShell remoting, these commands can be executed on any number of servers from a single command line. A Power(full)Shell indeed!
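As a sketch of what that could look like (the server names web01 and web02 are made up for the example, and PowerShell remoting is assumed to already be enabled on them):

# Read the feature names from the exported file on the local machine...
$features = (Import-Clixml .\features.xml) | ForEach-Object { $_.Name }
# ...and install them on several servers in one go.
Invoke-Command -ComputerName web01, web02 -ScriptBlock {
    param($featureNames)
    Import-Module Servermanager
    Add-WindowsFeature $featureNames
} -ArgumentList (,$features)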

Summary

This became a longer post than I intended, simply because I wanted to explain the details of filtering the export. Here’s a quick summary of the commands you use to export what is installed:

PS C:\> Import-Module Servermanager
PS C:\> Get-WindowsFeature | ? {$_.Installed -AND $_.SubFeatures.Count -eq 0 } | Export-Clixml .\filename.xml

Copy the file ‘filename.xml’ to a network share or other location where the next server can reach it, then do this on the other server:

PS C:\> Import-Module Servermanager
PS C:\> Import-Clixml .\filename.xml | Add-WindowsFeature

All features are installed on the new server without having to click around in the graphical Server Manager! To quickly verify what is installed, use:

PS C:\> Get-WindowsFeature | ? { $_.Installed }

I hope I have shown you that PowerShell is much better than giving your arms RSI using the mouse to handle feature installations!

Defcon 20

Wednesday

Flight over Greenland

This year, my colleague Jens and I were given the opportunity to visit Defcon 20 (https://www.defcon.org/html/defcon-20/dc-20-index.html) in Las Vegas. It was my first time visiting the US, so I was obviously very excited about it!

We started off around noon on Wednesday and, after a transfer at Heathrow in London, arrived in Las Vegas at 7 PM the same Wednesday (Las Vegas being nine hours behind Sweden).

Inside the terminal, the AC made it seem almost chilly at times, but once you stepped out to the taxi queue you were greeted by a 45-degree heat wave. The first thing that came to mind on the way to the hotel was how extremely big everything was, even compared to cities such as Shanghai. Once checked in at the hotel, I quickly drifted off to sleep, as I had forced myself to stay awake on the plane in order to avoid as much jet lag as possible.

Las Vegas

Thursday

Defcon Queue

Thursday morning it was around 40 degrees outside at 8 AM when we made our way to the convention. I felt quite lucky in the cab when I saw people actually walking the trek towards the convention in the blistering heat. When we arrived, we noticed that the queue started outside, which was not a good sign. The queue moved forward though, so we assumed we’d be able to pay the entrance fee once we had a roof over our heads. Bad assumption. Once inside, the queue went on for about 2.5 hours more, and that was despite us being there 30 minutes before the desks opened. Lesson learned for next time.

 

Defcon Badge

Once we had paid the entrance fee, we were given the badges for the 20th Defcon, and they were mighty impressive. Rather than a normal badge (which is never the case for Defcon, but still), you were given a badge containing a multi-core processor, an IR transmitter, LEDs, a mini-USB port, PS2/VGA ports that could be soldered on, and open source software with a good variety of competitions for those who wanted to play around with cryptography. Certain badges could also ”infect” other badges, making the LEDs blink differently if you came into contact with them.

The amount of text you could write about these badges is probably enough to fill a book, but I suggest you check out the following resources for more information about the badges:
http://www.wired.com/threatlevel/2012/07/defcon20-badge/
http://forums.parallax.com/showthread.php?141494-Article-Parallax-Propeller-on-DEF-CON-20-Badge-Start-Here

Next in line was getting some food, and there was a nice ”chill out zone” where you could buy hot and cold food, drinks, breakfast and other vital things for everyday life.

Having refueled, we decided to get some swag to bring home. This turned out to mean another two-hour queue to the one and only shop for official merchandise. Eventually I ended up getting two t-shirts as a memory.

Defcon Merchandise

Later on we attended the first session, the opening ceremony, where everyone was welcomed to the 20th Defcon!

Since it was the registration day, we managed to get out earlier than usual and used the time for a trip to the Grand Canyon, which had been one of my most wanted places to see for quite a while. Due to the tight time constraints, we had to take a helicopter ride, which in itself was quite an adventure!

At Grand Canyon

Helicopter over Hoover Dam

Once back, we decided to do some sightseeing in the area next to the hotel.

Jens in front of the Bellagio Fountains

Walking on the strip

Friday

One of the talks

The first ”real” day of the conference! I started off with some talks about the badge and the history of Defcon to get a better idea of how things had progressed. I found them very interesting, with a lot of ”unofficial information” about how things had been, even though I have wanted to go to Defcon for a long time and have read a lot about it over the years. There was also the talk by General Keith B. Alexander (US Cybercom director and NSA director), which proved very interesting, as he talked about how important it is to secure the country as a whole from outside attacks. The talk after that was called ”Owning One to Rule Them All”, where the speaker went through Microsoft SCCM and how it was possible to compromise it and make it send a payload of your choosing to all connected clients (meaning that by adding your trojan, or whatever you like, you would be able to very quickly infect an entire network of computers).

Also, as you walked around, you noticed more and more competitions around the place. There were multiple puzzles and crypto challenges on the floor, and others could be found on posters and the like.

One of the puzzles

During the evening we went out to have another look at the surrounding area and ended up eating at a place called Johnny Rockets that had amazing burgers. We also went to watch the opening ceremony of the Olympics!

Outside the Hotel

On the strip!

Olympic Games Opening Ceremony

On the strip!

Saturday

Defcon talks

Today was a mix of talks concerning the future of the net and what limitations should or should not be in place, how government agencies operate, and how attacks on our infrastructure are carried out. The more ”practical” talks covered botnets and how they are operated through web pages or IRC servers, and various ways DDoS attacks are performed against companies and how they can be mitigated.

Today I also walked around the other parts of the convention! For example, I visited the CTF area, where teams compete against each other by securing their own servers to prevent other teams from compromising their running services, while also trying to take over the other teams’ servers to gain points. There was also the Wall of Sheep area, where non-SSL traffic that had been sniffed on the network was posted on a big screen to shame the senders and for others to see.

Competition room

The vendor area, on the other hand, was a place of business where people gathered to buy and sell various merchandise, ranging from t-shirts to satellite transmitters. There was also a book-signing area with people such as Bruce Schneier, and an area where you could view items such as actual Enigma machines.

Bruce Schneier signing books

Enigma Machine

There was also the hardware hacking area, where you could learn how to build robots, how to solder, how to make your badge do things it couldn’t when you got it, and a lot of other things.

Hardware Hacking Area

Afterwards we went out for some sightseeing and visited the Venetian as well as Treasure Island!

The Venetian

The strip

Sunday

Metasploit talks

Sunday was the last day of the conference, and it contained a variety of talks, ranging from new-generation port scanners, Metasploit examples and how easily certain Huawei routers can be hacked, to Kevin Poulsen talking about his previous experiences as well as his book. There was also the closing ceremony, where all the contest winners received their prizes, with some getting the almighty black badge that grants lifetime free entrance to Defcon.

As we hadn’t had time to eat much other than sandwiches or the odd quick burrito, we decided to hit the buffet at the Bellagio for our last conference evening. The queue took quite a while to get through, but it was well worth it, with a lot of really great food. We also took a quick stroll down the southern end of the strip.

Closing Ceremony

Bellagio Buffet!

Hotel entrance

In front of Paris Paris!

Monday

Mandalay Bay

Monday was the last day in Las Vegas, as we were due to leave for Stockholm at 8.45 PM. For once we decided on a slow morning rather than getting up at 7.30 AM, so we met up at 11.00 to check out and get something to eat. Once that was sorted, we took a stroll through all the casinos south of Bally’s to see what each of them offered. We ended up visiting every one, and also went into the aquarium at Mandalay Bay to see some sharks. Once at the airport, we found out that the plane was three hours delayed. That in turn meant we missed our connecting flight at Heathrow, which meant we got home after midnight, which made the next work day feel ”so so” considering the time difference. All in all, I’d definitely rate this convention the best one I’ve been to! Some of the talks were not very interesting at all, while some were very, very good. The two I liked the most were ”Black Ops” and ”How to Hack All the Transport Networks of a Country”.

You can find the full schedule here: https://www.defcon.org/html/defcon-20/dc-20-schedule.html

The main thing I feel I gained, though, was ”getting back to basics” rather than being so immersed in the commercial side of the IT industry. The experience reminded me of why I started loving computers in the first place!

At the Luxor Entrance

Hotel New York New York

Building Dreamhack, part two

The next generation network: IPv6

Why change to IPv6
Every device connected to the internet needs an IP address to be able to communicate.
The main OSI layer 3 (network layer) protocol used on the internet today is IPv4. IPv4 addresses are 32 bits long, which makes 4,294,967,296 addresses in total. The main problem with IPv4 is that it is running out of addresses. So far the problem has been managed by using NAT (Network Address Translation).

But the more the internet grows, the more you need a long-term solution. That long-term solution is IPv6. An IPv6 address is 128 bits long, which makes roughly 3.4 × 10^38 addresses in total, a gigantic amount. Another advantage over IPv4 is that multicast is part of the base IPv6 specification. Multicast is the transmission of a packet to multiple destinations in a single send operation.

Configuring an IPv6 address
There are three ways to set an IPv6 address:
- Manual
- DHCPv6
- SLAAC (Stateless Address Autoconfiguration)
I will describe these three options in detail below:

Manual
You configure the IPv6 address, netmask, gateway and DNS servers manually. This method is preferable on servers and routers, where the address needs to be consistent over time.

DHCPv6
The client asks for an IPv6 address, and the DHCPv6 server gives the client its IPv6 address, netmask and options like gateway, DNS servers, NTP servers etc. The DHCPv6 server knows which MAC address has which IP and for how long, which means DHCPv6 is stateful.

SLAAC(Stateless address autoconfiguration)
Every IPv6-enabled device has a link-local IPv6 address, which is used to communicate on the local network. The link-local address is set as soon as the client has IPv6 enabled; it always begins with FE80:: and is calculated on the client. When the client physically connects to a network, it uses its link-local address as the source and sends a router solicitation message to a specific multicast group on the local network. Every router on the local network listens to this multicast group and answers the clients with an RA (Router Advertisement) message. The RA contains a prefix, and the client uses this prefix to calculate its own IPv6 address. The client uses the router’s link-local address as its default gateway.

In the initial IPv6 specification there was no way of setting the DNS server using the SLAAC method. Instead, there is an M flag in the RA packet indicating a DHCPv6 server from which the client can get options like DNS, NTP etc. Support for setting the DNS server directly via SLAAC has been developed more recently, but it is not supported by all operating systems.

IPv6 implementation at Dreamhack
At Dreamhack the clients set their IPv6 address, prefix and gateway using the SLAAC method. The RA specifies a DHCPv6 server that the client uses to set the DNS server.

Example:
1: Client connects to the network.
2: Client calculates its own link-local address.
3: Client sends out a router solicitation packet to the specific multicast group, using the link-local address as source.
4: Router listens on the multicast group and answers the client with an RA (Router Advertisement) containing the network prefix, using its own link-local address as source.
5: The client calculates its own IPv6 address using the prefix in the RA.
6: The client sets its gateway to the router’s link-local address.
7: Client asks the DHCPv6 server for the DNS server.
8: DHCPv6 server responds with a list of IPv6 DNS servers.
9: Client is IPv6 ready 🙂

Problems with IPv6
Even if your computer, your ISP and your destination are fully IPv6 functional, you can still have problems with IPv6, because every ISP/router etc. that the IPv6 packet travels through, from source to destination, needs to be IPv6 functional.

Example:
1: Client gets an IPv6 address.
2: Client now has dual-stack IPv4 and IPv6 addresses.
3: Client asks the DNS server for the IP of www.youtube.com.
4: DNS server answers with the IPv6 address of www.youtube.com.
5: Client tries to communicate with the IPv6 address of www.youtube.com.
6: If IPv6 is not configured correctly from source to destination, the request will time out when it reaches the non-IPv6-ready router/network.
7: Client falls back to IPv4.
8: Client is unhappy because the request took a long time.
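If you want to see the dual-stack behaviour from steps 3 to 5 for yourself, you can resolve a name and look at which address families come back. Here is a minimal sketch in PowerShell, the language used elsewhere on this blog (the host name is just an example); a dual-stack client will typically try the IPv6 addresses first:

# Resolve a name and list both IPv6 (AAAA) and IPv4 (A) answers.
$addresses = [System.Net.Dns]::GetHostAddresses("www.youtube.com")
foreach ($addr in $addresses) {
    # InterNetworkV6 means IPv6, InterNetwork means IPv4.
    "{0,-16} {1}" -f $addr.AddressFamily, $addr.IPAddressToString
}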

Building Dreamhack, part one

Dreamhack is the world’s largest digital festival and holds the official world record as the world’s largest LAN party in the Guinness Book of World Records. At the last event (November 2011), the network had 13,292 unique devices connected.

The Dreamhack network team is responsible for planning, building, developing, operating and tearing down the network. The team consists of 30 people from different companies and universities with a great passion for technology. The team is divided into four subgroups: core, services, access and logistics. I’m a member of the services group, which is responsible for the services required in the network.

Part one: building an anycast DNS system supporting IPv4 and IPv6
Anycast is a technique where an (anycast) IP is announced from more than one location using a routing protocol. The routing protocol then believes it has multiple routes to the (anycast) IP when in fact there are two different endpoints with the same (anycast) IP. The routing protocol will send the client’s packets to the endpoint with the shortest path from the client. To achieve high availability you need to be able to remove an endpoint when errors occur. You can do this by removing the specific route to the broken endpoint from the routing table.

 

In the example image above, the client computer’s request to the anycast IP 9.9.9.9 will be routed to the adns server with the IP 2.2.2.2, because that is the shortest path to the anycast IP 9.9.9.9. If the route saying that 9.9.9.9 is reachable via 2.2.2.2 is removed, the client’s request will be routed to the adns server with IP 1.1.1.1 instead.

To build our anycast DNS infrastructure at Dreamhack we use Debian GNU/Linux, BIND, iptables, ip6tables and Quagga with the BGP routing protocol. We have two anycast DNS servers connected to two different Cisco ASR 9000 routers. On the servers we have loopback interfaces configured with the anycast IPv4 and IPv6 addresses. We then use iptables to forward DNS requests from the interface connected to the routers to the loopback interface. On the servers, BIND handles the DNS requests. To achieve high availability we have built a service which checks whether a DNS server fails to answer 5 different DNS requests in a row. If it does, the route to that specific DNS server is removed from the routing table, making all the clients’ DNS requests go to the other, working DNS server.
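The failure-detection logic itself is simple. Below is a minimal sketch of the idea, written in PowerShell since that is the language used elsewhere on this blog; the actual Dreamhack service ran on Debian, the server address is a placeholder, and the route withdrawal is left as a comment (Resolve-DnsName requires Windows 8/Server 2012 or later):

# Probe a DNS endpoint 5 times; if all 5 checks fail in a row,
# its route should be withdrawn so clients hit the other server.
$server   = "192.0.2.53"   # placeholder address for the DNS endpoint
$failures = 0
for ($i = 0; $i -lt 5; $i++) {
    try {
        Resolve-DnsName -Name "www.dreamhack.se" -Server $server -ErrorAction Stop | Out-Null
        $failures = 0      # a single success resets the counter
    } catch {
        $failures++
    }
}
if ($failures -eq 5) {
    # In the real setup this is where the BGP announcement for the
    # anycast IP would be withdrawn, e.g. via the routing daemon.
    Write-Warning "DNS endpoint $server failed 5 checks in a row."
}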

Dreamhack anycast DNS design.

During Dreamhack Winter 2011, my colleague Karl Andersson and I held a presentation in which I discuss the Dreamhack anycast DNS implementation. You can find it on YouTube: Dreamarena Orange – Dreamhack Behind the Scenes.

 

Debugging IIS crashes – default WER location

I was debugging some IIS crashes last week and thought I’d follow up with a few basics here, as it’s a common enough problem. Another time I might write a series of posts on using the Windows debuggers in detail and how one can go about this from scratch, but for the moment here’s a quick summary of some basic starting points. I wrote some more detailed examples of .NET debugging in the past on my MSDN blog, although those use slightly different CLR versions and extensions which have since been updated.

Firstly, I walked into this situation blind, as you often do in such matters. The developers of the application in question told me that they had been experiencing crashes across all their web servers since their last code deploy. (Insert questions and comments here about the testing regime which allows this to occur.) The Windows event logs showed the following in the application event log:

Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: MSVCR100_CLR0400.dll, version: 10.0.30319.1, time stamp: 0x4ba2211c
Exception code: 0xc00000fd
Fault offset: 0x0000000000057f91
Faulting process id: 0x11f0
Faulting application start time: 0x01cd29d083c0e51e
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\system32\MSVCR100_CLR0400.dll
Report Id: fdd757b8-95ee-11e1-94a4-005056bc00a6

The key here is the exception code 0xc00000fd, which translates to a stack overflow (never good!). I pulled the logs and agreed with their initial assessment, but they said that they couldn’t find any dumps that had been produced automatically. As such, I immediately attached DebugDiag to one of the web servers to ensure that I could capture a full dump the next time it occurred. Once this was in place, however, I went back through the logs and dug around the server in more detail to check whether it was really the case that the server had not produced any dumps automatically. In Windows 2008 and above, WER logging is sometimes not particularly transparent about what it is doing, so I checked manually. After a short while of searching for .dmp or .mdmp files I noted that the default WER location on these servers was

C:\ProgramData\Microsoft\Windows\WER\ReportQueue

Once I browsed there I found a treasure trove of old dumps, error logs and all sorts of joy which helped me diagnose the issue. WER had not written to the event logs that it was taking dumps and collecting information, but all the same I wasn’t surprised to see that it had been doing its stuff, since there had been a lot of application crashes. It just goes to show that it’s always worth a look.
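If you want to do the same search quickly, a PowerShell one-liner along these lines works (the path is the default WER report queue; adjust it if your servers relocate WER storage):

# Find any dumps WER has quietly collected, newest first.
Get-ChildItem C:\ProgramData\Microsoft\Windows\WER\ReportQueue -Recurse -Include *.dmp, *.mdmp |
    Sort-Object LastWriteTime -Descending |
    Select-Object LastWriteTime, Length, FullName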

In this case the actual debugging was fairly simple, as a stack overflow crash is pretty straightforward to work through. It’s just a matter of these steps if you’re familiar with the Windows debuggers:

1. Load the dump
2. Set the symbols, ensuring you have private symbols for the customer code
3. Load your .NET debugger extension (I used psscor4)
4. Dump the stack of the thread with the stack overflow exception on it
5. Send the code to the developers and get them to fix it 🙂

Here’s hoping you don’t encounter any stack overflows yourselves!

A day in the life of a Technical Account Manager

Basefarm is of course constantly on the lookout for new talent to join our company, both technical and non-technical.
I personally work as a Technical Account Manager (TAM) for Linux customers in Sweden, and thought some of you might find it interesting to read about what it is we do. First of all, let me explain a little bit about what a TAM is.

The role

On our website, you can find the following information:

“As a systems consultant at Basefarm you will be responsible for application management on the Linux platforms for our customers. That means handling monitoring, maintenance, optimization and troubleshooting of applications and OS. The focus is often on Java-based solutions. We work with open source products, including JBoss, Resin and Tomcat, and PHP applications such as WordPress, Joomla and Drupal. As a systems consultant, you can become a Technical Account Manager for some of the largest and most complex internet sites in Sweden. This means very varied assignments and a fast pace. You will naturally have close, regular contact with your customers, and you are responsible for both further development and maintenance of your customers’ technology platforms. This requires proactivity and that you and your customers are at the forefront of technology. In your role as a Technical Account Manager you are a key to business success!”

That text, albeit very true, does in my personal opinion boil down to two specific things: a TAM is someone who is very customer oriented and has a deep wish to constantly evolve and learn new things. These two traits are the key to your success as a TAM. Basically, your days will more often than not revolve around these two, because you have a very deep level of cooperation with your customers, and they will often come to you with a new application or system that you might not have heard about before. In our field, previous knowledge is most often not the most important thing, as there will always be applications that almost nobody has heard of. What’s important is that you are able to learn the new things being tossed at you!

The challenge

I work a lot with media companies, who are always on the bleeding edge when it comes to the software and technology they use. The applications they want to run are vast in number and constantly changing; what you learned today might not be used tomorrow. Because of this, it’s impossible to know everything beforehand. Nobody can know everything; what’s important is being able to quickly learn and adapt to the new technology presented to you. The same goes for setting up new customers, which is also one of the tasks I enjoy the most, as it offers such diversity. No new customer is the same, which means there’s always something new to learn!

That said, we have a very diverse and large team at Basefarm, and there will without doubt be a few people who have worked with the new application your customer has handed you. This means the knowledge you need is always around the corner, or at most a phone call away (if it resides in our Norwegian office), and everyone is always more than happy to take a moment to assist you. It is, however, important to keep in mind that it will be your task to quickly learn this new application, as you will be responsible for it in your customer’s environment.

Can’t be prepared for what’s going to come your way

Customer contact is, as I said, just as important. Each day you will be speaking with different people at the customers you are TAM for. These conversations can range from presenting ideas on how to improve their current platform, to customer meetings, hosting workshops or discussing issues with developers. I find this very interesting, and also extremely important in order to keep the platform well managed for monitoring and similar tasks.

The job as a Technical Account Manager can be both very challenging and rewarding, mainly because your scope is so big. You will, for example, take part in pre-sales customer meetings, design customer platforms, be part of implementing those platforms, and then have the ongoing responsibility of making sure that the technical platform works as well as it can. You will also take part in ongoing meetings with your customers regarding the technical platform and your suggestions for the future.

In the end, what I like most about the position is that you can’t always be prepared for what’s going to come your way. There are usually no guidelines for how to solve something beforehand, and you have a very close connection to the customer on a day-to-day basis. Being able to take on new platforms and applications that you have not worked with in the past is a big requirement, as this happens very often. If you feel this challenge sounds fun and interesting, send us an email at rekrytering-se@basefarm.se!

For more information about positions available in Sweden, please visit https://www.basefarm.com/sv/jobb/

Breakfast cloud seminar with Basefarm in Stockholm!

We claim that cloud uncertainty is a myth. Ever since the cloud became a household word, studies have indicated that the main concern of the IT department is whether the information you store in the cloud is handled in a secure manner. How do you know that the cloud provider doesn’t abuse your information? Welcome to a morning with speakers on security and on how to use cloud services in practice, with concrete examples.

On May 8, 7.30-9.45 AM, we at Basefarm are arranging this breakfast seminar at the Grand Hotel in Stockholm. The theme is security in the cloud, and with us this morning we have speakers from Truesec, TV4 and Marval. If you think this sounds interesting, you can register via LinkedIn or send an e-mail to me at elin.mattsson@basefarm.se.

The seminar is free, but the number of places is limited. Many have already registered, and it’s first come, first served! More information about the seminar and the agenda can be found on LinkedIn.