Part1: http://trycatch.be/blogs/decaluwet/archive/2014/01/12/consolidated-logging-introduction.aspx

Part 2: http://trycatch.be/blogs/decaluwet/archive/2014/01/20/consolidated-logging-part-2-windows.aspx

 

In this article we’ll look at some general background on syslog, how to set up a central collector on a Linux server running rsyslog, and how to configure a second Linux server to forward its events to the central collector.

 

1. Syslog Background:

Linux systems use syslog as their standard logging system, much as Windows uses the Event Viewer. By default, after installing a Linux box the system will start logging events to a number of files located in /var/log.

Syslog was invented back in the 1980s by Eric Allman as part of the sendmail project. The initial version, syslogd, has been superseded by updated versions of the logging daemon. Within the Linux/Unix world there are now two big players: syslog-ng and rsyslog. I mostly use Ubuntu and CentOS, both with rsyslog. It is however important to know syslog-ng exists, as it requires different setup steps when connecting to the Loggly cloud connector.

As a Windows administrator you might need to get used to flat-file logging and syslog. An important thing to understand when you are working with syslog and learning the Linux lingo is that syslog is actually three things in one:

- An application that collects events

- The name used to reference the data/events on disk

- A protocol for forwarding events

I set up a Linux system in my Azure lab, and when you have a look at the default logging location you see the system creates different log files for different “applications”. For a number of applications you’ll also see files with incremental numbers and the .gz extension, the Linux zip equivalent. Syslog will rotate the log according to your configuration and zip the older logs to conserve space.

image

A syslog message has a number of standard fields.

Facility => used to standardize the specification of what type of program logged the message. There are 24 standard values (0–23):

Facility Number | Keyword  | Facility Description
0  | kern     | kernel messages
1  | user     | user-level messages
2  | mail     | mail system
3  | daemon   | system daemons
4  | auth     | security/authorization messages
5  | syslog   | messages generated internally by syslogd
6  | lpr      | line printer subsystem
7  | news     | network news subsystem
8  | uucp     | UUCP subsystem
9  | -        | clock daemon
10 | authpriv | security/authorization messages
11 | ftp      | FTP daemon
12 | -        | NTP subsystem
13 | -        | log audit
14 | -        | log alert
15 | cron     | clock daemon
16 | local0   | local use 0 (local0)
17 | local1   | local use 1 (local1)
18 | local2   | local use 2 (local2)
19 | local3   | local use 3 (local3)
20 | local4   | local use 4 (local4)
21 | local5   | local use 5 (local5)
22 | local6   | local use 6 (local6)
23 | local7   | local use 7 (local7)

Severity => indicates how important the message is. There are 8 standard levels.

Code | Severity | Keyword | Description | General Description
0 | Emergency | emerg (panic) | System is unusable. | A "panic" condition usually affecting multiple apps/servers/sites. At this level it would usually notify all tech staff on call.
1 | Alert | alert | Action must be taken immediately. | Should be corrected immediately, therefore notify staff who can fix the problem. An example would be the loss of a primary ISP connection.
2 | Critical | crit | Critical conditions. | Should be corrected immediately, but indicates failure in a secondary system; an example is the loss of a backup ISP connection.
3 | Error | err (error) | Error conditions. | Non-urgent failures; these should be relayed to developers or admins. Each item must be resolved within a given time.
4 | Warning | warning (warn) | Warning conditions. | Not an error, but an indication that an error will occur if action is not taken, e.g. file system 85% full. Each item must be resolved within a given time.
5 | Notice | notice | Normal but significant condition. | Events that are unusual but not error conditions; might be summarized in an email to developers or admins to spot potential problems. No immediate action required.
6 | Informational | info | Informational messages. | Normal operational messages; may be harvested for reporting, measuring throughput, etc. No action required.
7 | Debug | debug | Debug-level messages. | Info useful to developers for debugging the application; not useful during operations.

Hostname

Timestamp => timestamps are very important to keep an eye on when doing logging in general, and certainly when consolidating logs to cloud services. You will want to check and ensure the time reference is consistent across systems.

Message

 

When you stay on your local machine there is not much you will typically change in your syslog daemon. However, as you read above, syslog is not limited to collecting and storing messages locally. It can also forward syslog messages to a central server.

When you look at a syslog message on the wire you will see three distinct parts:

1. PRI => a number enclosed in angle brackets. It represents both the Facility and Severity of the message in a single eight-bit value. The 3 least significant bits represent the Severity of the message (with 3 bits you can represent 8 different severities) and the remaining 5 bits represent the Facility. Both Facility and Severity are values given to the message by the application logging the data, and syslog can then filter using this data.

Below is an example of a PRI sniffed in my demo lab. The actual PRI value is <14>, however when you break it down into its two components you see the facility is 1 (a user message) and the severity is 6 (informational).

The formula behind this is multiplying the Facility number by 8 and then adding the numerical value of the Severity.

1*8+6=14

image

2. HEADER => this part contains two subfields:

               - Timestamp

               - Hostname or IP address of the device

3. MSG => This is the rest of the data and will typically contain a TAG field and data content.

Syslog is mostly based on unreliable UDP traffic running on port 514.
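To make the three wire-format parts concrete, here is a short Python sketch that assembles and decodes a classic BSD-style packet by hand (an illustration only; the hostname and tag are made-up values):

```python
import time

def build_packet(facility: int, severity: int,
                 hostname: str, tag: str, content: str) -> str:
    """Assemble PRI + HEADER + MSG in the classic BSD syslog style."""
    pri = facility * 8 + severity                # PRI: Facility * 8 + Severity
    timestamp = time.strftime("%b %d %H:%M:%S")  # HEADER: timestamp...
    header = f"{timestamp} {hostname}"           # ...and hostname
    msg = f"{tag}: {content}"                    # MSG: tag + content
    return f"<{pri}>{header} {msg}"

def decode_pri(pri: int) -> tuple[int, int]:
    # The 3 least significant bits hold the Severity, the rest the Facility.
    return pri >> 3, pri & 0b111

packet = build_packet(1, 6, "labserver01", "myapp", "hello collector")
print(packet)          # starts with <14>, matching the sniffed example
print(decode_pri(14))  # (1, 6) => user.info
```

Real implementations follow the finer points of RFC 3164/5424 (space-padded day numbers, structured data, and so on); the point here is just how PRI, HEADER and MSG fit together.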

 

2. Create central collector:

Enough on the background of syslog; by now you should have a good idea of what it does and some of the technology behind it. For our lab we want to configure one Linux server as a syslog collector, behaving more or less identically to an Event Viewer collector server in Windows.

To do this in a graceful way we need to do two things in the rsyslog.conf file.

1. Configure syslog to accept incoming messages from other servers.

Uncomment the values below to activate the listener:

# provides UDP syslog reception. For TCP, load imtcp.

$ModLoad imudp

# For TCP, InputServerRun 514

$UDPServerRun 514

2. Configure syslog to create a directory for each incoming server within the /var/log directory structure.

# This one is the template to generate the log filename dynamically, depending on the client's IP address.

$template FILENAME,"/var/log/%fromhost-ip%/syslog.log"

# Log all messages to the dynamically formed file. Now each clients log (192.168.1.2, 192.168.1.3,etc...), will be under a separate directory which is formed by the template FILENAME.

*.* ?FILENAME

* If you get an error after updating this setting saying syslog can’t create the folder, change the values of $PrivDropToUser and $PrivDropToGroup to a user and group that have the right to create folders in /var/log.

** A good way to limit who can forward events to this collector is by configuring iptables to drop all syslog packets from unknown/untrusted sources.
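After a few clients have been forwarding for a while, the FILENAME template above leaves one directory per sender IP under /var/log. A hypothetical Python sketch to list them (assuming the directory layout produced by that template):

```python
import os
import re

LOG_ROOT = "/var/log"

# Directories created by the FILENAME template are named after the
# sender's IPv4 address, e.g. /var/log/192.168.1.2/syslog.log.
ip_dir = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")

senders = sorted(
    d for d in os.listdir(LOG_ROOT)
    if ip_dir.match(d) and os.path.isdir(os.path.join(LOG_ROOT, d))
)
print(senders)  # e.g. ['192.168.1.2', '192.168.1.3']
```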

3. Forward messages to your collector:

Now that the hard part is done and your central collector is ready to go, all you need to do is configure your other Linux servers to forward data to the collector. To do this, simply add the value below to your rsyslog.conf file.

*.* @<ip of your collector>:514

* This will forward all your logs off to the collector.
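A quick way to test the chain from the forwarding side is Python’s standard library, which can emit a syslog message over UDP without touching rsyslog at all (a sketch; 192.0.2.10 is a placeholder for your collector’s IP):

```python
import logging
import logging.handlers

# Send one user.warning test message to the collector over UDP/514.
# 192.0.2.10 is a documentation placeholder; use your collector's address.
handler = logging.handlers.SysLogHandler(
    address=("192.0.2.10", 514),
    facility=logging.handlers.SysLogHandler.LOG_USER,
)
logger = logging.getLogger("forwardtest")
logger.addHandler(handler)
logger.warning("test message from the forwarding host")
```

If everything is wired up, the message should show up under the sender’s directory on the collector (e.g. /var/log/&lt;sender-ip&gt;/syslog.log with the template from section 2).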

Part1: http://trycatch.be/blogs/decaluwet/archive/2014/01/12/consolidated-logging-introduction.aspx

In this second part of my series on consolidated logging we’ll be looking at how to consolidate all your Windows logs into one single Event Viewer. I have mentioned in many previous articles and talks that I’m a big fan of creating a management server in your IT environment. Basically this is your one-stop shop for managing any and all of your infrastructure. A key benefit can be to consolidate all your events into this server, giving you a single view on events throughout your environment.

This is achieved through a mechanism called Event Forwarding, or Windows Eventing 6.0, and is nothing new. It’s been with us since the release of Vista and Windows 2008 but is heavily underused in IT infrastructures today. The nice thing about this technology is that Microsoft opened it up for backwards compatibility with down-level OSes like Windows XP SP2 and Windows 2003 SP1.

Getting this feature to work is simple and here are the steps.

1. The first thing you need to understand is that there are three components to this story, and all of them require setup:

  • Forwarding server => any server that forwards its Event Viewer information to a central collector server.
  • Collector server => the server receiving the events from downstream servers.
  • Subscription => the link between the two systems, defining which events will be transferred and where they will be stored.

image * diagram for Collector initiated Subscription

2. The second thing you need to understand is that windows event forwarding can be configured in two modes:

  • Collector-initiated subscription => the collector requests events to be sent from the forwarding computers. This is a typical setup for a small environment with only a few servers.
  • Source-initiated subscription => the source computer determines when to send the events. This setup leverages Group Policy for large-scale deployment.

For this post we’ll focus on the collector-initiated subscription and move to the source-initiated subscription in a later post. As there are two computers in this setup, you are required to execute two commands, one on each computer.

3. Configure the Collector Server

execute the command wecutil quick-config

image 

4. Configure the Forwarding Server 

execute the command winrm quickconfig

image 

Running this command will start the Windows Remote Management (WinRM) service, setting it to delayed start.
Additionally, the Windows Firewall will be reconfigured to allow remote computers to connect to the WinRM service.

Add the collector computer account to the event log readers group on the forwarding computer.

image

5. Manually configure the event forwarding on the collector server. To do this, open the Event Viewer on your collector.

Right-click Subscriptions > Create Subscription

image

Enter a name and description. Ensure the name makes logical sense to you, as it will be displayed later on in the Subscriptions tree.

Choose which destination log you want the events to arrive in. The default value, Forwarded Events, is usually the best idea.

image

The radio buttons give you the option to choose the subscription type. We are using the collector-initiated setup.

Next, add a computer to this collector and test the connection:

image

The final step is to configure which events will be forwarded. Here you really determine the depth of your central collector. Often it’s enough to only collect critical and error events from the different event logs, but this is of course entirely up to you.

image

When you are done the subscription should be active.

image

The forwarded events should appear in your Forwarded Events container, or whatever container you specified.

image

 

6. Once you are done you will want to test your setup.

You can force the creation of an event using the command

eventcreate /id 100 /t information /l application /d "Event forwarding Test"

image

7. From the advanced menu you can manage the amount of bandwidth usage during event collection.

image

Normal

This option ensures reliable delivery of events and does not attempt to conserve bandwidth. It is the appropriate choice unless you need tighter control over bandwidth usage or need forwarded events delivered as quickly as possible. It uses pull delivery mode, batches 5 items at a time and sets a batch timeout of 15 minutes.

Minimize Bandwidth

This option ensures that the use of network bandwidth for event delivery is strictly controlled. It is an appropriate choice if you want to limit the frequency of network connections made to deliver events. It uses push delivery mode and sets a batch timeout of 6 hours. In addition, it uses a heartbeat interval of 6 hours.

Minimize Latency

This option ensures that events are delivered with minimal delay. It is an appropriate choice if you are collecting alerts or critical events. It uses push delivery mode and sets a batch timeout of 30 seconds.

 

In the next post we’ll be looking at the more robust setup using GPO for source-initiated subscription scenarios.

Join us at our next WinTalks event on 6/02/2014 at Microsoft, where Stijn Callebaut will show us all there is to know about PowerShell Remoting.

Click on the eventbrite icon to register for this great event:

 

image

 

Event Details

One of the most important features PowerShell offers is the ability to remotely manage your servers, both on-premises and in the cloud. No matter if you are new to PowerShell management or have been a veteran for many years, this session is for you!

We are excited to announce our guest speaker Stijn Callebaut, consultant for Inovativ, focusing on System Center, PowerShell and more.

We hope you can join us for this upcoming event and encourage you to follow Stijn on twitter @StijnCa

Doors open at: 18u30
Event starts at 19u00

WinTalks

image

During the Christmas and New Year period I had some quiet time, so I spent it looking at a topic that has been on my wish list for a long time now.

“Consolidated logging” is the concept of bringing all your logs from multiple locations into one central logging system. Of course, bringing all your logs to one location has no value if you don’t put a shell on top of it to make it easy to filter and search.

Many big companies already have this in place; however, with cybercrime on the rise and budgets being tight, this is also something small and midsize organizations should be looking at.

In a series of blog posts and a presentation I’m prepping for an upcoming community event I’ll guide you all through the process of getting a basic setup up and running.

If you need a heads-up on why you should implement this and what the ROI is:

- It significantly reduces the time to review logs, and generally you will learn more from what you see than when using decentralized logging, as that is often only consulted when things go wrong.

- It can give you a single-instance view on a multi-tier application and thus also reduce your time to troubleshoot.

- If you do it right, it will ensure you have an audit trail for forensics if you are ever a victim of a cyber attack.

During this series I’ll be using a cloud based log collection system, there are a couple of reasons why:

- As with many things, cloud services can significantly reduce your setup time.

- Consolidated logging can require a huge amount of storage for your log collection and CPU power when filtering and searching through millions of log records. Doing this in-house might not be the most cost-effective option in a cloud world.

- It’s a great way to start using the cloud beyond traditional mail and file sharing. This is a good thing for IT departments, certainly as it’s non-intrusive for end users.

- And lastly, it can reduce the chance of a hacker tampering with your log collector.

For the tutorial I’ll be using www.loggly.com.* I have been using it for a number of weeks now, and although it’s still not the same as a commercial or open-source system** that runs in-house, it has a zero-cost trial you can start with in minutes, a very nice mail series after signup that teaches you the ropes through a number of automated tutorials, and you can easily expand once you get your POC running and want to extend.

For a quick view on loggly: http://vimeo.com/73480120 

So, what will we actually be doing?

1) We are going to consolidate logs from multiple Windows systems to one Windows system and forward them on to the Loggly server.

2) We are going to forward Linux logs to Loggly.

3) We are going to forward appliance logs to Linux and on to Loggly.

4) We are going to forward application logs to Loggly.

5) We are going to use Loggly to filter through our logs, find things, and create alerts for events we want to be notified about.

image

(*) Please note I have no affiliation with Loggly whatsoever. I’m just using this service as an example, and I happen to like the product.

(**) The Loggly system at the moment has a number of nice parsers that read your logs and chop them up into logically structured data; however, the parser list is still limited compared to some other products and there is no way to add your own parsers at this time.

Hi everyone, just wanted to let you all know I got my MVP award renewed for 2014. I look forward to setting up some great community events again this year.

 

Tom

I ran into an issue today on a Surface Pro. For some reason it would just not turn on, despite the unit being plugged into the power outlet. The question is: what do you do on a unit you can’t pull the battery out of?

MS has documented most of what you need to do.

https://www.microsoft.com/surface/nl-be/support/warranty-service-and-recovery/surface-pro-wont-turn-on

All of the above steps, however, gave no resolution.

Finally we got it to fire up by resetting it: hold down the Volume button + Power button.

I have been a big fan of Microsoft’s RDP technology since day one. Somehow they always seem to add new features that really make our lives easier and the overall RDP experience better and better.

With 8.1 they put a lot of effort into enhancing the client experience for multi-screen setups, resolution changes, and more; all things I have been confronted with myself the past few months since moving to a hybrid laptop workspace.

Read this to learn more http://blogs.msdn.com/b/rds/archive/2013/12/16/resolution-and-scaling-level-updates-in-rdp-8-1.aspx#comments

 

http://blogs.windows.com/windows/b/extremewindows/archive/2013/07/15/windows-8-1-dpi-scaling-enhancements.aspx

With the way Windows 2012 is neatly integrating DirectAccess, and MS moving more and more in the direction of constant connectivity to the corp net, it’s not surprising that after TMG, UAG is now being removed from the assortment of Forefront products.

Mainstream support will continue through April 14, 2015, and extended support will continue through April 14, 2020.

http://blogs.technet.com/b/server-cloud/archive/2013/12/17/important-changes-to-the-forefront-product-line.aspx

If you have a Windows 8 tablet and use OneNote, you will want to read this post on the new release features.

 

http://blogs.office.com/b/microsoft-onenote/archive/2013/11/25/a-big-onenote-update-for-windows-note-taking-devices.aspx

Did you miss your WinTalks event but want to have a look at the slide deck? It’s available online now on our website’s download section:

image

www.wintalks.be

If you have not done so yet, we still have some room at our next WinTalks event on Thursday.

Register now!!

 

image

If you enjoy Nmap but forget those useful command options each time you need them, then this great cheat sheet from pen-testing.sans.org might be just what you were looking for:

http://pen-testing.sans.org/blog/pen-testing/2013/10/08/nmap-cheat-sheet-1-0

I posted a while back on my Dell Latitude 10 tablet, promising I would give you feedback on how I was doing. The answer is: GREAT.

The device has truly changed the way I do my daily tasks and is enabling me to do things I could not do before on my classic laptop. My daily job basically requires me to do two things.

 

1) First, I need to connect to client-to-site VPNs and remotely manage infrastructures over SSH, PowerShell and RDP. The Latitude is perfect for this, as it’s a full-featured Windows device supporting all the networking tools I need for remote management. I don’t run many tools natively off the device, as my work is in and out of datacenters. Once I remote control a system, I harness the power of the remote systems to do the heavy-duty work.

There is no more need for me to have a huge laptop with many VMs as labs, as I have very much shifted my work into the IaaS cloud; more on this later.

2) Secondly, I hop in and out of meetings. A great majority of my meetings have now shifted from face-to-face to remote video conferencing. I’m sure many of your businesses have done the same or will do so in the next year. It just makes sense from an economic perspective, and it makes life more enjoyable by enabling anytime, anywhere working environments. A huge part of these meetings are technical design meetings that require me to visualize thoughts or make complex IT structures simple to understand for executives. The best way to do this has always been to build up the story through whiteboarding: it helps the attendees see the different components of a solution and ends with the broad overview. On the classic laptop I was using I had lost this powerful medium, but with the Latitude 10 and OneNote it all came back, better than ever.

OneNote and the digitizer pen capabilities of the Surface Pro, Latitude 10, and similar devices must be two of the most underappreciated features ever. They really make writing on a digital device possible at a level we could only dream of. It’s 100% integrated, simple to use and just works.

Before the meeting I head out to Outlook to create a OneNote page from the meeting request. This injects the meeting info directly into the OneNote file, ensuring I have all the meeting details in the note. When the meeting starts I use the device flat on the table with the digitizer pen and the OneNote app. To ensure optimal audio quality for myself, and for any colleagues with me in the room, I often connect a Jabra Speak 510 to the tablet.

WP_000359

My note files are stored directly on SkyDrive. After the meeting I head to the OneNote desktop application, let it sync the notes down from SkyDrive, and convert the notes to PDF, sending them out to the attendees within minutes.

And not only does it help my whiteboarding and video conference meetings, it also replaces the laptop + Livescribe Echo pen solution I used to take with me. Call me old school, but writing things down with pen and digital paper just helps me memorize things and triggers my brain, while typing is more an input-and-forget experience for me.

Now, to every good story there must be a BUT, and I explicitly put it in the picture above. With paper and pen you never used to need to worry about whether it would work. A tablet solution is more like a cell phone: you really have to think about power. With a laptop it used to be a breeze to just plug in the power and continue working. With tablet-based devices the power cord very much hinders your writing and working. Even more, we have been Dell customers for years in the Latitude series and the chargers could be interchanged. Now I’m left with only one or two chargers; if I forget to charge, or forget a charger, I can literally be left dead in the water. Many of you may smile now, but it is an important thing to think of when you move to the mobile computing world. Just as with my cell phone, I plug in the tablet every evening, and I need to remember to take my power cord with me or I could run into trouble.

In any case, if you want to go more paperless, have a look at a tablet with a digitizer pen and OneNote; you just can’t compare it to the iPad-with-stylus solution. The pro tablets have separate touch and digitizer technology built into the box. This makes it possible for you to rest your hand on the device while still being able to write. This, in combination with the fine tip and fast response of the digitizer pen, gives you a clean and responsive experience no other device can match.

Hi everyone, if you have not seen it, I’m happy to let you all know our next WinTalks event is on the board. We are getting together with as many IT pros as possible in the Brussels area on 24/10/2013 to talk about patch management.

No matter if you’re new to patching or a long-time veteran, patching is a key element of your security strategy and system/application lifecycle. During the session we’ll focus on this broad topic and try to spark you all with some creative thoughts, but more than that we hope to engage attendees to openly discuss their success and painful stories.

The event is free but you do need to register upfront.

image

As stated in my previous post, during my holiday I’m playing around with Win2012 R2 and working my way through the security book End to Edge. One very important part of security has always been certificates, so for sure this needed to be part of my new demo environment. While reading I ran into an important side note that many of you might already know and others might need to know.

2012 CAs are set to encrypt CA RPC communication by default. Windows XP clients are too old to know what to do with this.

So if you plan on using a 2012 CA you will need to:

Option 1) Upgrade those XP machines to Windows 7

Option 2) Disable the encryption on the 2012 CA (not recommended, of course)

more info: http://blogs.technet.com/b/instan/archive/2012/11/12/xp-and-w2k3-clients-are-by-default-unable-to-enroll-from-w2k12-ca-servers.aspx
