DrupalCon Portland 2013

How to log everything without going crazy

Brian Altenhofel

Transcript

Excerpt from the automatic transcription of the video generated by YouTube.

Okay, I guess we'll go ahead and get started. My name is Brian Altenhofel, and I like lots of data. How many of you all like data, like seeing lots of pretty graphs? Do you like seeing pretty pictures in your presentations? Because I don't really have any.

Up there, there are a few ways to contact me if you ever need to. I'm VeggieMeat on IRC; I'm in just about every Drupal channel, connected to it by proxy, so I'll always get the message. Connect with me on Twitter, send me an email, whatever.

A little bit about me: I have two Drupal businesses. One does development, which is VMdoh, and with the other one I do hosting (that's [unintelligible]), and that's also where I started doing a whole lot of centralized logging. I've worked with Drupal since 2008, and like I said, I like lots of data. I also like automation; it's very rare to find me not putting a task in Jenkins.

So, who knows what this is? Yeah, that's just a typical Nginx or Apache access log. Have you ever tried to go into one when

you've got an issue on your site and you need to find out what's going on? So you hit the log files. Typically you're just going to grep it, maybe send it through awk, a little bit of sed, whatever, just to find the particular pieces of information that you need. If you don't have a lot of traffic, you might just run tail on it, but if you've got three million hits a day, that's not going to work.

One way to go through this log, like I said, would be with awk. In this case we're trying to get how many times content was accessed between 3:00am and 3:59am server time. Can you remember how to do that every time you need that information? If you wanted to have some fun with it, you could maybe do some Perl, you could throw in some other regular expressions. I've had one-liners (or maybe they should have been multi-liners) aliased that would take up that entire screen, and I'm sure all of you have had that before too.
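The kind of one-liner being described might look like the sketch below, which answers the question above (hits between 3:00 and 3:59am): it assumes the common Apache/Nginx combined log format, and the sample log lines, IPs, and paths are invented for illustration.

```shell
# Write a tiny, made-up access log so the example is self-contained.
cat > /tmp/access.log <<'EOF'
10.0.0.1 - - [21/May/2013:03:15:02 -0700] "GET /node/42 HTTP/1.1" 200 512
10.0.0.2 - - [21/May/2013:03:59:59 -0700] "GET /node/7 HTTP/1.1" 200 941
10.0.0.3 - - [21/May/2013:04:00:01 -0700] "GET /node/42 HTTP/1.1" 200 512
EOF

# Split each line on ":"; in the combined format the second ":"-delimited
# field is the hour inside "[21/May/2013:03:15:02", so count hour 03.
awk -F: '$2 == "03"' /tmp/access.log | wc -l   # prints 2
```

And that's exactly the problem: nothing here is hard, but remembering (or re-deriving) the field positions every single time is.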

That kind of brings us to the problems with conventional logging. If you've got one machine, great, you can go through it that way. But what happens when you get, say, 40 web heads, or multiple database servers, or multiple file servers? You need to find the source of a problem; maybe it doesn't occur on every single one of those servers, maybe there's something slightly different because your configuration management didn't work right. Conventionally, you would need to go to every single one of those machines and access the logs, and that doesn't work with 40, or 400. And your technical support people, a lot of them won't be able to go through those logs and find out whether there's a problem. So let's say that

you had an email issue: a customer calls and says, "I'm not getting any emails." Customer support would ideally be able to say, okay, let's go look at an email, see if it's giving any errors or anything like that. With conventional logging, they won't be able to do that. You have to have a sysadmin or an ops guy or some other technical person basically be a keyboard for that person, and because of that it ends up taking a long time to fix problems. And we all know that in customer support, the customer wants their

problem fixed yesterday. So the solution to this is centralized logging. With centralized logging, you are basically shipping all of your logs to a central place, and in this case what I'm going to talk about is shipping them from your servers through Logstash (which is the cute log guy in the middle) and having them indexed by Elasticsearch. What that helps you do is that you can go back weeks from now, or whenever, and search through your logs using standard Apache Lucene queries. You can limit it to certain time periods, you can find trends; it makes life a lot easier, especially if you've got a website that's generating tens or hundreds of thousands of log messages a second.
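For example, the kinds of Lucene query strings you can type into a search frontend such as Kibana might look like this. The field names here assume typical Logstash-parsed access-log events and are purely illustrative:

```text
# All 500 responses for one path, excluding a noisy monitoring agent:
type:"nginx-access" AND response:500 AND request:"/node/42" AND NOT agent:"Pingdom*"

# Everything logged by one particular web head:
host:"web14" AND type:"nginx-access"
```

The time window itself is usually chosen in the search UI rather than written into the query string.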

So, what is Logstash? Well, Logstash is a Java application that allows you to receive messages, but it can also parse them, and then you can send them wherever you want. If you want to send them to Elasticsearch, if you want to send them to PagerDuty, if you want to send them to StatsD and Graphite, you can do all of that. It's a lot like using a Unix pipe, or using tee, except you get a whole lot more parameters, and it's a lot easier to configure than that. And if there isn't a plugin available to ship logs someplace, they actually should be pretty easy
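To make the input/filter/output pipeline concrete, a minimal Logstash configuration might look something like this. It's a sketch only: the file path and metric name are invented, and the exact option syntax varies between Logstash versions (this follows the 1.x style that was current around 2013).

```conf
# Tail an Nginx access log, parse it with the stock combined-log grok
# pattern, and fan events out to both Elasticsearch and StatsD.
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx-access"
  }
}

filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
  # Count responses per status code as they stream through.
  statsd {
    increment => "nginx.response.%{response}"
  }
}
```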

[ ... ]

Note: the remaining 2,240 words of the full transcript have been omitted to comply with YouTube's "fair use" rules.