Symfony Live London 2013

Upgrading a legacy PHP application with Varnish and Symfony2

Craig Marvelley

Presentation

Video

Transcript

Excerpt from the automatic transcript of the video generated by YouTube.

Hello everybody, thank you for coming to this talk, which is about transforming a legacy PHP application with Symfony and Varnish. My name is Craig Marvelley; I'm a software developer at a company called Box UK, based in Cardiff, and we are technical partners

with a company called Careers Wales, an organisation that aims to get people in Wales into employment: right from secondary school, where kids can manage the set of lessons they enrol in for GCSEs and A-levels, through university and into

employment and finding jobs. It's a big organisation responsible for a lot of employment in Wales, so it's quite a high-traffic website, and it looks a little bit like this. It's a CMS-based website, and we built the CMS for them.

The original CMS started, I think, around 2000 or 2001, and it gradually got refactored, but the version they were running on was built around 2006, so it was seven or eight years old. It was PHP 4-based and worked quite well, but it was originally built as a CMS for a website, and as

the website grew and grew it became a lot more than that. I'm sure a lot of people are familiar with feature creep and how that can happen: what started off as a simple website evolved into this application with mapping and API lookups and all sorts of

really fancy things that the original platform wasn't designed for, and we reached a point where we couldn't go any further with the current solution. This was the case in July 2012. I've got a graph here which hopefully illustrates

what was going on. This is a load test run against the current website as it was back in July 2012, and it simulated how 10 users would hit the home page, log in as a verified user, and then navigate to a page displaying some dynamic information

available only to them. By the time we'd ramped up to 10 concurrent users, 10 users hitting the website at exactly the same time, we were looking at about 20 seconds on average for the round trip to do those three things: hit the home page, log in, and access the dynamic

information. That's a hell of a long time, and as many people are aware, that sort of timeframe is going to turn people off from using a website, so we were under a lot of pressure to try and improve this. To make things more complicated, because Careers

Wales is a body that works with schools and educational establishments, we still have peaks in traffic. At this time of year, when kids are going back to school and starting to look at their options, they use a lot of the features of the site and we see

a lot of traffic, which obviously tails off during the summer when children aren't in school. So we have to be able to handle peaks in traffic and be able to scale for that automatically. The first thing we tried was moving to EC2, so we could automatically scale the

servers we were using, which improved things slightly, but it was still slow, and that's for these reasons. Firstly, we were using PHP 4, which was obviously well past its sell-by date; I don't imagine many projects started recently use PHP 4, and

we were aware that we were coming to the end of how long we could feasibly maintain a PHP 4 codebase. Secondly, the back end for the site was using SQL Server, because historically the organisation had always used SQL Server, and they have other

organisations also accessing the data in SQL Server. Because we were using PHP 4 with SQL Server, and it was so hard for them to communicate unless we used Windows Server, we were also running Windows as a platform, with IIS. And PHP 4 on Windows with

SQL Server does not add up to a really quick stack, so that was slowing us down. Finally, we had a sub-optimal caching strategy. The basic approach was that we'd take the response that came back and hash it, and

if we found a matching hash on a subsequent request we'd return the cached response. But it only worked for an identical response, and if the response contained dynamic data that was specific to a user then we couldn't cache anything at all, because we

had to make sure we didn't return information to users that wasn't meant for them; we didn't want to share user-specific content around. So the caching strategy wasn't as evolved or as clever as it could have been. This was kind of what we had.
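The whole-response cache described here can be sketched roughly like this. This is an illustrative Python sketch, not the original PHP implementation; the class and function names are hypothetical. It shows the two limitations the speaker mentions: the cache only helps when the exact same page is requested again, and any user-specific response has to bypass the cache entirely.

```python
import hashlib

class NaiveResponseCache:
    """One cached copy per URL; nothing smarter than that."""

    def __init__(self):
        self._store = {}

    def _key(self, url):
        # Key the cache on a hash of the requested URL.
        return hashlib.sha1(url.encode("utf-8")).hexdigest()

    def fetch(self, url, generate, user_specific=False):
        # Responses containing per-user data can never be cached,
        # otherwise one user's page could be served to another.
        if user_specific:
            return generate()
        k = self._key(url)
        if k not in self._store:
            self._store[k] = generate()
        return self._store[k]

calls = []
def render_home():
    calls.append(1)            # count how often we render from scratch
    return "<html>home</html>"

cache = NaiveResponseCache()
cache.fetch("/", render_home)
cache.fetch("/", render_home)  # second hit served from cache
assert len(calls) == 1

# A logged-in user's dynamic page bypasses the cache every time.
cache.fetch("/dashboard", render_home, user_specific=True)
cache.fetch("/dashboard", render_home, user_specific=True)
assert len(calls) == 3
```

On a site where most visitors are logged in and most pages carry per-user data, a scheme like this caches almost nothing, which is why the hit rate was so poor.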

This is a really simple server diagram: basically a shedload of servers, and that was it. If demand went up, if we needed to serve more requests, we just threw more servers at it. And these are Windows servers, which are not inexpensive

on EC2; they were quite high-spec, I think medium or large instances, so it could get quite expensive at times of peak load. We knew we needed to change this, and we were mindful that we couldn't start from

scratch. There's a really famous quote from an article by Joel Spolsky, where he said that rewriting code from scratch is the single worst strategic decision a company can make. He was referencing Netscape, when they went from Netscape 4 to Netscape

6: they rushed to abandon everything and spent about three years rebuilding, and by the time they eventually got the new version out, IE was so far ahead they couldn't recover. So we were aware that we couldn't just start again from

scratch and work towards a new version at some point in the future, because if we did we'd end up in that sort of situation: everything is fine to start with, but the closer you get to the end the more scared you get, and when the time comes to go live it has

all gone horribly wrong and you're lying on the kitchen floor covered in your own vomit. That was essentially what we were facing if we tried to do the whole thing from scratch. Additionally, we knew that we couldn't just hack things around and

incrementally improve performance by small degrees. The quote "quick fixes are like quicksand: clarity of code goes down, and confusion is harvested in its place" is a counterpoint to the start-from-scratch approach, and I found a GIF to go with it that looks something

like this: it looks good at the start, but then you quickly realise that you're floundering and there's not much you can do about it. Incidentally, I'll give you some good advice: if you are ever tempted to Google for people in quicksand

with SafeSearch off, do not do it. NSFW. I'm going to save you from that; I could have got sacked for that. So we knew we couldn't start from scratch, and we knew small fixes weren't going to improve things. Refactoring the application was an option,

but mindful that we were using PHP 4 on a Windows platform, there was always going to be a ceiling on how good and how fast we could make that code, and we knew it was unlikely we would ever get it fast enough on that stack. So we

needed something a bit more out of the box to fix this problem. The plan we came up with reflected that our end goal was to reduce costs, because at that point we were running a lot of servers just to maintain reasonable performance,

but we also had to improve performance directly, by a massive amount: we were looking at 20 seconds for that round trip, and we needed to get it down to four or five seconds. So we looked at developing a new application framework for

this, but we didn't want to make users aware that they were dealing with two different platforms. We had the old code and the new code, and we didn't want there to be any point where

the user had a jarring experience as they switched from one to the other. So we decided to attempt a gradual deployment to the Careers Wales domain, so that it didn't matter which application

you ended up hitting: to the user it was completely transparent. To do that we needed an API in order to access shared data, so we'd have the two stacks running side by side with an API in the middle to share data between the two, and then a single sign-on solution,

so that the user could sign in once and be signed in to both sides of the application, with a shared session carrying the user between the two. As I said, as far as the users were concerned, they wouldn't even know they were working on two
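Although this excerpt doesn't show the actual configuration, the transparent side-by-side routing described above is exactly what Varnish's VCL is designed for: the proxy picks a backend per request, so both stacks appear as one site under one domain. A minimal sketch in the VCL 3.x syntax current at the time of the talk, with invented backend names and URL patterns:

```vcl
# Hypothetical sketch: the hostnames and URL patterns are illustrative,
# not taken from the talk.
backend legacy  { .host = "legacy.internal";  .port = "80"; }
backend symfony { .host = "symfony.internal"; .port = "80"; }

sub vcl_recv {
    # Requests for sections already migrated go to the new Symfony2 stack;
    # everything else still hits the legacy PHP 4 application.
    if (req.url ~ "^/(search|courses)") {
        set req.backend = symfony;
    } else {
        set req.backend = legacy;
    }
}
```

As each section of the site is migrated, only the regular expression needs to grow, which is what makes the gradual, user-invisible cutover possible.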

[ ... ]

Note: the remaining 4,238 words of the full transcript have been omitted to comply with YouTube's "fair use" rules.