
Jordi Boggiano: passionate web developer, specialized in web performance and PHP. Partner at Nelmio, information junkie and speaker.


Composer: Installing require-dev by default

Jeremy Kendall started a small Twitter shitstorm last night by asking why Composer's install command now installs the require-dev dependencies by default. Indeed, until a few months ago the only way to install dev requirements was to run Composer commands with the --dev flag. This changed when the require-dev handling was reworked to be a lot more reliable, and the update command started installing dev requirements by default.

A couple of months ago, when releasing alpha7, I took care to note in the changelog that the install command would also start installing dev requirements by default in the next release. I made that change some weeks ago, and now people have started to notice.

The rationale behind the change is fairly simple: it's about consistency and ease of use. Consistency between the various commands, which now all default to having require-dev enabled. Ease of use because in 99% of cases, when you type a Composer command by hand, you should be doing so on a dev machine, where it makes sense to have dev requirements enabled. The only case where you want them disabled is when deploying to production or other similar environments. Since those deployments should be scripted, adding --no-dev to your script makes more sense than having to type --dev every single time you run Composer. I understand it may create some pain in the short run - although having dev requirements installed in prod is usually harmless - but I truly believe it is the right thing to do if you look at the big picture.
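Concretely, a scripted deployment step could look something like this (a sketch only; any flags beyond --no-dev depend on your setup):

```shell
#!/bin/sh
# Illustrative deploy step: install the locked dependency versions,
# skipping everything declared in require-dev.
composer install --no-dev
```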

Jeremy also said that install is meant for prod, and while this is not a wrong statement per se, I would like to take the chance to clarify that install is not only meant for prod. Install should be used for prod, for sure, because you don't want the prod server to run newer packages than those you last tested on your dev machines. But in many cases developers should also run install just to sync up with the current dependencies of the project: when pulling in new code, or when switching to an older feature branch or an older release to do a hotfix, for example. Developers may also need to run install in larger teams where only a few select devs are responsible for updating the dependencies and testing that things still work, while the other devs just run install to sync up with those changes.

And for those who are still not committing their composer.lock file: note that the above paragraph only applies if you have a lock file available in the project's git repository. If you are not sure what this file does, please read more about it in the docs.

July 11, 2013 // PHP

Composer: an update on require-dev

Update: the install command now also installs dev requirements by default; read more about the rationale in the post above.

Using require-dev in Composer you can declare the dependencies you need for development/testing. It works in most simple cases, but when the dev dependencies overlap with the regular ones, it can get tricky to handle. In too many cases it also tends to just fail at resolving dependencies with quite strange error messages.
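For context, require-dev is a separate block in composer.json, next to the regular requirements; something like this (the package names and version constraints here are only illustrative):

```json
{
    "require": {
        "monolog/monolog": "1.*"
    },
    "require-dev": {
        "phpunit/phpunit": "3.7.*"
    }
}
```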

Since this was quite unreliable, I set out to rework the whole feature this week-end. The patch has been merged, and it fixes six open issues, which is great. The short story is that it now does things in one pass instead of two as before, so it should be faster and a lot more reliable. Also, dev dependencies can now impact the non-dev ones without problems, since everything is resolved at once.

Workflow changes

I took the chance to change another thing while I was at it. The update command now installs dev requirements by default. This makes sense since you should only run it in dev environments. No more update --dev: the dev flag is now implicit, and if you really don't want these packages installed you can use update --no-dev instead.

The install command on the other hand remains the same: it does not install dev dependencies by default, and it will actually remove them if they were previously installed and you run it without --dev. Again this makes sense, since in production you should only run install, to get the last verified state of your dependencies (stored in composer.lock) installed.

I think this minor change in workflow will simplify things for most people, and I really hope it doesn't break any assumptions that were made in third party tools.

March 04, 2013 // PHP

One logger to rule them all

I called the vote on the Logger Interface proposal last week. When the vote ends next week it will become PSR-3, since it has already collected a majority. It will be the fourth recommendation from the PHP-FIG group, and the first one to actually include interfaces/code, which is a great milestone.

You can read the proposal if you have not done so yet, but I wanted to discuss the goal and the long-term hopes I have in more detail here.

Where we come from

Most PHP frameworks and larger applications have in the past implemented their own logging solutions, which makes sense since I think everyone recognizes the usefulness of logs. Traditionally those projects did not have many external dependencies, and established libraries were few and far between, so libraries having no logging capability was not such a hindrance.

Libraries deserve logs too

Yet in the last couple of years, thanks to GitHub allowing easier sharing, Composer allowing more reusability, and mentalities slowly shifting to a less-NIH approach, we are seeing more and more libraries used in applications and even by frameworks themselves. This is great, but as soon as you call a library you enter a black box, and if you want anything to show up in your logs you have to do the logging yourself.

The availability of the PSR-3 interface means that libraries can optionally accept a Psr\Log\LoggerInterface instance, and if it is given to them they can log to it. That opens up a whole lot of possibilities for tighter integration of libraries with the framework/application loggers. I really hope library developers will jump on this and start logging more things so that when things go south it is easier to identify problems by looking at your application logs.
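As a sketch of what that opt-in logging can look like (the class and method names here are hypothetical; only LoggerInterface and NullLogger come from the psr/log package):

```php
<?php

use Psr\Log\LoggerInterface;
use Psr\Log\NullLogger;

// Hypothetical library class that optionally accepts a PSR-3 logger.
class PaymentGateway
{
    private $logger;

    public function __construct(LoggerInterface $logger = null)
    {
        // Default to the no-op NullLogger so the rest of the code
        // can log unconditionally.
        $this->logger = $logger ?: new NullLogger();
    }

    public function charge($amount)
    {
        $this->logger->info('Charging {amount}', array('amount' => $amount));
        // ... actual work ...
    }
}
```

The application then passes in whatever PSR-3 implementation it uses, and the library's messages show up in the application logs.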

Take a deep breath

I am sure people will have questions or complaints regarding details of the interface itself, but I hope this helped you see the broader benefits it brings.

December 13, 2012 // PHP

Encouraging contributions with the Easy Pick label

One of the barriers to converting users into contributors in an open-source project is that many people have no idea where to start. They are usually scared to take on large tasks because they are not comfortable enough with the code-base. Yet I think there are ways you can help them as a project maintainer.

One good way I found to fix this is to tag specific issues that are a good starting point for new contributors. The practice would be even more effective if more projects did the same, so that people know to look for it.

The way I do it is with a custom Easy Pick label on GitHub, indicating issues that are just that: easy-to-pick-up tasks, either because they are small in scope or because they don't involve much in-depth knowledge of the project.

The result is a much clearer view of the issues. On the Composer project for example, if you go to the issues tab and filter by "Easy Pick", you end up with 14 issues listed instead of 170. That's a much more manageable amount to look at and pick from, and you are empowered by the knowledge that those should all be reasonably easy to work out.

I also created this label in the Symfony2 issues a while back. As you can see, both use the same wording and the same yellow, which is one of the default label colors on GitHub.

I would love to see this spread, because I have already seen it bring in a few new contributors. So if you feel like encouraging people to join in on your project, give it a try. And if you feel like giving back this week-end, browse the issues of the projects you use and enjoy, and see if you find anything you can help them with.

November 16, 2012 // PHP

I'm going nomad - introducing Nelmio

After almost three years working at Liip, I have finally decided to take the plunge and start my own business. Together with Pierre Spring, in early May we will start building up Nelmio.

Why? To keep it short, Liip is a great company to be employed at - and they're hiring - but both Pierre and I have had the urge to be our own bosses for a while, and that is something that's hard to suppress. Eventually we had to give in.

What next? We're both web devs, with lots of experience across the board. Pierre is probably more into JavaScript and the frontend side - along with web performance optimization. I'm a big Symfony2 contributor and more into the PHP/backend side of things, but we are both quite knowledgeable in all those technologies; they're merely complementary tools after all. Anyway, we'd love to share some of that knowledge, and will be available for some consulting/code review/coaching fun. We're based in Zürich, Switzerland, so let us know if you need us.

You can find Nelmio's site at - it's not complete yet but it should give you a good overview already. And in any case you should of course follow @nelm_io on Twitter to get more news soon :)

April 17, 2011 // News, PHP, JavaScript

Terminal (Bash) arguments tricks

Reading David DeSandro's last post on how to store strings in variables in the terminal - or any bash-y shell (I'd say any unix shell, but I'm sure there is a weird one out there that does things differently) for that matter - it struck me that many web developers seem to have a big disconnect with the shell.

Now I'm no expert, but I know that the use case he describes can be solved much more efficiently, so I felt like writing a little follow-up, and hopefully teaching you, dear reader, a thing or two. The short story is that you sometimes want to run many operations on the same file. The neat trick for that is history expansion, which allows you to reference parameters from the previous commands you typed.

As always with unix stuff, it has simple useful basics, and then it can get really hairy. Here are a few examples, from most commonly useful to those things you just won't remember in five minutes.

# First, the example from DeSandro's post
# !$ references the last argument of the previous command.

mate _posts/2011/2011-04-12-terminal-strings.mdown
git add !$
tumblr !$

# Now more complex, let's copy the second argument
# !! references the last command, and :2 the second arg. 

echo foo bar baz
echo !!:2 # outputs "bar"

# Batshit crazy
# !?baz? references the last command containing baz, :0-1 grabs the first two words (the command and its first argument)

echo !?baz?:0-1 # should output "echo foo"

Now if you've been paying attention, the second example had !! in it, which references the last command. This one is really useful for all those times you forgot to sudo something. Just type sudo !! like you really mean it, and it will run your last command prefixed with sudo. It does not work if you add cursing to it though.
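To illustrate the sudo trick (history expansion only works in an interactive shell, so this is a transcript rather than a script you can run):

```shell
$ cat /etc/shadow
cat: /etc/shadow: Permission denied
$ sudo !!
# expands to: sudo cat /etc/shadow
```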

So read up on those history expansion docs; it's really worth it, if only to know your options. And if you know other related tricks, please do share in the comments.

April 13, 2011 // PHP, Web, JavaScript

Speaking at ConFoo 2011

I recently had the pleasure to hear that I would be speaking at the ConFoo conference. This is a great opportunity for me as I'll finally be able to meet a few US-based guys from the PHP community that I have only ever met virtually.

Besides that, the conference itself also looks great, covering an insane range of topics with almost 150 sessions in 3 days and boatloads of speakers. I will give two talks. One is about JavaScript Scopes, Events and other complications, to try and illustrate why people really ought to learn JavaScript better. The second will be about frontend web performance, i.e. how to perform well in the browser. It will include a short overview of the old classics, then a couple of newer topics such as the Web Performance APIs and some more advanced optimizations that are hopefully not too widespread yet, like CSS selector performance and DOM reflow issues.

So I hope to see you all in Montreal for what promises to be a huge conference.

December 28, 2010 // PHP, JavaScript

Speaking at Symfony Live 2011

I have the pleasure to announce that I will be speaking at the upcoming Symfony Live conference (Paris edition).

I've been working with and on Symfony2 for a few months already, both in my spare time and at the office - thanks to Liip, my employer, I can work on Symfony2 patches during office hours :) - and I must say it's really nice to work with. So if you don't have time to check it out yourself you should come, and if you do have time, you should still come to share your experiences with other users and newcomers. I'm sure we'll have a great time.

It should be relevant to anyone with any sort of interest in PHP frameworks by the way, since the brand new Symfony2 framework should be released in its final version during the conference and several sessions will be focused on it. It's a rewrite from scratch so no symfony1 knowledge is required, but if you use symfony1 already you should really come to learn about the framework's future.

Now with all that said, my talk will not actually be about frameworks, Symfony or PHP! It'll be about JavaScript and why you should learn it, which I'll try to demonstrate by talking about Events, Scopes and other little things that most PHP devs like to ignore but should not. JavaScript code is present in most sites, and most PHP developers will have to write some sooner or later, so you might as well learn how to do it right, and avoid creating an unmaintainable mess in the frontend, now that we've gotten out of the PHP-spaghetti era.

You can find the full schedule on the conference website, and don't forget about the last day (hack day), I'm sure that will be an excellent opportunity to talk to people that have been using Sf2 for months, or just ask more questions to speakers you missed during the tighter schedule of the conference days.

See you in Paris!

December 15, 2010 // PHP

ESI - Full page caching with Symfony2

Launched about a month ago, techup runs on the Symfony2 PHP framework, which is still undergoing heavy development but is already a great framework.

Full page caching basics

Don't get me wrong, the framework is fast; pages are rendered by our fairly modest server in 40-50ms on average, so it hardly needs optimization. However, I still wanted to try to squeeze more speed out of it, and also get a chance to play with cool stuff, so I decided to implement full page caching with ESI in the application.

The way this works is that you typically install a reverse proxy like Varnish, which sits between the web and your HTTP server. More complex setups might include another HTTP server in front of Varnish to gzip the output, but I won't go into details on that in this post. The purpose of the reverse proxy is to cache the output of your application for as long as you specify in your Cache-Control header. Once a page is cached, it returns the output to clients directly, without ever hitting your HTTP server, PHP, or your application. Needless to say, this is ideal for performance. Symfony2 is a great match for this type of cache because it is supported natively, as I'll show, and it also implements a reverse proxy layer in PHP that can be used for development or on hosts where you don't have access to Varnish. It acts just the same and is automatically bypassed if an ESI-capable proxy is added in front of PHP.

ESI awesomeness

Of course the issue with caching the entire output is that most sites have areas with dynamic content, especially when users are logged in. This is where ESI comes into play. ESI stands for Edge Side Includes, and is a standard that defines a way to tell reverse proxies how to assemble pages out of smaller bits, that can be cached for various amounts of time, or fully dynamic.

So if you take for example an event page on techup, you have two user-dependent areas: the "login with twitter" button, which turns into "@username" once you're logged in, and the "attend" button, which shows attend or unattend depending on the user viewing the page. Those two areas are ESI includes. What this means for the reverse proxy is that it will first try to fetch the main page content out of its cache, and if found, it will then process the <esi:include src="http://..." /> tags it finds. Those tags contain the url of a sub-component of the page. So one url points to an action in one of my controllers that only outputs an attend button, green or red depending on the user viewing it. The rest of the page is still served from the cache.

Each of those sub-components has its own Cache-Control header, which means you can composite a page from various components that expire after different durations.

The way this is done in Symfony2 is pretty straightforward. Your controller actions must always return a response object, and all you need to set for the reverse proxy cache length is the shared max age of the response - beware, a plain max-age applies to browser caches as well, so you really want to use the shared variant. It's as simple as calling $response->setSharedMaxAge(3600);, 3600 being the length in seconds.
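In a controller, that could look like the following sketch (the action and template names are made up):

```php
<?php

// Hypothetical Symfony2 controller action that lets shared
// (reverse proxy) caches keep the rendered page for one hour.
public function viewEventAction($id)
{
    $response = $this->render(
        'FooBundle:Default:viewEvent.html.twig',
        array('id' => $id)
    );

    // Sends Cache-Control: s-maxage=3600, which only shared caches obey;
    // browsers are not affected.
    $response->setSharedMaxAge(3600);

    return $response;
}
```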

In your templates, if you use Twig - and you really should with Symfony2 - it is also quite easy to define an <esi:include /> tag. You call out the controller/action that you want to execute, give it some parameters, and mark it as standalone, which means it becomes an ESI include, for example {% render 'FooBundle:Default:attendButton' with {'event_id': }, {'standalone': true} %}. For more info on how to set that up, feel free to read the Symfony2 docs on the topic.

Invalidation woes

The tricky part, which is also a slightly controversial topic, is invalidation. In theory, if you say that a page or sub-component is cacheable for X seconds, you should just live with it and let it be cached, even if the data changed. This is an acceptable downside on really high traffic sites, or in cases where only admins publish content and it doesn't really matter if changes take a few seconds or minutes to appear to end users. But I like to give our users feedback when they add or change data, and I think they should see it straight away, so I decided to invalidate the cached pages in the proxy whenever the data is modified.

I will refer you to the docs on how to actually set up support for purging (invalidating) caches in your proxy of choice, as there is no point in repeating it all here. What I want to share is the approach I took to actually managing invalidation. As you may know, invalidation can quickly get very tricky to handle. So what I did is build centralized methods that contain all the invalidation logic for one domain model each. When a model changes, it is passed to the matching method and all the urls that render it are purged. This at least keeps a good overview of the pages that are affected, and gives you a single point of entry to adjust those invalidation rules.

// src/Application/FooBundle/Controller/FooController.php
protected function invalidateEvent($event)
{
    $args = array('event' => $event->getId(), 'title' => $event->getSlug());
    $this->invalidate('viewEvent', $args);
}

protected function invalidate($route, $parameters = array())
{
    $url = $this->router->generate($route, $parameters, true);

    $context = stream_context_create(array('http' => array('method' => 'PURGE')));
    $stream = fopen($url, 'r', false, $context);
}

This example implementation will do a PURGE request to the site URL. This only scales if you have one single Varnish instance though. I assume you must do a PURGE request on each if you have a redundant setup, but in this case it might become cleaner to use an external job queue like Gearman to execute those outside of php.

There are a few gotchas you should consider, especially if you use the Symfony2 reverse proxy and not Varnish. The first one is fairly obvious: you must prevent anyone else from purging stuff, otherwise attackers could DDoS you with PURGE requests and make your load skyrocket. The second issue is that if you return a 404 code for "Not purged" - i.e. the page wasn't cached - fopen() will throw a PHP warning, which is really not that nice. For this reason, and since I don't want to care whether the purge happened or not for now, I chose to always respond with a 200. It could be handled more cleanly with curl though, if you really need a proper response code for your PURGE requests.

// app/AppCache.php
protected function invalidate(Request $request)
{
    if ($_SERVER['SERVER_ADDR'] !== $request->getClientIp() || 'PURGE' !== $request->getMethod()) {
        return parent::invalidate($request);
    }

    $response = new Response();
    if (!$this->store->purge($request->getUri())) {
        $response->setStatusCode(200, 'Not purged');
    } else {
        $response->setStatusCode(200, 'Purged');
    }

    return $response;
}
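The curl alternative mentioned above could look like this sketch (it assumes the curl extension is enabled; the function name is made up):

```php
<?php

// Sketch: send a PURGE request with curl so the HTTP status code can be
// inspected, instead of fopen() emitting a warning on non-2xx responses.
function purgeUrl($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PURGE');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return $status;
}
```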

The results

It sounds nice and all, but is it actually working?

I used JMeter to benchmark the site with and without the reverse proxy. Note that I used the integrated Symfony cache layer and not Varnish, so the results would be even better with Varnish, since it is written in C and doesn't have to hit Apache and PHP on every request.


Without reverse proxy:

/ => 63 req/sec
/86/rails-hock => 100 req/sec
/api/events/upcoming.json => 70 req/sec
/api/event/10.json => 120 req/sec

With reverse proxy:

/ => 200 req/sec *
/86/rails-hock => 230 req/sec
/api/events/upcoming.json => 100 req/sec *
/api/event/10.json => 800 req/sec

* my 20mbps internet line was the bottleneck for those because of their large response bodies

In short: Holy crap. For the first two pages tested, the improvement is "modest" because they include sub-components which are not cacheable, so they always require some full framework cycles. But the last one, which is from the API, is just amazing, with 8 times more requests processed per second.

All I can say to conclude is that this is worth playing with, and that Symfony2 really doesn't disappoint with regard to speed. If you have any experience with that kind of setup and want to add anything feel free to do so in the comments, questions are also welcome.

December 09, 2010 // PHP

Speaking at the IPC and WebTechCon

Next week the International PHP Conference and the WebTechCon will both take place in Mainz, Germany. I will speak at both events over the three days, and the good news is that the combined 100 sessions are available to attendees of both conferences.

My only talk at the IPC is entitled Of knowledge sharing and the developer quality lifecycle; it's non-technical and will hopefully be more of a seeded discussion than a plain presentation. We will talk about ways to share knowledge within a company in the Gutenberg III room, Monday at 11.45.

My second and third talks are part of the WebTechCon schedule, but I think they are very good fits for PHP devs nonetheless. On Tuesday at 10.15, as part of the JavaScript Day, I will talk about JS Events and Scopes. Every web developer should understand those concepts, so I would highly recommend you attend if you don't know how the this variable is bound in event listeners, or have never heard of variable hoisting.

The final talk is part of the Web Security Day, touching on the essentials of web security - the things you just can't afford to ignore. The talk is on Wednesday at 9.00am however, so plan ahead and avoid getting too drunk if you want to attend :)

And finally, if anyone wants me to do some informal Symfony2 presentation, I got slides ready and would be very happy to do that, so just come and ask.

October 07, 2010 // PHP, Web, JavaScript
