A brute force attack on WordPress might bring a site down because password validation is expensive

There were several discussions about how brute force attacks against WordPress can bring sites down. I have to admit that I didn’t believe it, as I had never seen anything like that happen and could not think of any reason for it.

On the face of it, handling a login request is very similar to handling any other page request, with the small additional cost of one DB query to get the password to authenticate against. Oh boy, how wrong I was.

First, it turned out that because of the way the internal data structures are organized, WordPress will fetch all the data associated with the user being authenticated, which means there will be at least two queries instead of one, and it seems like the total time spent querying the DB doubles. Then you get into the password validation process, which according to security principles is designed to be a slow mathematical computation. That computation is what forces the CPU to work harder, which in the end might bring sites down.
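To make the “slow by design” point concrete, here is a minimal sketch. WordPress actually uses its bundled phpass library (via wp_hash_password / wp_check_password); plain PHP’s password_hash() is used here only to illustrate the same principle: the hash is a deliberately slow, iterated computation, so every verification burns real CPU, whether it succeeds or not.

```php
<?php
// Sketch: a failed password check is as expensive as a successful one.
$hash = password_hash( 'correct horse', PASSWORD_BCRYPT, [ 'cost' => 10 ] );

$start  = microtime( true );
password_verify( 'wrong guess', $hash );   // fails, but pays the full cost anyway
$failed = microtime( true ) - $start;

$start = microtime( true );
password_verify( 'correct horse', $hash ); // succeeds, roughly the same cost
$ok    = microtime( true ) - $start;

printf( "failed check: %.1f ms, successful check: %.1f ms\n",
        $failed * 1000, $ok * 1000 );
```

Bumping the cost parameter doubles the work per attempt, which is great against offline cracking but is exactly what an online brute-forcer exploits to burn your CPU.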

I still find it hard to believe that a properly administered site will be brought down that way, but at least now it makes theoretical sense.

There is one important thing that I observed while investigating the issue. When trying to log in with a user that does not exist, the CPU cost is about 20 times lower than the CPU cost when trying to log in with a real user (a very unscientific measurement), but the interesting thing is that when trying to log in with a valid user, it costs the same whether the password was correct or not.

Which brings us to the point of user enumeration attacks against WordPress and why the core developers are making a mistake by not addressing them. If it is hard to guess the valid users, a hacker will try all kinds of user/password combinations, and chances are most attempts will be against nonexistent users, which are “cheaper” to handle. But if there is an easy way to find out who the valid users are, the attacker will direct all attempts at those users, and even the failed ones cost relatively a lot of CPU to handle.

Sounds like until the core developers get a grip, a user enumeration prevention plugin is a good thing to have.
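One small piece of such a mitigation, as a sketch: force every failed login to show the same generic message, so the error text no longer reveals whether the username exists. The 'login_errors' filter is a real WordPress hook; the message wording is my own.

```php
<?php
// Sketch: hide "invalid username" vs "wrong password" distinctions.
function generic_login_error( $message ) {
    return 'Login failed: invalid username or password.';
}

if ( function_exists( 'add_filter' ) ) { // only inside WordPress
    add_filter( 'login_errors', 'generic_login_error' );
}
```

Note that core leaks usernames in other places too (author archives, for example), so a dedicated plugin covers more ground than this one filter.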

 

Why do WordPress plugins stop being maintained? I guess because of the freeloaders*

*freeloader – someone who uses a free thing without giving back in any way

The problem with picking software just because it is free is that the cost of switching to something else is high. Most likely a new piece of software, even if it does exactly the same thing, will at least need to be configured from scratch, as its DB/configuration files are not compatible with other software and there is no migration utility, and then of course you need to test it to make sure you have duplicated everything correctly.

In a very simplistic way, a decision to use a specific piece of software is kind of like getting married to the software’s developer. You might be getting married because that is the easiest way to have available sex, or to get rid of social pressure, but after some time you start to develop a dependency on your partner and gain common assets that are hard to divide (kids, house, etc). You might be flirting with other people, but usually the cost of divorce is just too high.
Stopping using a piece of software is like divorcing the developer; the cost is usually high.

Since you don’t want to get to a point where a divorce is the only way forward in a relationship, you are attentive to your partner, because everyone that has passed puberty knows that in real life, in order to receive you also have to give.
WordPress users apparently have not passed puberty yet.

The attitude users project toward plugin authors is one of entitlement. Users want the plugins free, want them bug free, want them to perfectly match their own unique needs, want their support questions answered in a timely manner, and want them to be maintained forever and stay compatible with every new WordPress version.
The first item – “free” – contradicts, in real life, all the others.

Some people mistakenly think that plugin developers can eat their ego, but unfortunately they also need some bread from time to time. Five-starring a plugin is nice but doesn’t help bring bread to the table, therefore forcing plugin developers to look for a real day job that doesn’t leave them any will or energy to spend on maintaining their plugins.

The Login LockDown plugin is compatible only up to 4.0.8 and has not been updated in a year. It has 200k active users. If every one of them paid a one-time fee of $1, the developer could have spent 2–3 years doing nothing but caring for the plugin’s users.

Limit Login Attempts is compatible up to 3.3.2 and has not been updated in 3 years, but it still has more than 1M active users. Now just imagine how much time the developer could have devoted to the plugin if each user donated $1 to him.

I actually got curious about the latter plugin and tried to “track down” its developer. From what I saw on his twitter stream he is doing very well. Obviously he neglects the plugin of his own free will; I guess it is because he can’t eat the 5-star reviews.

People are cheap (news at eleven), me included, but giving developers some monetary incentive to keep developing their plugins is good for the users.

In a way this is not only a problem of people being cheap; it is also a problem of the repository, which does not promote donations to actively used plugins.

The end result is that the plugins in the repository are more of a demo for the real plugin, for which you actually have to pay. This is bad because not all the developers that do this have the ability (or will) to run their own plugin update service, leaving users with no notification about security updates.

Maybe donations of $1 each are not practical; maybe what is needed is some way in which the repository (or, more accurately, the WordPress foundation) sponsors the maintenance of high-profile plugins.

 

update_option will not always save the value to the DB

Yes, I am getting to the point at which I start to call the WordPress core team members “idiots”, at least between me and myself.

Case in point is https://core.trac.wordpress.org/ticket/34689, which is about update_option not always saving values to the DB, because it checks the value returned by get_option and writes to the DB only if that value is different from the one update_option was asked to save.

Actually sounds very logical, right? If the values are the same, what is the point of wasting resources on a DB write? The problem is that the value get_option returns is not necessarily the value stored in the DB, as several filters might be applied to it; therefore, in some situations the value returned by get_option might be the same as the one passed to update_option but still different from the one in the DB.
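A sketch of the failure mode: the 'option_{$name}' filter is a real WordPress hook that rewrites what get_option() returns, but the option name and values below are made up for illustration.

```php
<?php
// Sketch: a get_option filter that masks the real DB value.
function force_flag_on( $value ) {
    return 'yes'; // returned to callers regardless of what the DB row holds
}

if ( function_exists( 'add_filter' ) ) { // only inside WordPress
    add_filter( 'option_my_plugin_flag', 'force_flag_on' );
}

// Now suppose the DB row actually holds 'no'. Because of the filter,
// get_option( 'my_plugin_flag' ) returns 'yes', so
//   update_option( 'my_plugin_flag', 'yes' );
// compares 'yes' (requested) with 'yes' (filtered), skips the write,
// and leaves 'no' sitting in the DB.
```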

So why has no one noticed it so far? I think that most people are not aware that you can filter the result of get_option, on the one hand; on the other, most update_option calls are made in the admin, where the filters mentioned above will not be set, as they are useless on the admin side.

It is surprising to discover such a bug in one of the lowest-level functions WordPress has, a function used by almost every plugin, but it just shows that in software, not knowing about bugs doesn’t mean there aren’t any, no matter how battle-tested the software is.

What is annoying is the refusal of the core team to admit that it is a bug. In software development there are all kinds of situations in which bugs result from bad design, but once a bug becomes old enough it is hard to fix, because by then everybody expects it, and it therefore becomes a feature. But when a = b does not make a == b true, there is just no way to pretend it is not a bug.

wp_is_mobile is a trap waiting to bite you, just avoid it

What can be wrong with a function like wp_is_mobile that just checks the user agent to determine if the user is on a mobile device? Even if the function works as promised, the whole idea of using the user agent to detect the type of device on the server side is wrong.

Using that function (or any server-side detection really, but I focus on WordPress here) violates the core principle of responsive design: that you serve the same HTML to all users.

In practice you will run into trouble once you want to cache your HTML, and then you will sometimes get the mobile version of the site on desktop and vice versa. The “nice” thing here is that by that time the original developer has moved on, and the site owner will have to recruit someone new to fix the resulting mess. Pros just don’t do that to clients.

What is the alternative? Detect whatever needs to be detected using JavaScript on the client side and set a class on the body element. What about people that turn off JS? I say fuck the luddites, let them have the desktop version on their mobile. OK, strike that: make your CSS as mobile friendly as possible, just don’t worry about the UX of the luddites.
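A sketch of that approach: print a tiny inline script in the footer that tags the body element with a class based on viewport width, which CSS and JS can then branch on. The wp_footer action is a real WordPress hook; the function names and the 782px breakpoint are made-up examples.

```php
<?php
// Sketch: client-side device detection via a body class.
function my_device_class_snippet() {
    return "<script>document.body.classList.add(" .
           "window.matchMedia('(max-width: 782px)').matches" .
           " ? 'is-mobile' : 'is-desktop');</script>";
}

function my_print_device_class_snippet() {
    echo my_device_class_snippet() . "\n";
}

if ( function_exists( 'add_action' ) ) { // only inside WordPress
    add_action( 'wp_footer', 'my_print_device_class_snippet' );
}
```

Because the class is set in the browser, the same cached HTML works for every device; only the client decides how it looks.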

Using Jetpack reduces the raw site performance by up to 20%

It is obvious that the Jetpack plugin has bloated code, since no one is likely to use all of its modules, but the interesting question is what the actual impact of the bloat is on the site’s performance.

According to tests done by part of the Jetpack team itself, having Jetpack active, without it even doing anything, will delay the time to first byte (TTFB) by about 70 milliseconds, taking 470ms instead of about 400ms.

The point here is not the numbers themselves, as it is not very clear what the setup of the test was, but the fact that this is something that is actually going to be noticeable, something that actually requires more server resources.

I assume that the problem is the time it takes the PHP interpreter to read and parse the Jetpack source files. People who host on a VPS can select a host with faster disk access (SSD), get better control of the file caching the OS does in memory, and cache the interpreted code; they can and should optimize these PHP interpretation aspects. People on shared hosting are just out of luck.

And the usual caveats regarding any performance-related discussion apply – if most of your pages are served from a cache, then the performance degradation in generating one page is probably much less important to you.

There is not much point in setting a minimal file size to deflate

Nginx has a directive called gzip_min_length which you can use to instruct it not to bother trying to compress files under a certain size. I spent a few hours searching for an Apache equivalent setting, just to realize:

  1. gzipping and deflating, while based on the same compression technology, differ in the generated output, especially the overhead. By the names of the settings it seems that deflate is the preferred compression on Apache, which also has a gzip module, while nginx can only gzip.
  2. For the cost of an extra 5 bytes, deflate will send a file that it fails to compress just as it is. For small JS and CSS files, especially after minification, the likelihood of getting a smaller file by compressing it is small, so you will not end up wasting bandwidth instead of saving it unless you are really unlucky (since in the end data is sent in packets of 1k+ bytes). You still waste some CPU cycles just trying to compress, but since we are talking about small files it should not be too bad. It would have been nice to have a way to signal Apache not to bother (hopefully the compression code does this internally, but I don’t see any documentation for it).
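For reference, the nginx side looks like this (a sketch; the 1000-byte threshold is an arbitrary choice, and gzip_min_length works off the Content-Length header, so responses without a known length are compressed anyway):

```nginx
gzip            on;
gzip_min_length 1000;  # responses smaller than ~1000 bytes are sent uncompressed
gzip_types      text/css application/javascript;  # text/html is always included
```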

Will the use of HipHop VM (HHVM) help make your WordPress site faster? Unlikely

It’s been a while since I last heard about Facebook’s HipHop PHP optimizer project. The first time I heard of it, it was a compiler from PHP to C++, something I had already run into with another interpreted language – TCL/TK – and it is mainly beneficial for projects in which, once the interpreted code (i.e. the PHP code) is stable and shipped, there is no need to modify it. In other words, you lose the ability to modify your code on a whim, which is the reason most sites today use interpreted languages.

I was actually surprised to learn that the main reason Facebook was unhappy with the compiler is that deploying compiled code was resource intensive, and since Facebook pushes a new update once a day, they started to look into alternatives to compiling their code into machine code.

The approach they are trying now is to write their own PHP interpreter (and a web server dedicated to running it) which uses JIT (Just In Time) technology to compile PHP code into native code and execute it. As JIT proved to be a very efficient technology when applied to optimizing JavaScript, which like PHP is an interpreted language, I find it easy to believe that it executes PHP code faster than the conventional interpreter.

But if it is faster, how come it will not make your site faster? To understand this you need to keep in mind Facebook’s scale and how Facebook works.

Facebook had at some point 180k servers. A 1% optimization will allow them to shed 1,800 servers plus the cost of their electricity and maintenance; my estimate, based on web hosting companies’ pricing, is that this might amount to saving $100k each month. So Facebook is most likely doing this optimization to reduce cost, not to improve site speed. For lesser sites, a 1% optimization will not be enough to avoid upgrading your hosting plan, and even if there were a cost benefit, for most sites the savings are unlikely to be worth the time that would have to be invested in switching to HHVM and testing your site on it, especially since it is not a fully mature product yet (just because it works for Facebook doesn’t mean it works everywhere).

The other thing to take into account is that, by its nature, Facebook can do only very limited caching, as essentially all its visitors are logged-in users. They can still keep information in memory, similar to how object caching in WordPress works, but they still need PHP logic to bring it all together, while WordPress sites can use full-page caching plugins like W3TC, which produce HTML pages whose serving bypasses PHP interpretation entirely; therefore improvements in PHP interpretation are of very little importance to those sites.

It is not that HHVM is totally useless outside of Facebook, just that its impact will be much bigger on sites larger and more complex than most WordPress sites tend to be. The nice thing about it is that it is open source, and therefore the core PHP developers can adopt the JIT techniques from HHVM into the core PHP interpreter.

The importance of the priority when using the WordPress authenticate filter

I have wasted two days wondering what had gone wrong with my plugin, which does a small extra authentication step, because I didn’t feel like diving deep into the code to figure it out. Once I did, I got the answer really fast – the authenticate filter has some unexpected weirdness that is unlike almost all other WordPress filters.

It is supposed to return a valid user, but the initial value passed into it from the wp_authenticate function is NULL, and not, as you might expect, a valid user or an error. The actual user validation is done by a core filter with a priority of 20. There is also another core filter, with priority 99, that denies login to users who were marked as spammers.

Bottom line: if you want to implement a different user/password authentication scheme, you need to hook your function at a priority lower than 20. If you want to just enhance the core authentication, use priority 21–98, and if you prefer to let WordPress reject network spammers before your function is called, use a priority of 100 and above.
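A sketch of the first case, hooking in below priority 20 ('authenticate' is the real WordPress hook; the function itself is hypothetical):

```php
<?php
// Sketch: a custom authentication scheme hooked before core's
// username/password check, which runs at priority 20.
function my_custom_auth( $user, $username, $password ) {
    // On entry $user is null, not a WP_User or WP_Error as you might expect.
    if ( $user instanceof WP_User ) {
        return $user; // an earlier filter already authenticated the user
    }
    // ... a custom user/password check would go here ...
    return $user; // fall through to core authentication at priority 20
}

if ( function_exists( 'add_filter' ) ) { // only inside WordPress
    add_filter( 'authenticate', 'my_custom_auth', 10, 3 ); // 10 < 20: runs first
}
```

Return a WP_User to accept the login, a WP_Error to reject it, or the incoming value unchanged to let the later filters decide.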

The idiotic change in the fancybox license emphasizes why developers should leave licensing to lawyers

fancybox is a jQuery-based lightbox alternative. Its version 1.0 was distributed under a very permissive MIT license, but for version 2.0 the developers apparently decided to try to monetize their success and changed the license to Creative Commons Attribution-NonCommercial 3.0, which basically doesn’t allow usage for commercial purposes.

I am all for people getting paid for their work, especially when it is so successful, but was the license change the smart thing to do? I think not:

  • While the WordPress world shows that you can make tons of money offering GPL software, with several theme and plugin developers making nice amounts of money from their work, it is strange to see someone trying to go against the tide.
  • Noncommercial is a meaningless term, as almost no one puts in the effort to make a nice site without expecting to monetize it in some way. It might be direct, as in a shop site or running ads, or less direct, as in a site built for reputation. This is basically a problem with most CC licenses, as they are not intended to be used for code – something a lawyer’s advice might have prevented.
  • How are they going to discover that anyone has broken the license terms? And even if they do, they are unlikely to have the money to sue people all over the world.
  • What incentive is there not to pirate the code? Pirating is very easy, and they don’t offer any additional service like support; therefore only people who would have been willing to “donate” in the first place will be willing to pay for the license. They might even have been willing to donate more than the requested price.
  • It is easy to circumvent the license by placing the JS file on a different domain which is truly noncommercial and using it on the main domain.

We can’t know how many users this change has cost the developers, but by the look of the site I assume the monetization scheme didn’t work too well for them. Maybe it is time to change the license to something less restrictive.

Every user that has loaded any page of your site is your user

I find that I am annoyed with the way WordPress classifies users: there are administrators, editors, authors, contributors and subscribers. This classification is based entirely on what the user can access in the WordPress admin, but most people that use your site don’t have an account and therefore are not classified at all, which is a big mental mistake.

Users without an account can be:

  • casual reader – accesses your site at random intervals
  • follower – reads every new post or checks your site every week
  • commenter – leaves a comment
  • RSS subscriber – follows updates via RSS
  • email notification subscriber
  • newsletter subscriber
  • discussion follower – follows comment updates via RSS or email

And maybe there are more types. This kind of profiling of your users should help you monetize your site while keeping all your users as happy as possible.

For example, some sites don’t show ads to logged-in users, treating them more as partners than a source of income, but maybe it would be wise to treat commenters the same way?