2022 was a monumental year for all of us here at Pagely! Over this past year, we’ve been hard at work; getting acquainted with our new team members at GoDaddy, influencing the whole organization on how we do things the Pagely way, and working hard behind the scenes on a no-compromise WordPress e-commerce solution. Here’s a rundown of what we’ve been up to in 2022 and what to expect as we enter 2023:

Making New Friends

Did anyone really think that we’d just get acquired and stop trying to change the status quo? Yeah, didn’t think so. We’ve been having a ton of fun with our new GoDaddy friends. Whether it’s collaborating on new opportunities, sharing experiences, or influencing the entire GoDaddy WordPress ecosystem, we’ve certainly made our presence known.

“What happens at #wcus stays at #wcus. #pagelyparty” pic.twitter.com/tBje7U8TR4 — Pagely (@Pagely) September 8, 2022

Of course, it isn’t all rainbows and unicorn shits. As you know, we never settle – we’re always chasing perfection. We set the bar high with everything we do, but in a corporate environment, that’s not always the case – there’s a lot of comfort with the status quo. Over the past year, we’ve been cutting through the red tape, hopping over guardrails, and picking fights with the biggest guy in the room. So far, it’s working. We’re showing them how we do things around here – how we move quickly with such a small team, all while providing our customers with a premium experience. There’s still a lot of resistance and we have a long way to go, but we’re proud of the progress we’ve made. Overall, the acquisition has given us countless opportunities to change the entire managed WordPress hosting ecosystem; as you would expect, we’re going full throttle on every single one of them. Stay tuned into the Pagely Twitter for the hottest of hot takes!

Building an Open and Reliable Commerce Experience

What happens when you combine the WooCommerce expertise of SkyVerge, the commerce capabilities of GoDaddy Payments, and the battle-tested reliability of the Pagely enterprise platform? You get the prime time, feature-stacked, end-to-end WordPress solution that is Managed WooCommerce Stores – an “existential threat”.

In case you’ve been living under a rock, we’ll break it down: Managed WooCommerce Stores (MWCS) is a new GoDaddy product that provides a new end-to-end experience, changing how you do business online. Based on WordPress and seamlessly integrating with a multitude of services, MWCS empowers your business with the flexibility of WordPress, combined with a streamlined user interface, and backed by the legendary support and reliability of Pagely.

Even better – it’s not just MWCS customers that benefit from all the hard work that we’ve put into the platform – we’re passing several platform improvements on to Pagely customers as well! As we move into 2023, keep an eye out for more updates as we bake them into the Pagely platform.

A Beautiful, Accessible Atomic Experience

Another area we’ve been working on is improving the Atomic interface. In 2022, we’ve already rolled out phase 1 of the new Atomic user experience. So far, we’re off to a great start with identifying low-hanging fruit and attacking it head-on with beauty and grace. As we continue into 2023, we’ll be rolling out even more improvements to Atomic; creating an interface that reflects what Pagely is all about – a powerful, reliable, efficient, no B.S. experience.
To learn more about the changes we’ve made and get a sneak peek at what’s to come, check out our blog post about our phase 1 Atomic rollout.

Painless PHP Updates

Yeah, we know – the mere thought of major PHP updates can be stomach-churning. With PHP 7.4 reaching end-of-life, it’s that time again to get your sites updated. As we mentioned in our PHP 8.0 upgrade announcement post, we’ve been preparing so that you don’t have to break a sweat. Pro tip: now is a great time to check your account collaborators and alerts settings within Atomic – make sure your team is fully configured to receive and respond to any alerts they need!

A Clean House is a Happy House

With it being year’s end, we also wanted to take this opportunity to remind you of a few housekeeping tasks that will keep your Pagely experience running smoothly in 2023:

- Take the opportunity to review Atomic account collaborators. Add any new team members who need to be kept in the loop and remove any that are no longer required. While you’re there, don’t forget to check over their alert settings! We want to keep you up to date with any situations that may arise on your account – account alerts are the best way to ensure that the right people can respond appropriately.
- In typical Pagely fashion, we always put people first. As such, we want to remind you of our holiday schedule. So that our staff can spend as much time with their families as possible, please take a moment to review our holiday schedule and check in with your account manager with any concerns or expected changes in traffic.

In addition to reviewing your Pagely account collaborators in Atomic, the start of 2023 is a great time to perform an annual review of your WordPress sites. A few things you’ll want to review are:

- Review your WordPress users and their associated roles. Additionally, we recommend users review their security posture – make sure passwords are secure, rotated when necessary, and any applicable 2-factor authentication methods are configured.
- Audit your 3rd-party plugins and themes to ensure they’re fully up to date. Additionally, check to make sure any premium plugins/themes have current licenses so that any updates are properly managed.
- Review your deployment processes and implement responsible development practices wherever possible. Don’t let a botched deployment become your Achilles heel!

Most importantly of all, from our family to yours, we want to wish you the happiest of holidays and an incredible 2023. Cheers!

Team Pagely
“Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones. It is also possible to use Polkit to execute commands with elevated privileges using the command pkexec followed by the command intended to be executed (with root permission).”

A critical vulnerability has been made public in this component as CVE-2021-4034. According to the researchers who found the issue, this component had been vulnerable since its creation in May 2009, and any unprivileged local user could exploit it to obtain root privileges.

Timeline

- 2021-11-18: Advisory sent to secalert@redhat.
- 2022-01-11: Advisory and patch sent to distros@openwall.
- 2022-01-25: Coordinated Release Date (5:00 PM UTC).

How is Pagely Affected?

All our customers were updated immediately, on the same day this vulnerability was made public. Rest assured that your Pagely sites are protected.

For further information on how the vulnerability can be exploited, see also the original advisory: https://www.qualys.com/2022/01/25/cve-2021-4034/pwnkit.txt
As you may have heard, there’s a major security vulnerability floating around right now called Log4Shell. If not, let’s get you up to speed.

Log4Shell is a critical software vulnerability that is sweeping across millions of platforms. By utilizing a security flaw in Apache Log4j, an attacker is able to execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. Although new software vulnerabilities are discovered every day, what makes Log4Shell stand out is Log4j’s wide adoption, coupled with the high severity and the level of difficulty involved with identifying vulnerable systems. TL;DR: it’s a big deal.

How is Pagely Affected?

An important part of Pagely’s security posture is to ensure that all of our systems are well documented, making it quite easy to determine if anything in our stack is vulnerable. Thanks to strict documentation practices, as well as additional security hardening and penetration testing, we’re confident that Pagely customers have not been impacted by the Log4Shell/Log4j vulnerability.

Additional Resources

We recommend keeping up with information about this vulnerability, as there’s a reasonable chance that other services you are using could be vulnerable. For more information, see this detailed list of software affected by the Log4Shell exploit. For further information on how the vulnerability can be exploited, see also CVE-2021-44228.

As always, if we become aware of any additional concerns that arise from this vulnerability, we’ll provide further updates as necessary.
At Pagely, we’re notorious for unleashing groundbreaking products that push the limits of what it means to scale WordPress. Today, we’re pleased to announce our newest offering — Mercury, dynamic site acceleration for WordPress.

Prioritized Traffic Through Amazon’s Private Network

Speed is everything. To stay competitive in large markets, you’ll need every advantage that you can get. By using the dynamic site acceleration features of Mercury, you’ll gain substantial speed improvements, especially when it comes to traffic that travels over a greater distance. By lowering the number of hops that a packet has to take, latency can be reduced, saving each and every precious millisecond.

Security lines? Gate changes? Layovers? Skip the budget flights and jump on a private jet. Mercury gives your data the VIP treatment across the AWS private network.

On average, we’re seeing a 20-30% improvement in page load times with Mercury enabled, depending on the location and number of requests being made. Even when your page load times are already less than 500ms, an improvement of 125ms might not sound like much, but it can make a massive difference in every aspect of your site. Higher network performance means better SEO, higher conversion rates, improved user experience, and ultimately more revenue.

One Domain, Limitless Origins

With Mercury’s dynamic site acceleration enabled, assets can exist in multiple locations while being served from a single domain. That’s right – all of your CDN traffic can be served directly from your primary domain. Say goodbye to subdomains and debugging CORS errors. Enjoy your free time.

So What Does It Cost Me?

I know what you’re thinking – how much more is this going to cost me? The answer is simple: nothing. Nada. Zip. Zero. In fact, there’s a good chance that you’ll be paying even less by using Mercury than without.

Hold Up… What?

As we mentioned, one of the great features that we’re including in our dynamic site acceleration service is enhanced routing across Amazon’s vast private network. Since this results in fewer hops, the side effect is that it requires less bandwidth. Thus, your traffic not only flows faster, it’s also more efficient, resulting in decreased costs. Although we could vomit up some buzzwords and sell it to you as an additional add-on feature to increase our margins, we don’t have any investors to answer to. Our bandwidth costs decrease, so it’s only fair that yours do too.

Getting Your Hands on Mercury

Before everyone starts beating down our doors for the goods, don’t get too excited quite yet. Right now, Mercury is in a closed beta period for select Pagely customers. If you’re currently a Pagely customer, you’ll have to hold your excitement for just long enough to log into Atomic and request early access. In the meantime, to learn more about Mercury and how dynamic site acceleration can improve your WordPress site’s performance, sign up using the form below. We’ll be sending out regular updates to keep you in the loop.
While most of the obvious flash in WordPress 5.8 is on the user-facing side of things, there are some awesome quality-of-life improvements in here, focusing on blocks and how different elements of a WordPress site interact with each other. Most of all, there’s some really awesome potential to evolve into much larger things as time passes. If you’ve been waiting for a good time to take a deep dive into Gutenberg, stop procrastinating — the time is now.

WordPress 5.8 is named in honor of Art Tatum, the legendary jazz pianist. Image courtesy of WordPress.org.

Deeper Control with theme.json and block.json

The biggest impact from a development perspective is the addition of the new theme.json and block.json mechanisms for managing deeper control within the Gutenberg editor. It’s certainly the feature that I’m most excited about. Previously, anyone developing a WordPress theme had to put forth a significant amount of effort to avoid conflicts with plugins, consider child themes (and subsequent updates), and jump through numerous hurdles when passing data around. The new theme.json and related block.json metadata files help to soothe that pain by providing a standardized method for accessing and defining properties. Can we finally say goodbye to the chaos of overriding a million CSS classes just to get a page builder to stop ripping our styles apart? Time will tell.

New Plugin Header Tag: Update URI

Inside of WordPress 5.8 is a feature that you might not know about or ever encounter, but it’s a big deal for a lot of us — the new Update URI tag. In previous versions of WordPress, developers had to be careful when naming their plugin directories to avoid them being overwritten by another plugin with the same slug. This usually meant that one-off plugins couldn’t be named something like `generic-name`, because the WordPress updater could come along, find a plugin with the same slug on the plugin repository, and overwrite it with an entirely different plugin. Plugins and themes can now take advantage of the Update URI tag to provide an alternate resource or just disable updates altogether.

Further Potential?

When writing this up, I got to thinking — this tag could be useful when forking a plugin. Sometimes you just need to fix a bug before the author can provide their own fix or merge in your pull request. If you need to make a quick patch for the plugin to solve a temporary issue, you can disable updates entirely without any other changes. Then, once the fix is available from the author, you can just pull the tag and update normally. There’s some really cool potential here. I’m looking forward to seeing how this evolves in the future.

WebP support

As of WordPress 5.8, WebP images are finally supported (shameless plug: Pagely has been supporting them for a while through PressThumb). Just upload your WebP images and enjoy the smaller files and faster load times baked right into WordPress 5.8. WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25-34% smaller than comparable JPEG images at equivalent SSIM quality index.

delete_theme and deleted_theme Action Hooks

Plugins have been able to fire additional actions upon deletion for a while now, but themes had always been left out. Since nobody puts Baby in the corner, themes can now clean up after themselves too! (Yes, I’m bad about it too, but you really should add database cleanup hooks to run on deletion. A clean database is a happy database.)
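To make that concrete, here’s a minimal sketch of what a cleanup hook might look like using the new deleted_theme action. The theme slug and option name are made up purely for illustration; adjust them to whatever your theme actually stores:

```php
<?php
/**
 * Example: remove a theme's stored options when the theme is deleted.
 * 'my-theme' and 'my_theme_settings' are hypothetical names for illustration.
 */
add_action( 'deleted_theme', function ( $stylesheet, $deleted ) {
	// Only clean up if the files were actually removed and it's our theme.
	if ( $deleted && 'my-theme' === $stylesheet ) {
		delete_option( 'my_theme_settings' );
	}
}, 10, 2 );
```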
Better Control Of Revisions Based On Post Type

Storing a ton of revisions can get chaotic for your database. Although revisions are super helpful, they’re not always necessary for every post type. To have any additional control, you’d have to use the wp_revisions_to_keep filter and check the WP_Post object for the post type or whatever else you might need. WordPress 5.8 expands on this by providing an alternative wp_{$post_type}_revisions_to_keep filter, allowing you to just return a value for a specific post type (see the short example at the end of this post). Is it groundbreaking? Not really. But it’s a sanity feature that will certainly come in handy.

Additional WordPress 5.8 Resources

- WordPress 5.8 – Field Guide
- Introducing theme.json in WordPress 5.8
- WordPress 5.8 adds WebP support
- Introducing “Update URI” plugin header in WordPress 5.8

When will Pagely customers receive WordPress 5.8?

We generally do not upgrade existing sites to the first new release of WordPress. We’ll wait for version 5.8.1 or 5.8.2 so any bugs can be addressed. However, if you’re ready to take the plunge, you can contact our support team to upgrade your existing site.
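As promised, here’s a quick code footnote on the revisions change covered above. This is a minimal sketch; the post type ('page') and the revision count (3) are just example values:

```php
<?php
/**
 * Limit stored revisions for a single post type.
 * 'page' and the count of 3 are example values for illustration.
 */

// Before WordPress 5.8: one generic filter, so you inspect the post yourself.
add_filter( 'wp_revisions_to_keep', function ( $num, $post ) {
	if ( 'page' === $post->post_type ) {
		return 3;
	}
	return $num;
}, 10, 2 );

// WordPress 5.8+: the post-type-specific filter does the check for you.
add_filter( 'wp_page_revisions_to_keep', function ( $num ) {
	return 3;
} );
```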
An unauthenticated SQL injection vulnerability affecting versions of WooCommerce on more than 5 million websites on the Internet has been disclosed to the public by Automattic. Due to the nature of the vulnerability, the WooCommerce team is rolling out compulsory patching on minor versions — even if automatic plugin updates are disabled within WooCommerce or Pagely.

Vulnerability Details

We won’t provide specific details, but we can say that the function wc_sanitize_taxonomy_name allowed the vulnerability to happen due to the use of nested urldecode functions.

How Pagely Customers are Affected

We have directly reached out to all of our clients who are using an affected version of WooCommerce. In case you did not receive that notification, please be aware that patches are being rolled out by the software vendor directly, not by Pagely. We are monitoring for problems on our end, and will conduct periodic scanning to confirm all sites hosted by Pagely are getting the update. If we see any issues affecting your site specifically, we will reach out with a support ticket. If you manage your codebase using Git, please make sure the patched version makes it into your repository to prevent a regression during your next deployment.

Conclusion

While very rare, vulnerabilities of this severity require proactive action to keep you protected. This is the reason why WooCommerce decided to force minor version updates. To be clear, even if you have requested that Pagely not apply automatic updates, this update coming from the vendor directly will still occur. We wanted you to know that we are aware of this vulnerability. Since the very moment it was made public, we have been following along and making sure our customers are aware as well. If you have any questions, please do not hesitate to contact our support team.
Today, we’re proud to announce our latest open source WordPress plugin – Really Rich Results. RRR can be used to generate Schema.org structured data, empowering you to take advantage of Google’s Rich Results and featured snippets with minimal effort.

Humble Origins

Really Rich Results started as an internal Pagely tool, developed by Jeff Matson. We wanted a plugin that could build structured data from a multitude of content types and nested child elements. Although other WordPress plugins already exist, they never seemed to be quite what we were looking for. Some lacked extensibility, others required too much manual configuration, and many came bundled with other solutions that weren’t always necessary.

Over time, the idea blossomed into a full-fledged plugin. We could intelligently detect common patterns, integrate easily with other plugins, and provide robust customization options for the power users who want more control. As RRR evolved, it felt like a great way to give back to the WordPress community that has supported us over the last decade. Thus, we began rolling it all into a standalone solution that we’re releasing for everyone to use.

Making Structured Data and Rich Results Approachable

Schema.org structured data has been around for quite a few years but has only recently gained mainstream traction due to Google’s Rich Results and featured snippets. Since then, it has exploded. It has become a well-known fact that providing structured data markup on your website will positively impact your search engine rankings.

The unfortunate side effect of this rapid growth is that there’s quite a bit of conflicting information regarding best practices, how to implement some properties, and how those properties may change contextually. Creating robust structured data on a WordPress site can be quite an undertaking. It often requires a significant amount of skill and experience to build and implement. The result can often be incomplete or, even worse, invalid – resulting in little to no benefit.

Really Rich Results aims to provide every WordPress site with sane defaults and a rich set of customization options for easily adding JSON-LD structured data to your content. Whether you’re a beginner who wants to take a “set it and forget it” approach, or a seasoned SEO professional with a desire for deep customization, Really Rich Results dramatically simplifies the entire process.

Really Rich Results – Schema Information

Features

Extensive Support for JSON-LD Schema Types

Really Rich Results features several schema types that can be used as-is, extended, tweaked, or entirely replaced on a granular basis. These types include: AboutPage, Action, Article, BreadcrumbList, CollectionPage, Comment, ContactPage, CreativeWork, ImageObject, ItemList, ListItem, MediaObject, Offer, Organization, Person, PostalAddress, Product, Review, SearchAction, Thing, WebPage, and WebSite – and many more through contextual properties within these types.

Due to how these types are built, creating additional structured data objects is a breeze. Over time, we’ll be adding many new types with the goal of offering coverage for all properties within Schema.org and Google’s Rich Results.

Full Customization Options

Although Really Rich Results is fully functional without providing any customization whatsoever, it truly shines with granular control. Changing how your structured data gets output can be customized on multiple levels – from global site-wide changes to individual items and everything in between.
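If you haven’t worked with JSON-LD before, here’s the general shape of what ends up printed into a page’s head. This is a generic, hand-written sketch using standard WordPress APIs, not Really Rich Results’ actual output, and the organization details are placeholders:

```php
<?php
/**
 * Generic illustration of emitting JSON-LD structured data.
 * The organization name and logo URL are placeholders, not real data.
 */
add_action( 'wp_head', function () {
	$schema = array(
		'@context' => 'https://schema.org',
		'@type'    => 'Organization',
		'name'     => 'Example Co.',
		'url'      => home_url( '/' ),
		'logo'     => 'https://example.com/logo.png',
	);

	echo '<script type="application/ld+json">' . wp_json_encode( $schema ) . '</script>';
} );
```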
In the future, we’ll be offering even more control with better detection and deeper contextual recommendations based on the limitless scenarios that WordPress sites of all sizes may encounter. Simple and Standardized Extensibility Extending Really Rich Results is as simple as providing it with an object to use by either extending an existing object or creating an entirely new one. 3rd-party developers can build entirely new schema objects with only a single hook and class that describes the object. Always Open and Nag-Free The freemium plugin model is booming, and it’s pretty common to install a plugin and later find out that you need the premium version for the full features. This isn’t always a bad thing – plugin developers need to make money to keep their products alive and kicking – but we’re an enterprise hosting solution, not a plugin shop. Since we’re already paying the bills with our main product, Really Rich Results will always be fully featured, nag-free, and never shove advertising down your throat. If you’re a Really Rich Results user and decide that Pagely’s managed WordPress hosting is a good fit for you, awesome. If you only want to use our plugin and don’t care at all about Pagely, we’re totally cool with that too. We’ll keep improving RRR and releasing new features for everyone’s benefit. Really Rich Results in the WordPress admin. Hey look, no nagging and annoying info! How It Works Really Rich Results provides defaults for different “views” on your site. It knows what kind of content is being served and builds JSON-LD structured data based on what it sees. Depending on what it detects, it takes different approaches to accurately build everything up before eventually placing it on the page. Since every site can be different, RRR offers granular customization while inheriting properties from parent elements. For example, you may want to set a specific page as the ContactPage schema type. Really Rich Results allows you to select this type with a single click on the page level, building the applicable properties from other elements as needed. Really Rich Results – Schema settings in the post editor. For The Nerds Really Rich Results detects what content is being loaded, determines the page’s primary focus, then builds any supplemental properties that the primary object needs. From there, it waits until everything else on the site has finished loading, transforms it all into structured data objects that then get output on the page. I know what you’re thinking – that all sounds complicated and heavy on server resources. On the contrary, because of this approach, only the relevant content is being processed. Since items compile while they’re already being processed by WordPress and only performing additional processing of schema properties when necessary, everything generates quite quickly. Additionally, this methodology allows repeated objects and properties to live in a cache for future use. How to Get Really Rich Results Really Rich Results is available on the WordPress.org plugin directory for FREE. We’d Love Your Feedback and Contributions We’d love to hear your thoughts about Really Rich Results. Whether it’s contributing code, testing weird use-cases, reporting bugs, writing unit tests, or anything else, your feedback is valuable. Let us know your thoughts in the comments or on GitHub. Of course, pull requests are always encouraged!
You know when you've written an awesome blog post about optimizing #WordPress sites for speed but can't figure out the right title for it because you don't want it to sound all clickbatey? That's us right now. — Pagely (@Pagely) November 12, 2020

Across the various WordPress community groups and forums, there’s no doubt that page load times are a hot topic. Amateurs and professionals alike are always trying to squeeze out every ounce of speed they can. Although most of the questions regarding sluggish page load times come from new site owners, even seasoned professionals will chime in occasionally.

Most of the time, questions about site performance and page load times follow a pretty typical pattern. It starts with someone saying that their WordPress site is slow, asking how to fix it, and providing a link to a GTMetrix or Google PageSpeed Insights report. What typically follows are widely varying recommendations for plugins, caching, CDNs, and hosting.

Unfortunately, many of the recommendations only scratch the surface. They’re vague, provide a blanket solution, or don’t address the root cause whatsoever. Even when someone makes a proper assessment and offers a solution, it rarely explains how they came to that conclusion or addresses the root problem. It’s just left with a band-aid answer. Thus, the cycle continues.

The purpose of this post is to provide a starting point for evaluating and investigating site performance, specifically when it comes to page load times. It’s not a promise of 1-click results or any magic solution. It’s intended to provide a more in-depth explanation of how to identify issues, why they happen, and how to fix them. Let’s jump in!

A Practical Example

Let’s get started by using a practical example to investigate. For this, I’m using my friend Erik’s website. Since he’s technically inclined but doesn’t have any prior WordPress experience, his site makes a great example – a mix of right decisions and happy accidents. He has a website called FunkyBop, where he sells themed mystery boxes. While it’s not noticeably slow, it’s not very fast either. Let’s see if we can figure out what’s holding it back and squeeze out some extra speed.

Locating Problematic Pages

We’re going to start by identifying which pages and content types need the most improvement. Locating the slowest content from the very beginning not only helps improve the average across the site, but it’s also the lowest hanging fruit. Additionally, there’s a high chance that these improvements will affect other pages that share the same elements. For this site, I’ve broken it up into a few different content types:

- Home page
- Shop landing page
- Product category page
- Single product page

Let’s go ahead and test them using GTMetrix:

From here, we can see that most of the pages load somewhere around 3.5 seconds, but one in particular stands out — the single product page, which loads in 5.4 seconds. It looks like we’ve found our starting point. Let’s start digging around!

Investigating Factors, Not Grades

The first thing that people usually notice is the PageSpeed and YSlow grades. They’re easy to read and provide a target to achieve. While these grades can be useful for a broad picture, they ultimately exist as general suggestions, not rules. I’ve seen countless posts from people who get an “A” grade but still wonder why their site is still slow. Just because you’re checking off a list of general guidelines doesn’t mean that everything is ideal.
On the product page that we’re working on improving, let’s look at a few key factors. The first is the total page size, which is 1.94MB. That’s certainly not bad for most sites. But what we do see here is a large number of requests happening to load that ~2MB of data. 158, to be exact. Let’s take a more in-depth look in the next section to see if that’s making a difference. Digging Through the Waterfall To investigate the requests that are happening when someone visits the page, we can look at the waterfall chart. This chart allows us to see what requests happen, when they occur, and how long different segments take. Initial Request Time The first thing that sticks out when we test this site is the initial request. After the page is requested, it’s taking almost a full second before it’s delivered by the server, not including additional resources, such as images, styles, or scripts. That’s certainly not good. Why is it taking so long? You’ll notice that the bar on the timeline is color-coded. This color-coding allows us to easily see what portions of the request take different amounts of time. The purple section is the time spent after the server has received the request and the browser is waiting for a response. On this page, our browser is just sitting there, twiddling its thumbs for 740 milliseconds. From here, we can assume that there’s either a backed up worker queue on the server, lots of heavy code being processed, a slow database query, or a combination of all three. Although we won’t go into all of the endless details of performance testing your code, we can at least attribute some of the slowness to the initial request. Timeline Steps Within the timeline, you’ll notice several vertical lines. These lines correspond with different steps that take place while the browser is rendering a page. Here’s a quick overview of those steps: First Paint: Marks the point when the browser has started rendering anything on the page. DOM Loaded: Marks when the browser has determined that the page is ready to start displaying. Onload: Marks the moment when the browser has finished downloading everything. Fully Loaded: Occurs 2 seconds after the last request completes, and the page is ready for consumption. At this phase of our investigation, we want to look at two things: First Paint and DOM Loaded. Since the browser isn’t displaying anything until the first paint, improvements in these stages have the most significant impact on user experience. If we look at all of the items that happen before the first paint occurs, we see a significant amount of CSS and JavaScript files being downloaded and processed from multiple different sources. Although HTTP/2 uses multiplexing to download multiple items simultaneously to achieve faster load times, browsers will still enforce limitations on the number of things that they can process at once. If it hits the limit, it won’t process the next item until another has finished. Key elements to look at are how long other tasks block the item and how long the browser is waiting for a server response. Since these are static files, the wait time should usually be minimal; regardless of the queue and file size. But as we see here, many of the assets are waiting over 100ms for the server to respond. Combining that with the resource queue filling up, we have a pretty good chunk being added to the page load time — almost 1.5 seconds! Investigating Excess Assets So far, we’ve identified several issues. 
We’ve determined that the initial page load and several static assets suffer from a slow server response time. We’ve also discovered that there are quite a few different scripts and styles being loaded. Both of these issues combined are exacerbating each other, leading to long page load times. While those issues are the most problematic for this particular example, there are still a few more things we’ll want to check on before we can say that the problems have been thoroughly investigated. Locating Excessive Image Assets One of the most common things that people run across when facing slow page load times is un-optimized images. Since WordPress is so approachable, non-technical users tend to upload images that are significantly larger than necessary. By looking at the waterfall chart, we can see all of the resources being loaded. If we sort by images, we can quickly get a look at what images are retrieved and the timings associated with them. In this example, the site owner has done a pretty good job of keeping images small, but with one exception – one of his product images is way too large in both file size and dimensions. Clocking in at 835 KB for a 2160×2160 image, it can easily be reduced by over 50% with just some simple resizing and resampling — all without any perceived quality loss. Identifying Unnecessary Fonts When looking through the resources that are being loaded on the site, another thing that I noticed was that 10 of the requests are for font files. Although these requests are processed quickly, they’re still extra requests that the browser needs to send and wait for a response. If there are any unused or redundant font files being loaded, it’s always a good idea to remove them. Where to Go from Here Analyzing slow page load times on a WordPress site takes a fair bit of knowledge and time to look into, but it’s well worth the effort. While we couldn’t possibly go over every aspect of investigating the various items that contribute to sub-par page load times, I hope that I’ve equipped you with enough information about why your pages might be loading slowly. Stay tuned! In part 2 of this segment, we’ll take a deep dive into this site, resolve the issues, and provide more information on how we fixed them!
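A practical footnote on the oversized product image mentioned above: WordPress can handle the resizing and resampling for you via its built-in image editor. Here’s a minimal sketch; the file path and the 1000px target are made-up example values:

```php
<?php
/**
 * Resize an oversized upload with WordPress's built-in image editor.
 * The path, target dimensions, and quality are example values for illustration.
 */
$editor = wp_get_image_editor( '/path/to/uploads/2020/11/product.jpg' );

if ( ! is_wp_error( $editor ) ) {
	$editor->resize( 1000, 1000, false ); // Fit within 1000x1000, keeping the aspect ratio.
	$editor->set_quality( 82 );           // Recompress while keeping visual quality.
	$editor->save( '/path/to/uploads/2020/11/product-resized.jpg' );
}
```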
Catch up on the conversation. This piece comes as a follow-up to our CEO Joshua Strebel’s recent piece on why you should never pay your WordPress host for pageviews.

Managed WordPress hosting has exploded across the hosting industry since Pagely created it way back in 2006. Overall, the level of innovation that the mainstream has brought to WordPress is amazing. Unfortunately, this sometimes comes with negative side-effects. As investors demand higher profits, many of these managed WordPress hosts are looking for ways to gain a quick and easy boost to their margins. Like we mentioned in our definitive guide to PHP workers, there are a few different metrics that are often misinterpreted, whether intentionally or not. Today, we’re going to talk about pageviews and their role in the managed WordPress hosting world.

What Counts as a Pageview?

Many hosts cite pageviews as a billing metric with vague term descriptions, if any exist at all. Even with clarified definitions, pageviews tend to mutate on a host-by-host basis with a dizzying array of buzzwords. Although individual managed WordPress hosts vary on what they consider a “pageview”, most define it like this: pageviews are the number of unique IP addresses that access pages on your WordPress site per day, excluding known bots. (Yeah, I know. The term is more like “the number of unique request sources per day, based on predefined catch-all filters.” But I wasn’t the one that made this up, so bear with me here.)

Pageviews as a Billing Metric

Before we dig any deeper into the details, let’s think about the definition that I mentioned. Considering that narrative, which of these sites would be more expensive to host?

- A directory website that houses billions of pages, showing 1,000 entries at a time.
- A personal blog whose traffic mostly goes to single posts.
- A documentation site where visitors typically access multiple pages per day.
- A simple informational website for a local restaurant.
- An online shop that sells small-batch artisanal snake oil.
- An API that uses headless WordPress to dynamically fetch content.

If you guessed that they represent the same metrics in the eyes of pageview-based billing, your logic prevails! Assuming that they all experience the same number of unique visitors, they’re all treated the same way, regardless of how resource-intensive the site is or how many pages each visitor accesses. Of course, it would be silly to treat all of these sites the same way. Some pages might have large amounts of fully dynamic content, while others might be eternally served from cache. Using an umbrella term like pageviews as a billing metric causes several different issues, based on the site being hosted.

Setting Realistic Expectations

The first issue that pageview-based billing causes is an unrealistic expectation for low-performance, unoptimized sites. If you’re not familiar with how WordPress sites perform on the server side, you might think that these limits matter exclusively and use them to determine what capabilities you need from a hosting provider. If you’re evaluating your hosting options purely based on a pageview limit, you’re ignoring critical factors that can dramatically impact your WordPress site.

Performance-Based Punishment

What if you’ve put hard work into optimizing your site? Well, now you’re wasting money on a billing metric that doesn’t matter. If your site is highly cacheable, you’re feeling the impacts even further. When serving content from cache, it’s practically free from a hosting perspective.
You’re being nickel-and-dimed for something that is entirely outside of your control. Knowledge is Power As you scale, knowing what you’re paying for can quickly become a critical part of maintaining a healthy, cost-effective site. Coupled with the fact that the same people who are charging for pageviews tend to hide their underlying infrastructure behind the curtains, there’s no way to verify or manipulate what’s being counted. While many claim that they exclude the known IP addresses and user agents of known bots from counting towards your monthly pageview bill, what’s their motivation for even bothering? After all, more pageviews mean more money in their pockets. Pageviews Are Still Helpful Many of our prospective customers ask, “How many pageviews can ‘x’ plan handle?” Like we mentioned earlier, the impact of each pageview dramatically varies based on the type of content viewed and how your site was developed. Every WordPress site is different. It would be impossible to accurately estimate the ideal hosting environment without a complete understanding of what your site’s usage looks like. Although pageviews are a faulty metric as far as billing is concerned, they can still be useful as part of a larger dataset. Whether you’re scaling, optimizing for better performance, or shopping for a new hosting provider, pageview data plays a role in getting a full picture of your site. For example, if you were running benchmarks to determine your server’s capacity, pageview data could be used to identify what a typical day looks like. Let’s say your site looks something like this: 10% of traffic is on the home page. 30% of traffic is to various landing pages. 30% of traffic is to different blog posts. 20% of traffic is to your product’s pricing page. 10% of traffic is on account-related pages. Using that traffic data, you could then estimate what types of content have the most significant impact on server resources. You may find that some pages underperform and could use some optimization, while other highly-cacheable content, like blog posts or landing pages, remain essentially free. If 90% of your traffic goes to fully-cached content, increases in traffic have a trivial impact on your bottom line, if any. Closing Remarks Operating a WordPress site at scale is all about a delicate balance of performance and cost. At Pagely, we partner with our clients to help them achieve those goals. While shopping for a new hosting provider, be on the lookout for hosts that impose false barriers, such as PHP worker limits and pageview counts. Those vague limitations are a red flag that they’re going to try to upsell you later. (These limitations are also a sign that they’re trying to sell you on shared hosting resources too, but that’s a topic for another discussion entirely.) We hope that you’ve learned more about the impact of pageviews (or lack thereof) and why it’s not an appropriate billing metric. Do you have any experiences with pageview-based billing that you’d like to share? Is there something we missed? Let us know in the comments below.
At Pagely, we pride ourselves on providing the best possible solutions for our customers. Sometimes, that requires dedicating a significant amount of research and development into truly discovering what works best. Whenever possible, we always want to take a data-backed approach. That’s why we’ve run several benchmarks to determine how PHP workers impact a site’s performance. By running these benchmarks, we can ensure that our customers can get the maximum amount of value out of their hosting environment. So without further ado, let’s take a look at our tests and what we found!

Our Testing Environment

First, let’s talk about the environment that we tested on. To ensure that our tests can be replicated and improved upon, we’ve performed them on a standard Amazon EC2 instance that anyone can spin up. In addition, we’ve also made the entirety of our benchmarking code available on GitHub.

Our PHP Test Scripts

As anyone who’s done any benchmarking knows, a primary concern is eliminating any potential noise within the tests. An excellent way of testing PHP worker behavior is to have the worker perform tasks that are heavy on CPU, without impacting things like disk I/O or network latency. To accurately benchmark PHP worker behavior, our test code is a simple PHP script that encrypts and decrypts a string several thousand times to perform CPU-heavy activity. Here’s the gist of how it works:

```php
// Illustrative setup (the full script is available in our GitHub repo).
$string    = str_repeat( 'benchmark', 10 );
$method    = 'aes-256-cbc';
$key       = openssl_random_pseudo_bytes( 32 );
$iv        = openssl_random_pseudo_bytes( openssl_cipher_iv_length( $method ) );
$times_run = 0;

// Encrypt and decrypt a string 50,000 times.
while ( $times_run < 50000 ) {
	$encrypted = openssl_encrypt( $string, $method, $key, null, $iv );
	$decrypted = openssl_decrypt( $encrypted, $method, $key, null, $iv );
	$times_run++;
}
```

Our Benchmarking Environment

On the client side, we wanted to ensure that we’ve eliminated noise from any outbound requests, while performing tests that accurately reflect different CPU core counts. For this, we wrote up a shell script that runs the k6 benchmarking tool against various Docker container configurations that reside on the server. By using Docker, we’re able to run a single script that utilizes different numbers of CPU cores and runs benchmarks against each separate core count. Just like our PHP code, it also resides within GitHub for you to take a look through. Here’s how we’re doing it:

```bash
#!/bin/bash

mkdir -p reports

# "num-cores cpuset"
for row in "1 0" "2 0-1"
do
	set -- $row
	cores=$1
	cpuset=$2

	for worker in {1,2,8,50,100,200}
	do
		pworker=$(printf "%03d" $worker)
		pcores=$(printf "%02d" $cores)
		file=reports/${pcores}core-${pworker}worker.txt
		json=reports/${pcores}core-${pworker}worker.json

		if [[ ! -f $file ]]
		then
			./run-php.sh $cpuset $worker $file
			./run-bench.sh 3 $worker $json >> $file
		fi
	done
done
```

Our Results

When processing our results, we need to eliminate things like network latency. Thanks to the k6 benchmarking tool, we were able to do that with ease by looking at http_req_waiting times and using the following statistics:

- Average response time
- Minimum response time
- Median response time
- Maximum response time
- p90 (maximum response time for the fastest 90% of users)
- p95 (maximum response time for the fastest 95% of users)
- Requests per second

Requests Per Second

Within our tests, we’re sending a number of virtual users that matches the number of cores being utilized by our testing environment. For example, when running a test against an environment with 4 CPU cores, our benchmark uses 4 simultaneous virtual users. These users perform a request, wait for a response, then immediately send another request. They’ll continue doing this for our testing duration of 60 seconds.
Here’s a graph of the results:

As you can see, a balance of PHP workers and CPU cores is what matters most. Without enough dedicated CPU workers to handle the influx of traffic, requests can’t be handled efficiently. In contrast, after a certain point, adding more PHP workers has a minimal impact on the number of requests per second that the server can handle. Once at critical mass, there’s even a negative impact on the site’s performance as too many PHP workers are added.

Looking even further at the data we’ve collected, we can quickly determine that the optimal number of PHP workers varies based on the number of CPU cores being utilized. Of course, on different workloads (different levels of code optimization) or digging deeper into different PHP worker counts, we may have found a slightly more optimal worker pool. Still, overall, it’s a pretty good starting point.

Looking deeper into how the number of PHP workers impacts the number of requests per second that our test environment could handle, we see some interesting data. For example, let’s look at our 32 core test:

Workers    Requests Per Second
1          4.99999187
2          10.86665914
8          41.84993377
50         160.8998032
100        162.8164824
200        160.1164952
400        159.6665389

As you can see here, the number of requests per second that can be handled steadily increases as we increase the PHP workers, until it reaches 50 workers. At 100 PHP workers, we see a slight increase; then, at 200 and 400 workers, we see performance declining. This is a common trend amongst all of our tests. After a certain point, adding more PHP workers will decrease performance overall. While the impact is at rather low levels, this can be majorly problematic for sites that don’t have a dedicated resource pool. Many shared/cloud WordPress hosts have thousands of people all on the same server. Since the number of PHP workers will need to be significant (and they’re always screaming about how many PHP workers they have), every site on the server is negatively impacted by the worker pool as a whole.

Measuring Response Times

While the overall capacity of requests per second is a reasonable determination of how many users a site can support simultaneously, it’s not the only metric at play here. In our tests, we also measured response times, based on how many PHP workers are active for environments with different core counts. Here’s what our chart for a 32-core environment looks like:

As you see here, there is a measurable increase in response times as the number of PHP workers increases. While our Requests Per Second statistics only change slightly between 50 PHP workers and 400 PHP workers, request time shows an entirely different story. Let’s take a look at the raw data to look a bit closer at what’s happening:

Worker Count    Average (ms)    Minimum (ms)    Maximum (ms)    Requests Per Second
1               199.39          198.50          208.07          4.99
2               183.67          177.27          188.35          10.86
8               190.71          175.81          197.59          41.84
50              309.70          189.69          1034.55         160.89
100             610.20          191.03          2903.50         162.81
200             1230.40         194.00          3937.65         160.116
400             2422.41         197.82          10478.34        159.66

When running a static count of 50 PHP workers across 32 cores, we see that we can make 160 requests per second, with an average response time of 309ms. When we increase that worker pool to 100 PHP workers, we see that we can handle an additional 2 requests per second, but our average response time increases to 610ms.
That’s a 97% increase in the time it takes PHP to handle the request for only being able to handle an additional 1.25% more users! As we increase the worker count even further, we see that while the number of requests per second stays roughly the same between 50 and 200 PHP workers, our average response time increased by almost 300%. From this data, we also see that while a lower number of PHP workers in the pool can handle less traffic, our response times do become lower. At 8 PHP workers, we’re only able to process roughly 41 requests per second, but our site became much faster — responding in around 190ms. Putting It All Together After running our various benchmarks, we see plenty of interesting data that everyone can use to optimize their WordPress sites better. Within our benchmarks, we’ve proven that the number of available PHP workers on your server does indeed matter quite a bit — just not how you might have imagined. Now that you’ve seen the data, we’d also like to stress the importance of running an optimal number of PHP workers that is appropriate for your WordPress site, rather than just guessing or taking a cookie-cutter approach. More PHP workers do not necessarily mean increased performance. If not tuned to the exact specifications of your site, changing your PHP worker count can be catastrophic to your site. If you’re attempting to tune your site’s performance to handle more traffic or get faster page load times, you’ll want to do so in a way that’s specific to your website and the server on which it resides. Generally speaking, if you want to increase the number of users that your site can handle simultaneously, you’ll want to increase your PHP worker count. If you want your site to be faster, decrease the worker count. If you want a mix of the two, you can always use dynamic limits with a minimum and maximum worker count. Having a dynamic worker count allows you to handle spikes in traffic while offering faster page load times when you’re receiving an average amount of traffic. Want to share your thoughts? Have an idea for performing even better tests? Want to see something else benchmarked? Let us know in the comments below.