1 Billion Rows Challenge in PHP


You’ve probably heard of the 1BRC (1 Billion Rows Challenge).

In a nutshell, it’s a challenge (originally aimed at Java programmers) to write code that can solve a data calculation and aggregation problem on a file with 1 billion rows in the fastest way possible.

While we can all agree that it is not indicative of the quality of a language or of its typical usage scenarios, it is an activity that leads to a better understanding of the programming language being used and to the discovery of more powerful techniques.

The challenge was very successful and many have tried to transfer the concept to other programming languages.

Today, we will try to tackle the challenge in the language that, at first glance, seems the least suited to it: PHP.

Let’s start by reading the task specifications:

The file with the data (measurements.txt) contains one billion rows. Each row represents the temperature recorded in a weather station and is composed of two fields: the station name and the detected temperature, separated by the ; character.

Example:

Hamburg;12.0
Bulawayo;8.9
Palembang;38.8
St. John's;15.2

The task is to read all the lines and calculate the minimum, average and maximum temperatures for each weather station, rounded to one decimal place.

Finally, you have to display this data on the screen in alphabetical order, in the format {station=min/mean/max, ...}.

Example:

{Abha=-23.0/18.0/59.2, Abidjan=-16.2/26.0/67.3, Abéché=-10.0/29.4/69.0, Accra=-10.1/26.4/66.4, Addis Ababa=-23.7/16.0/67.0, Adelaide=-27.8/17.3/58.5, ...}

The winning script took 1.535 seconds to finish; the top 5 entries all finished in under 2 seconds.

Getting to this point in PHP seems daunting, but let’s see how far we can go!

DISCLAIMER: I will not be using advanced debugging and profiling tools to determine where to optimize during testing; that’s not the purpose of this article.

Initial Specifications

We need to find a way to measure the performance of a PHP script. This is where the first problems come in:

  1. How can we measure the execution time?
  2. How much data should be passed to the script to ensure reliable results?
  3. Where should we run the script? Should we do it locally or elsewhere (a dedicated server, a VM, etc…)?

Measuring the execution time

All Linux installations should have the time command; placing it before a command makes the operating system report that command’s execution time.
This would seem to be the ideal solution, but there is a problem: it is not very precise, especially if we profile only a single execution of the PHP script.
A better approach is to use the perf stat command and pass it the -r option, followed by the number of times to run the command being profiled.
Example: perf stat -r 10 my_command

How much data should be passed to the script

Running the PHP script on a billion rows could take a long time; we can use a small set of 1 million rows to start doing some tests. Then we can gradually increase the number of rows until we reach 1 billion.

Where to run the script

Here, just like before, we can take a step-by-step approach. First we can run the benchmark locally on our computer, to see how the different versions compare.
There’s one important thing to remember: close all non-essential programs and make sure that the tests are always run under the same conditions.
The version of PHP locally installed is 8.3.14.

Then, when we have a sufficiently optimized PHP script, we can move on to a dedicated server or a virtual machine.
Since we will only need dedicated hardware for a short time, the best option is a VM or a cloud instance.

Writing the code

First attempt

Let’s start by writing a solution in PHP in the simplest way we can think of:



$fp = fopen('data/measurements.txt', 'r');

$stations = [];
while (($line = fgets($fp, 4096)) !== false) {
    [$station, $temp] = explode(';', $line);

    if (!isset($stations[$station])) {
        $stations[$station] = [];
    }

    $stations[$station][] = $temp;
}
fclose($fp);

$results = [];
foreach ($stations as $key => $value) {
    $results[$key] = [];
    $results[$key][0] = min($value);
    $results[$key][1] = max($value);
    $results[$key][2] = array_sum($value);
    $results[$key][3] = count($value);
}

ksort($results);

echo '{', PHP_EOL;
foreach ($results as $name => &$temps) {
    echo "t", $name, '=', $temps[0], '/', number_format($temps[2]/$temps[3], 1), '/', $temps[1], ',', PHP_EOL;
}
echo '}', PHP_EOL;

We open the file data/measurements.txt for reading, populate an array with this data, and finally calculate the minimum, maximum and average temperature for each weather station.
We sort the results alphabetically and print them on the screen.

We have already created the file measurements.txt with 1 million lines using the semi-official tool create_measurements.py:

python3 create_measurements.py 1_000_000

We can then launch the first PHP script using the command:

perf stat -o results/calculate_1.log -r 10 -d php src/calculate_1.php

The -o option specifies where to save the log with the execution results.
To have reliable data we run the script 10 times (with the -r option).

It took 1.066 seconds on the laptop I use for writing (Intel N5100 CPU with 8GB of RAM).
We don’t have enough information to understand whether it is a lot or a little.

Second attempt: trim()

From the screen output we see some strange line breaks that shouldn’t be there: fgets() returns each line including its trailing newline. Let’s try to apply a trim() to the temperatures:

$stations[$station][] = trim($temp);

We rerun the script and it takes 1.048 seconds. The improvement is minimal, but the screen output no longer shows those extra line breaks.

Third attempt: (float)

The “solution” of using trim() does not seem to be the best; a quick analysis of the code shows that PHP stores the temperatures in the array as strings and not as floats. Let’s try casting to float instead:

$stations[$station][] = (float) $temp;

The execution time is 0.62 seconds, a big improvement over the previous attempt!

Perhaps we can also increase the number of rows from 1 million to 10 million to better appreciate the timing variations.
Let’s try running the same script on 10 million rows:

PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 20480 bytes) in src/calculate_3.php on line 13

PHP’s current memory limit is 128MB; the script tries to create an array with all the values of the file measurements.txt, exhausting the 128MB.
Now we could raise that limit, or rewrite the code that takes care of aggregating and processing the data.
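For the record, raising the limit for a single run wouldn’t even require editing php.ini; the PHP CLI accepts a -d override (shown here with a hypothetical 512MB value):

php -d memory_limit=512M src/calculate_3.php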
Instead, let’s rewrite the code so that it avoids loading the entire input file into memory.

Fourth attempt: while

Let’s try to read the data and process it in a single while loop:

$results = [];
while (($line = fgets($fp, 4096)) !== false) {
    [$station, $temp] = explode(';', $line);
    $temp = (float) $temp;

    if (!isset($results[$station])) {
        $results[$station] = [$temp, $temp, $temp, 1];
    } else {
        $results[$station][0] = $results[$station][0] < $temp ? $results[$station][0] : $temp;
        $results[$station][1] = $results[$station][1] > $temp ? $results[$station][1] : $temp;
        $results[$station][2] += $temp;
        $results[$station][3]++;
    }
}

This way we don’t saturate the memory with the contents of the measurements.txt file.

Execution time on 10 million rows: 6.14 seconds
This time doesn’t tell us anything about a possible improvement/deterioration of performance; let’s try with the 1 million row dataset.

Execution time on 1 million rows: 0.76 seconds
The time got worse (as expected); we traded the lower RAM usage for a higher execution time.

Fifth attempt: min()/max()

This time we try to use the native PHP functions min() and max() to save the minimum and maximum value of each station:

$results[$station][0] = min($results[$station][0], $temp);
$results[$station][1] = max($results[$station][1], $temp);

Execution time on 10 million rows: 6.01 seconds

It’s a small improvement, but an improvement nonetheless.

Sixth attempt: if

Perhaps we can simplify those two lines of code even further, using two simple if statements:

if ($temp < $results[$station][0]) {
    $results[$station][0] = $temp;
}

if ($temp > $results[$station][1]) {
    $results[$station][1] = $temp;
}

Execution time on 10 million rows: 4.90 seconds

Here we are, another important step in the right direction.

Seventh attempt: !isset()

The situation inside the while loop is this:

if (!isset($results[$station])) {
    // ...
} else {
    // ...
}

We are used to thinking of that !isset() as if it were a single operation; in reality there are two: the negation (!) and the isset() call.
Let’s try to invert the two branches of the if:

if (isset($results[$station])) {
    if ($temp < $results[$station][0]) {
        $results[$station][0] = $temp;
    }

    if ($temp > $results[$station][1]) {
        $results[$station][1] = $temp;
    }

    $results[$station][2] += $temp;
    $results[$station][3]++;
} else {
    $results[$station] = [$temp, $temp, $temp, 1];
}

Execution time on 10 million rows: 4.90 seconds

It’s pretty much the same time as the previous attempt…

Eighth attempt: reference &

Another attempt we can make inside the if is to bind a reference to the array element of the current weather station, instead of resolving $results[$station] every time:

$resultStation = &$results[$station];
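With this change, the body of the loop becomes something like the following (a sketch based on the seventh attempt’s structure; the final version is in the repository):

while (($line = fgets($fp, 4096)) !== false) {
    [$station, $temp] = explode(';', $line);
    $temp = (float) $temp;

    if (isset($results[$station])) {
        // Bind the reference once per line instead of re-resolving $results[$station]
        $resultStation = &$results[$station];

        if ($temp < $resultStation[0]) {
            $resultStation[0] = $temp;
        }

        if ($temp > $resultStation[1]) {
            $resultStation[1] = $temp;
        }

        $resultStation[2] += $temp;
        $resultStation[3]++;
    } else {
        $results[$station] = [$temp, $temp, $temp, 1];
    }
}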

Execution time on 10 million rows: 4.65 seconds

There is an improvement in timing, so we keep this change.

Ninth attempt: fgets()

Let’s focus for a moment on the while that takes care of looping the entire measurements.txt file:

while (($line = fgets($fp, 4096)) !== false) {

Why is there a 4096 as the second parameter of the fgets() function?
I saw it in the example code on the PHP documentation page, so I just mindlessly copied it into the script. However, I later checked the documentation and found that the second parameter “limits” lines that are too long.
Some comments on the PHP documentation page suggest there may be a performance issue when passing this second parameter (which is optional by the way).
Let’s try removing it and see what happens:

while (($line = fgets($fp)) !== false) {

Execution time on 10 million rows: 4.39 seconds

…this confirms that reading the documentation carefully is always useful.

Tenth attempt: strtok()

The only point in the code that raises some doubts is the explode() used to split the weather station’s name from the temperature.

Let’s try replacing it with strtok() (calling strtok() again with only the delimiter continues tokenizing the same string):

$station = strtok($line, ';');
$temp = (float) strtok(';');

Execution time on 10 million rows: 3.82 seconds

With this latest version of the script, it seems that we have reached the end of the road; there are no obvious points left to optimize.

Test on dedicated hardware

Now that we have our most performant script, we can go ahead and run it on a dedicated machine.

Test on EC2 with 1 billion rows

Let’s try to run this script on hardware similar to the one used in the official Java challenge:

  • CPU: AMD EPYC 7502P 32 cores / 64 threads @ 2.5 GHz
  • Memory: 128 GB ECC DDR4 RAM
  • 2x SAMSUNG MZQL2960HCJR-00A07, 1 TB, Software RAID-1

The EC2 instance that most closely resembles it (and is cheaper) is the m6a.8xlarge with its 32 vCPUs and 128GB of memory. For the hard disk I opted for a 200GB io1 volume (to reach 10,000 IOPS).

I tried to launch the last script on the EC2 instance; this time I ran the script only 2 times.
The result was this:

 Performance counter stats for 'php src/calculate_10.php' (2 runs):

         261924.05 msec task-clock                       #    1.007 CPUs utilized            ( +-  0.72% )
               813      context-switches                 #    3.127 /sec                     ( +- 28.17% )
                 2      cpu-migrations                   #    0.008 /sec                   
              1486      page-faults                      #    5.715 /sec                   
   <not supported>      cycles
   <not supported>      instructions
   <not supported>      branches
   <not supported>      branch-misses
   <not supported>      L1-dcache-loads
   <not supported>      L1-dcache-load-misses
   <not supported>      LLC-loads
   <not supported>      LLC-load-misses

            260.06 +- 1.89 seconds time elapsed  ( +-  0.73% )

260.06 seconds, or 4 minutes and 20 seconds… a truly disastrous result.

In a previous attempt (the fourth) we had to change the way the data was aggregated, due to a PHP out-of-memory error; we had seen a significant performance degradation due to this change.
Since the EC2 instance has a lot of RAM available, let’s try to resume the script from the third attempt by applying the changes made from the fifth attempt onwards and raising the memory_limit in the php.ini file.
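A rough reconstruction of this in-memory variant (my sketch; the actual calculate_in_memory.php is in the repository) looks like this:

$fp = fopen('data/measurements.txt', 'r');

// Collect every temperature per station; this is what needs the huge memory_limit
$stations = [];
while (($line = fgets($fp)) !== false) {
    $station = strtok($line, ';');
    $stations[$station][] = (float) strtok(';');
}
fclose($fp);

// Aggregate once at the end, as in the third attempt
$results = [];
foreach ($stations as $name => $temps) {
    $results[$name] = [min($temps), max($temps), array_sum($temps), count($temps)];
}
ksort($results);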

The results of this test:

 Performance counter stats for 'php src/calculate_in_memory.php' (2 runs):

         259240.23 msec task-clock                       #    0.992 CPUs utilized            ( +-  0.81% )
              1348      context-switches                 #    5.158 /sec                     ( +- 14.73% )
                 5      cpu-migrations                   #    0.019 /sec                     ( +- 10.00% )
            120810      page-faults                      #  462.253 /sec                     ( +-  0.26% )
   <not supported>      cycles
   <not supported>      instructions
   <not supported>      branches
   <not supported>      branch-misses
   <not supported>      L1-dcache-loads
   <not supported>      L1-dcache-load-misses
   <not supported>      LLC-loads
   <not supported>      LLC-load-misses

            261.38 +- 2.10 seconds time elapsed  ( +-  0.80% )

261.38 seconds, one second slower than the previous version.
Probably the difference is irrelevant on such high-performance hardware.

Multi-thread PHP

The only thing left to try is to write a PHP script that takes advantage of the interpreter’s multithreading capabilities.
This task turned out to be quite complex, for two reasons:

  1. The PHP interpreter must be compiled with the ZTS (Zend Thread Safety) option to run code in parallel threads; few Linux distributions ship an interpreter with this feature turned on;
  2. The PHP script needs to be completely rewritten to take advantage of thread concurrency.

For the first point, the only possibility is to compile PHP from source with the ZTS option enabled, and then install the parallel module from PECL.
On Debian, this is possible using the commands:

# Install build tools and libraries needed
sudo apt-get -y install build-essential autoconf libtool bison re2c pkg-config git libxml2-dev libssl-dev

# Clone and build a stripped-down version of PHP with ZTS support
git clone https://github.com/php/php-src.git --branch=PHP-8.4.3 --depth=1
cd php-src/
./buildconf
./configure --prefix=/opt/php8.4-zts --with-config-file-path=/opt/php8.4-zts/etc/php --disable-all --disable-ipv6 --disable-cgi --disable-phpdbg --enable-zts --enable-xml --with-libxml --with-pear --with-openssl
make -j32
./sapi/cli/php -v
sudo make install

# Install `parallel` module from PECL
sudo /opt/php8.4-zts/bin/pecl channel-update pecl.php.net
sudo /opt/php8.4-zts/bin/pecl install parallel
sudo mkdir -p /opt/php8.4-zts/etc/php/conf.d
echo 'extension=parallel.so' | sudo tee -a /opt/php8.4-zts/etc/php/php.ini
echo 'memory_limit=-1' | sudo tee -a /opt/php8.4-zts/etc/php/php.ini

# Verify module installation
/opt/php8.4-zts/bin/php -i | grep parallel

As you can see, we downloaded and compiled the 8.4.3 version of PHP.

The ZTS version of PHP can be run with the command /opt/php8.4-zts/bin/php.

For the second point (rewriting the PHP script for multithreading), you can take inspiration from the PHP documentation and from solutions to the 1BRC challenge available on the Internet. The one I took the most from is this one.
The overhead of multithread management mostly comes from having to scan the measurements.txt file first, to split it into chunks matching the number of cores of the machine the script runs on. Each thread then processes one of these chunks and the partial results are merged into the final result, as sketched below.
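To illustrate the chunking step, here is a hypothetical helper (not the exact code of the solution linked above) that computes byte boundaries aligned to line breaks, so that no line is split between two workers:

// Split $file into $workers byte ranges, each one ending on a line break
function getChunkBoundaries(string $file, int $workers): array
{
    $size = filesize($file);
    $step = intdiv($size, $workers);
    $fp = fopen($file, 'r');

    $boundaries = [0];
    for ($i = 1; $i < $workers; $i++) {
        fseek($fp, $i * $step);
        fgets($fp); // advance to the end of the current line
        $boundaries[] = ftell($fp);
    }
    $boundaries[] = $size;
    fclose($fp);

    // Worker $i processes the bytes from $boundaries[$i] to $boundaries[$i + 1]
    return $boundaries;
}

Each chunk can then be handed to a worker started with \parallel\run(), which returns a Future; calling value() on each Future collects the per-chunk aggregates to merge into the final result.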

The source code is fully available in the GitHub repository.

The results on the EC2 instance are:

 Performance counter stats for '/opt/php8.4-zts/bin/php src/zts/zts_calculate_1.php 32' (10 runs):

         521063.71 msec task-clock                       #   30.537 CPUs utilized            ( +-  0.17% )
              4416      context-switches                 #    8.521 /sec                     ( +- 12.71% )
                25      cpu-migrations                   #    0.048 /sec                     ( +- 27.73% )
             41060      page-faults                      #   79.228 /sec                     ( +-  0.21% )
   <not supported>      cycles
   <not supported>      instructions
   <not supported>      branches
   <not supported>      branch-misses
   <not supported>      L1-dcache-loads
   <not supported>      L1-dcache-load-misses
   <not supported>      LLC-loads
   <not supported>      LLC-load-misses

           17.0636 +- 0.0181 seconds time elapsed  ( +-  0.11% )

17.0636 seconds… an impressive result considering the previous 260 seconds!

Conclusions

For now, we’ll pause here; we’re left with a time of 17.06 seconds for the execution on 1 billion rows.

In the next article, we’ll see another way to face this challenge with PHP.
I’ll leave you with a summary of the test results on the EC2 instance. The first 9 scripts were run on the 1 million and 10 million row datasets. Script 10, the in-memory variant (calculate_in_memory.php) and the ZTS script were run on the 1 billion row dataset.

[Summary table: test results on the EC2 instance]

NOTE: The 1M and 10M line tests were run with PHP 8.4.2; the 1B line tests were run with PHP 8.4.3.

The full code is available on GitHub:
https://github.com/gfabrizi/1brc-php
