Posts

Laravel Learning Resources

Inspired by a comment in the Mastering Laravel community, here is a list of resources that have really helped me absorb the ethos of the “Laravel way” of doing things.

Podcasts

The Laravel Podcast has over 130 episodes to date, and each season usually has a particular focus. Season 3 interviews well-known and lesser-known developers in the community, season 4 covers individual concepts of the ecosystem, season 5 covers major packages, and season 6 covers more recent developments with Laravel the company.

Especially in the first season of the BaseCode podcast, Jason McCreary and Jess Archer discuss general principles not specific to Laravel. This season is basically the podcast form of the book by the same name.

For fortnightly news and events, the Laravel News Podcast fits the bill. Jacob Bennett and Michael Dyrynda discuss new releases, packages, and tutorials. On alternating weeks, they also release new episodes of North Meets South, discussing challenges and technical approaches in more detail.

In No Compromises, Joel Clermont and Aaron Saray (self-described as “two old web developers who have seen some things”) discuss a wide range of practical tips for Laravel development, with a special focus on testing.

Mostly Technical is a longer, sometimes rambly podcast with Ian Landsman (the “godfather” of Laravel) and Aaron Francis. Despite the name, episodes tend to be less technical than the others above, but they frequently discuss the Laravel community.

Books

The BaseCode Field Guide is a “field guide to writing code that lasts.” It covers a bunch of general recommendations for making your code easier to read and debug.

Laravel Queues in Action provides an in-depth look at Laravel’s queue system and how to optimize it for your particular needs.

Other Resources

The Laravel docs: this is the best documentation I have seen anywhere, and I credit it for a huge amount of my knowledge about the framework and ecosystem. I’ve read the entire site front to back three times (on different versions) and learned so much every time. While I fully acknowledge that everyone learns differently and reading is not for everyone, I still highly recommend that every Laravel developer read them front to back. You will learn about all the available features, see plenty of applicable code examples, and pick up the “feel” along the way.

Spatie open-source packages: you’d be hard-pressed to find higher-quality code written for Laravel than what you’ll see browsing this site or their GitHub profile.

Harris Raftopolous has been writing an excellent series of daily tip articles exploring features of the framework and how they can be used in real-world applications.

Streaming: I tend not to watch developers live-streaming; it’s just not my preferred method of learning. That said, there are a growing number of great developers streaming and explaining code as they write it. Here are just a few:

Profiling Laravel Artisan Commands Inside a Docker Container

Laravel Herd Pro has a really slick SPX profiler integration. Unfortunately, at work I’m on an Ubuntu machine using a custom Docker container to run our app, so I can’t take advantage of it.

We have one specific artisan command that takes many hours to run, and I’m working on improving its performance.

I wanted to use the SPX profiler to track down what was taking so long, and here’s how I got it to work.

Note: I know this is a bit of an unorthodox approach to using Docker containers, but it works for now…

Step-By-Step

  1. Run docker compose exec www bash to open a terminal inside the container
  2. Install SPX following the directions (note: skip the sudo since we’re already running as the root user inside the container)
  3. Edit /usr/local/etc/php/php.ini and add these lines to enable SPX
    extension = spx.so
    spx.http_enabled = 1
    spx.http_key = "dev"
    spx.http_ip_whitelist = "*"
    spx.http_trusted_proxies = "*"
  4. Run pkill -o -USR2 php-fpm inside the container to restart the PHP service (thanks for the tip)
  5. In a browser, visit http://myapp.test/?SPX_KEY=dev&SPX_UI_URI=/ to visit the SPX Control Panel
  6. Check the “Enabled” box
  7. Open a new tab and visit a page or two on the app; refresh the SPX Control Panel and you should see the requests show up at the bottom
  8. Run the artisan command, prefixing it with a couple of env settings: SPX_ENABLED=1 SPX_REPORT=full php artisan list
  9. Refresh the SPX Control Panel and inspect the results

Fixing Laravel Debugbar Exception: File does not exist

I spent way too long this morning debugging an issue with Debugbar for Laravel.

I had recently deleted the storage/debugbar directory while troubleshooting a different issue. Even though I had recreated the directory and all of the filesystem permissions seemed to be correct, I was still getting errors like this in my logs every time I loaded a page, and the debug bar would not show up at all:

local.ERROR: Debugbar exception: File does not exist at path /srv/storage/debugbar/01JP802XSAAA12MP4PZMARSPZD.json

Solution

Eventually I was able to solve it by following these steps:

  1. Run php artisan vendor:publish and choose Barryvdh\Debugbar\ServiceProvider to publish the config/debugbar.php file (if not already published)
  2. Modify the storage.driver option from 'file' to 'redis' (see the config sketch after this list)
  3. Load a page and see the debugbar show up correctly
  4. Reset the driver to 'file'
  5. Reload the page and see the debugbar show up correctly
  6. Delete the config/debugbar.php file (if no other customizations are needed)
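
For reference, here’s a rough sketch of the relevant storage section of the published config/debugbar.php (other keys in your file may differ):

// config/debugbar.php
'storage' => [
    'enabled' => true,
    // Step 2: temporarily switch 'file' to 'redis', load a page,
    // then switch back to 'file' (step 4)
    'driver' => 'file', // or 'redis', 'pdo', etc.
    'path' => storage_path('debugbar'), // used by the 'file' driver
],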

Setting and Testing Cookies in a Livewire Component

I had a need today for a Livewire component to set a cookie, and wanted to test that it was actually set correctly.

Livewire includes support for reading cookies, but not for writing them.

And unfortunately, the redirect helper method doesn’t include any way to set a cookie.

Thankfully, Laravel provides a Cookie::queue() method that will attach the cookie to the next outgoing response, and since Livewire method calls result in an HTTP response (unless you use the renderless attribute), the framework takes care of attaching the cookie for you:

Cookie::queue('name', 'value', $minutes);
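
For context, here’s a minimal sketch of how that might look in a component action (the component, property, and cookie names are made up for illustration):

use Illuminate\Support\Facades\Cookie;
use Livewire\Component;

class PreferenceForm extends Component
{
    public string $theme = 'light';

    public function save(): void
    {
        // Queue the cookie; the response to this Livewire method call
        // will carry the Set-Cookie header
        Cookie::queue('theme', $this->theme, 60 * 24 * 365);
    }

    public function render()
    {
        return view('livewire.preference-form');
    }
}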

However, I found it counterintuitive to test this behavior.

There is an assertCookie() method available when testing the component, but it always fails because we’re testing a Livewire component, not a request, and so the framework doesn’t attach the queued cookie(s).

My solution: use Cookie::queued() to retrieve the queued cookie, and then run assertions against that:
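
Here’s a sketch of what that test can look like (again, the component and cookie names are illustrative):

use Illuminate\Support\Facades\Cookie;
use Livewire\Livewire;

// In a feature test (PHPUnit)
public function test_saving_preferences_queues_the_theme_cookie(): void
{
    Livewire::test(PreferenceForm::class)
        ->set('theme', 'dark')
        ->call('save');

    // The cookie isn't attached to a response here, but it is queued
    $cookie = Cookie::queued('theme');

    $this->assertNotNull($cookie);
    $this->assertSame('dark', $cookie->getValue());
}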

Laravel Queues Cancel Batch Package

Ever needed to cancel a specific batch of queued jobs? Neither the framework nor Laravel Horizon provide an easy way to do this.

Introducing the macbookandrew/laravel-queue-cancel-batch package: you can run php artisan queue:cancel-batch and it will ask you which of the current batches you wish to cancel.
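
For context, the framework does let you cancel a batch once you know its ID; here’s a rough sketch of the kind of call the command automates (not necessarily the package’s exact implementation):

use Illuminate\Support\Facades\Bus;

// Assuming you already looked up the batch ID (e.g. in the job_batches table)
$batch = Bus::findBatch($batchId);

if ($batch !== null && ! $batch->cancelled()) {
    $batch->cancel();
}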

Find more here.

Laravel 11, Pennant, and Conditional Scheduled Jobs

TL;DR: pass a function instead of Feature::active(…) to your console jobs’ ->when(…) method.

I recently upgraded an app from Laravel 10 to Laravel 11 (I know, I know…I’m a few months behind).

This app was using Laravel Pennant to conditionally register some jobs, based on feature flags:

Schedule::job(GetNewOrders::class)
    ->when(Feature::active(GetOrders::class))
    ->everyFiveMinutes();

Laravel Shift moved these jobs from app/Console/Kernel.php to routes/console.php as expected for the new Laravel 11 structure.

However, for every test that I ran, I was getting these errors:

Base table or view not found: 1146 Table 'test_database.features' doesn't exist

I spent a bit of time troubleshooting and verifying that the migration existed, the database schema was squashed, etc. I was expecting it to fail during the setUpTraits() step of booting the framework, but it actually failed while booting the application. I stepped through more of the setup steps (thanks, xdebug!) and realized that it failed while discovering commands.

That prompted me to comment out the ->when(Feature::active(…)) lines, and voilà! My tests suddenly passed.

The when() method accepts either a boolean or a Closure, so I tried wrapping the feature flag check in a closure, and my tests passed with the condition still in place:

Schedule::job(GetNewOrders::class)
    ->when(fn () => Feature::active(GetOrders::class))
    ->everyFiveMinutes();

It appears that if your scheduled job ->when(…) conditions depend on the database, you’ll want to wrap them in a function so they aren’t evaluated until they’re actually needed, after the database has already been set up.

Laravel Herd Pro MySQL and WordPress Database Connection Error

If you are running a WordPress site with Laravel Herd Pro and its MySQL service, you may run into WordPress’s “Error establishing a database connection” error.

To fix this, change the database host setting in wp-config.php from define('DB_HOST', 'localhost'); to define('DB_HOST', '127.0.0.1'); and that should do the trick.
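
In other words, the relevant change in wp-config.php looks like this (my understanding is that 'localhost' makes PHP try the local MySQL socket, while an IP address forces a TCP connection):

// wp-config.php
// Before: PHP tries to connect over the local MySQL socket
// define('DB_HOST', 'localhost');

// After: forces a TCP connection to Herd Pro's MySQL
define('DB_HOST', '127.0.0.1');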

This can also fix errors from other software that attempts to connect using a socket (e.g., “Can’t connect to local MySQL server through socket ‘/tmp/mysql.sock’”).

Migrating from Homebrew MySQL to Laravel Herd Pro Services

Laravel Herd is an amazing application designed to make it as easy as possible to get started with PHP development.

I have been using Homebrew to run MySQL, Redis, Meilisearch, and more, but since we recently got Herd Pro, I figured it made sense to consolidate.

Here are a few other articles on how to migrate databases to Herd Pro:

My method combines those approaches: copy the data files from the Homebrew MySQL data directory to Herd Pro’s to save the time it would take to dump and import.

Note: this only works if your Homebrew MySQL and Herd Pro MySQL are on the same minor version (8.0.1 to 8.0.3 would work; 8.0.x to 8.4.x would not).

  1. Stop the Homebrew MySQL service, if you haven’t already: brew services stop mysql (or the versioned mysql@… formula if you’ve updated in the past few months)
  2. Stop the Herd MySQL service, if you haven’t already, using the Herd services UI
  3. Find the Herd data directory: right-click on the MySQL service and choose “Open data directory”
  4. Copy or move the files to retain a backup
  5. Find the homebrew data directory: in a terminal, run open $(brew --prefix)/var/mysql to open the directory in Finder
  6. Copy the files to the Herd data directory
  7. Restart Herd
  8. After you’ve confirmed everything is fine, maybe delete the Homebrew MySQL data directory and brew uninstall mysql (or the versioned formula)

Laracon 2024 Recap

I had the privilege of going to Laracon this past week and thoroughly enjoyed both the talks and hanging out with people I previously knew only online.

There are enough other articles about the announcements, so I won’t really recap them too much, but wanted to note some of my thoughts and reactions for each.

Individual Posts

Overall

Overall, I loved the chance to hang out with other Laravel developers. I got to meet several friends I previously knew only online, as well as a bunch of new people.

Laracon 2024: Jess Archer: Analyzing Analytical Databases

Jess Archer taught attendees about analytical databases and how they compare to other more traditional databases.

I think this is the talk that taught me the most of the entire conference.

Definitions

  • OLTP (Online Transaction Processing): MySQL, PostgreSQL, SQLite, etc.
  • OLAP (Online Analytical Processing): SingleStore, ClickHouse, etc.

Her preference is ClickHouse; it’s free and open-source, and has excellent documentation and performance.

Comparisons

OLTP databases tend to be row-oriented: each row’s data is stored together on disk alongside its index.

OLAP databases tend to be column-oriented, storing each column of data together, which makes aggregate queries like AVG(), SUM(), etc. much more performant, as the engine only has to read a single column’s file instead of scanning the entire table like an OLTP database would.

She had downloaded a dump (22GB compressed) of all Stack Overflow posts and imported it into both a MySQL database and a ClickHouse database to run queries live on stage.

It could take 5–6 seconds to compute an average view count using MySQL, versus 27.5ms using ClickHouse.

What’s the catch?

At least in ClickHouse, the ID field is not unique, meaning you could have multiple rows with the same ID, and selecting a row by ID requires a full table scan (using LIMIT 1 can help by “bailing out” once a match has been found).

Ordering: the table’s sort order should be designed to match what typical queries need, to prevent extra reads from disk.

Inserts: bulk inserts are optimal, rather than single-row inserts

  • Each individual insert creates a “part” or folder on disk
  • The database engine will eventually merge and compact them (see the MergeTree engine)
  • The async_insert feature can also help

Updates: ideally, data is immutable so the engine doesn’t have to rewrite an entire file on disk

Deletes: can be optimized and automated; there’s a marker that indicates a row has been deleted, and at some point the engine will compact the files and remove those

Other Notes about ClickHouse

The LowCardinality column type: similar to an enum, but better; it creates a dictionary of values.

The ReplacingMergeTree engine: inserting and then updating an entry results in two entries on disk until the engine compacts the files; this engine provides a FINAL keyword that resolves the duplicates automatically during queries.

ClickHouse can also easily fill gaps in time-series data, whereas this would be more complicated with other database engines.

Packages

She mentioned these packages for using ClickHouse in a Laravel application:

A week later, I’m still thinking about this talk and how we could use ClickHouse to provide better features and performance for some of our clients.