Ever needed to cancel a specific batch of queued jobs? Neither the framework nor Laravel Horizon provides an easy way to do this.
Introducing the macbookandrew/laravel-queue-cancel-batch package: you can run php artisan queue:cancel-batch and it will ask you which of the current batches you wish to cancel.
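Under the hood, Laravel's batch API already supports this if you know the batch ID; presumably the package wraps something like the following in a friendly prompt (the ID here is a placeholder):

```php
use Illuminate\Support\Facades\Bus;

// Look up a batch by its ID (from the job_batches table) and cancel it.
// Jobs in a cancelled batch should check $this->batch()->cancelled()
// and return early instead of doing their work.
$batch = Bus::findBatch('placeholder-batch-id');

if ($batch && ! $batch->cancelled()) {
    $batch->cancel();
}
```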
When I upgraded to Laravel 11, Laravel Shift moved my scheduled jobs from app/Console/Kernel.php to routes/console.php, as expected for the new Laravel 11 structure.
However, every test that I ran produced this error:
Base table or view not found: 1146 Table 'test_database.features' doesn't exist
I spent a bit of time troubleshooting and verifying that the migration existed, that the database schema was squashed, etc. I was expecting it to fail during the setUpTraits() step of booting the framework, but it actually failed while booting the application. I stepped through more of the setup steps (thanks, Xdebug!) and realized that it failed while discovering commands.
That prompted me to comment out the ->when(Feature::active(…)) lines, and voilà! My tests suddenly worked!
The when() method accepts either a boolean or a Closure, so I tried wrapping the feature flag in a closure, and my tests still worked:
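Something like this (the command name and feature flag are illustrative):

```php
use Illuminate\Support\Facades\Schedule;
use Laravel\Pennant\Feature;

// Before: Feature::active() hits the database at boot, during
// command discovery, before the test database has been prepared.
// Schedule::command('reports:send')->daily()->when(Feature::active('reports'));

// After: the Closure defers the database query until the scheduler
// actually evaluates the condition.
Schedule::command('reports:send')
    ->daily()
    ->when(fn () => Feature::active('reports'));
```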
It appears that if your scheduled jobs’ ->when(…) conditions depend on the database, you’ll want to wrap them in a Closure so they aren’t evaluated until they’re actually needed, after the database has been set up.
If you are using Laravel Herd Pro with a MySQL database, you may run into the “Error establishing a database connection” error.
To fix this, change your database host setting from define('DB_HOST', 'localhost'); to define('DB_HOST', '127.0.0.1');. MySQL clients treat localhost specially and try to connect through a Unix socket, while 127.0.0.1 forces a TCP connection, which is what Herd’s MySQL listens for.
This can also fix errors from other software that attempts to connect using a socket (e.g., “Can’t connect to local MySQL server through socket ‘/tmp/mysql.sock’”).
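If it’s a WordPress site (those define() calls live in wp-config.php), the change looks like this:

```php
// wp-config.php
// define('DB_HOST', 'localhost');  // the client tries the Unix socket
define('DB_HOST', '127.0.0.1');     // forces a TCP connection
```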
My method is a combination of the two approaches: copying the data files from the Homebrew MySQL directly into Herd Pro, which saves the time a full dump and import would take.
Note: this only works if your Homebrew MySQL and Herd Pro MySQL are on the same minor version (8.0.1 to 8.0.3 would work; 8.0.x to 8.4.x would not).
Stop the Homebrew MySQL service, if you haven’t already: brew services stop mysql (or the versioned formula, e.g. mysql@8.0, if you’ve updated Homebrew in the past few months)
Stop the Herd MySQL service, if you haven’t already, using the Herd services UI
Find the Herd data directory: right-click on the MySQL service and choose “Open data directory”
Copy or move the existing files elsewhere to retain a backup
Find the Homebrew data directory: in a terminal, run open $(brew --prefix)/var/mysql to open the directory in Finder
Copy the files to the Herd data directory
Restart Herd
After you’ve confirmed everything works, you can delete the Homebrew MySQL data directory and run brew uninstall mysql (or the versioned formula, e.g. mysql@8.0)
I had the privilege of going to Laracon this past week and thoroughly enjoyed both the talks and hanging out with people I previously knew only online.
There are enough other articles about the announcements, so I won’t recap them in depth, but I did want to note some of my thoughts and reactions to each talk.
Individual Posts
Overall
Overall, I loved the chance to hang out with other Laravel developers: I finally met several friends I previously knew only online, along with a bunch of new people.
Jess Archer taught attendees about analytical databases and how they compare to more traditional databases.
I think this is the talk I learned the most from during the entire conference.
Definitions
OLTP (Online Transaction Processing): MySQL, PostgreSQL, SQLite, etc.
OLAP (Online Analytical Processing): SingleStore, ClickHouse, etc.
Her preference is ClickHouse; it’s free and open-source, and has excellent documentation and performance.
Comparisons
OLTP databases tend to be row-oriented, storing each row’s data together on disk along with its indexes.
OLAP databases tend to be column-oriented, storing each column of data together; that makes queries like AVG(), SUM(), etc. much more performant, since the engine only has to read a single column’s file instead of scanning every full row the way an OLTP database would.
She had downloaded a dump (22GB compressed) of all Stack Overflow posts and imported it into both a MySQL database and a ClickHouse database to run queries live on stage.
Loading an average view count took 5–6 seconds in MySQL versus 27.5ms in ClickHouse.
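I didn’t capture her exact queries, but the shape is roughly this (assuming a clickhouse connection registered by a Laravel ClickHouse package; the table and column names are illustrative):

```php
use Illuminate\Support\Facades\DB;

// Roughly: SELECT avg(view_count) FROM posts
// A column store reads only the view_count column's file;
// a row store has to scan every full row to compute the same answer.
$average = DB::connection('clickhouse')
    ->table('posts')
    ->avg('view_count');
```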
What’s the catch?
At least for ClickHouse, the ID field is not unique, meaning that you could have multiple rows with the same ID, and selecting a row by ID requires a full table scan (adding LIMIT 1 helps by “bailing out” once a match has been found).
Ordering: the table’s sort order should be designed around what typical queries need, to prevent extra reads from disk.
Inserts: bulk inserts are optimal, rather than single-row inserts (see the sketch after this list)
Each individual insert creates a “part” or folder on disk
The database engine will eventually merge and compact them (see the MergeTree engine)
The async_insert feature can also help
Updates: ideally, data is immutable so the engine doesn’t have to rewrite an entire file on disk
Deletes: can be optimized and automated; a marker indicates that a row has been deleted, and the engine removes those rows when it eventually compacts the files
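For the insert advice, the gist in Laravel terms is to buffer rows and write them in one statement instead of row by row (connection, table, and column names are illustrative):

```php
use Illuminate\Support\Facades\DB;

// One bulk INSERT creates a single "part" on disk; inserting these
// 1,000 rows individually would create 1,000 parts that the
// MergeTree engine would later have to merge and compact.
$rows = collect(range(1, 1000))
    ->map(fn (int $i) => [
        'event'      => 'page_view',
        'created_at' => now(),
    ])
    ->all();

DB::connection('clickhouse')->table('events')->insert($rows);
```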
Other Notes about ClickHouse
The LowCardinality column type: similar to an enum, but better; it builds a dictionary of the distinct values.
The ReplacingMergeTree engine: inserting and then updating an entry results in two entries on disk until the engine compacts the files; the engine provides a FINAL keyword that resolves the duplicates automatically during queries (see the sketch below).
ClickHouse can also easily fill gaps in time-series data, which would be more complicated with other database engines.
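As a rough sketch of those first two notes together (ClickHouse DDL sent over an assumed clickhouse connection; the names are illustrative):

```php
use Illuminate\Support\Facades\DB;

// LowCardinality dictionary-encodes the column's distinct values.
// With ReplacingMergeTree, re-inserting a row with the same sorting
// key leaves two versions on disk until a background merge runs.
DB::connection('clickhouse')->statement('
    CREATE TABLE page_views (
        path       LowCardinality(String),
        views      UInt64,
        updated_at DateTime
    )
    ENGINE = ReplacingMergeTree(updated_at)
    ORDER BY path
');

// FINAL de-duplicates at query time instead of waiting for a merge.
$rows = DB::connection('clickhouse')
    ->select('SELECT * FROM page_views FINAL');
```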
Packages
She mentioned these packages for using ClickHouse in a Laravel application:
Joe Dixon explained how Laravel Reverb uses WebSockets to broadcast data to clients. It is very performant; he said that Laravel runs just a single server handling thousands of connections for Forge and other products, including the upcoming Laravel Cloud.
Then he provided an impressive demo: he showed a Nintendo Switch-style controller that he had designed using TailwindCSS, and proceeded to fly a drone, using Laravel Reverb to relay the controls.
As if that weren’t enough, he showed how he could receive live telemetry data back from the drone (speed, altitude, temperature, and battery level) and display it on-screen. And then he turned on the camera, showing a live view of the audience!
I’ve been itching to try Reverb, and I have a couple of immediate uses for it…I just haven’t had the time yet!
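None of this is Joe’s code, but a minimal sketch of broadcasting telemetry over Reverb could look like this (the event and channel names are made up):

```php
use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

// Hypothetical event: its public properties become the payload
// that Reverb pushes to every client subscribed to the channel.
class TelemetryUpdated implements ShouldBroadcast
{
    public function __construct(
        public float $speed,
        public float $altitude,
        public int $batteryLevel,
    ) {}

    public function broadcastOn(): Channel
    {
        return new Channel('drone-telemetry');
    }
}

// Fire it from anywhere in the app:
// event(new TelemetryUpdated(4.2, 12.5, 87));
```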
Kapehe Sevilleja gave an inspiring talk tracing a timeline of her own story alongside the history of Laravel and its community, drawing contrasts between the two.
After a number of bad experiences at work, she enrolled in a coding bootcamp and later started working at Sanity.io. Her husband started using Laravel again and introduced her to it and the community.
Kapehe explained how she felt so welcomed by people in the Laravel community, and challenged us to think about how we can “build a good village” and “grow another’s flame” by creating an atmosphere of friendliness and belonging.
It is definitely worth watching when the recording is released. I think this was one of my favorite talks of the conference.
Laravel Query Builder: a package to easily sort, filter, and query Eloquent models based on request parameters (see the sketch after this list)
Laravel Login Link: a local development helper to quickly log in without using the username and password
Laravel Error Solutions: another development helper that provides suggested solutions on Laravel’s error pages
Laravel Blade Comments: yet another development helper that adds HTML comments indicating which Blade components are responsible for rendering parts of the page
Laravel PDF: a package to create PDF files in a Laravel app, using Browsershot + Chromium
Laravel Schedule Monitor: a utility to monitor scheduled commands to determine whether they succeeded or failed, when they last ran, etc.
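As a taste of the first one, Laravel Query Builder maps request parameters directly onto an Eloquent query (the model and fields are illustrative):

```php
use App\Models\Post; // hypothetical model
use Spatie\QueryBuilder\QueryBuilder;

// Handles a request like GET /posts?filter[title]=laravel&sort=-created_at
$posts = QueryBuilder::for(Post::class)
    ->allowedFilters(['title'])
    ->allowedSorts(['created_at'])
    ->paginate();
```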
Seb Armand shared some battle stories about how they have approached scaling Laravel at Square, one of the largest payment processors.
Reducing database load: eager-loading queries (see the sketch after this list), using ElastiCache, and developing Tag Tree Cache to cache multiple levels and recursively flush the relevant caches
Reducing bandwidth: using CDNs to move assets closer to end users
Reducing processing: using queues and deduplication
Further reducing processing: using batches and pipelines, and buffering/bundling tasks together
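The eager-loading point is the classic N+1 fix; a minimal sketch with a hypothetical Post model and author relationship:

```php
use App\Models\Post; // hypothetical model

// N+1: one query for the posts, then one more query per post
foreach (Post::all() as $post) {
    echo $post->author->name;
}

// Eager-loaded: two queries total, no matter how many posts there are
foreach (Post::with('author')->get() as $post) {
    echo $post->author->name;
}
```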