Migrating from Homebrew MySQL to Laravel Herd Pro Services

Laravel Herd is an amazing application designed to make it as easy as possible to get started with PHP development.

I have been using Homebrew to run MySQL, Redis, Meilisearch, and more, but since we recently got Herd Pro, I figured it made sense to consolidate.

Here are a few other articles on how to migrate databases to Herd Pro:

My method is a combination of the two: copy the data files from the Homebrew MySQL data directory to Herd Pro’s, saving the time it would take to dump and import.

Note: this only works if your Homebrew MySQL and Herd Pro MySQL are on the same minor version (e.g., 8.0.1 to 8.0.3 would work; 8.0.x to 8.4.x would not).

  1. Stop the Homebrew MySQL service, if you haven’t already: brew services stop mysql (or maybe mysql@8.0 if you’ve updated in the past few months)
  2. Stop the Herd MySQL service, if you haven’t already, using the Herd services UI
  3. Find the Herd data directory: right-click on the MySQL service and choose “Open data directory”
  4. Copy or move the existing Herd data files elsewhere to keep a backup
  5. Find the Homebrew data directory: in a terminal, run open $(brew --prefix)/var/mysql to open the directory in Finder
  6. Copy the Homebrew data files into the Herd data directory
  7. Restart Herd
  8. After you’ve confirmed everything is fine, maybe delete the Homebrew MySQL data directory and run brew uninstall mysql@8.0

Laracon 2024 Recap

I had the privilege of going to Laracon this past week and thoroughly enjoyed both the talks and hanging out with people I previously knew only online.

There are enough other articles about the announcements, so I won’t recap them too much, but I wanted to note some of my thoughts and reactions to each.

Individual Posts

Overall

Overall, I loved the chance to hang out with and meet other Laravel developers: I got to see several friends I previously knew only online and meet a bunch of new people as well.

Laracon 2024: Jess Archer: Analyzing Analytical Databases

Jess Archer taught attendees about analytical databases and how they compare to other more traditional databases.

I think this is the talk that taught me the most of the entire conference.

Definitions

  • OLTP (Online Transaction Processing): MySQL, PostgreSQL, SQLite, etc.
  • OLAP (Online Analytical Processing): SingleStore, ClickHouse, etc.

Her preference is ClickHouse; it’s free and open-source, and has excellent documentation and performance.

Comparisons

OLTP databases tend to be row-oriented: all of a row’s columns are stored together on disk, with indexes pointing to individual rows.

OLAP databases tend to be column-oriented, storing each column of data together. That makes aggregate queries like AVG() and SUM() much more performant, since the engine only has to read the relevant column file instead of scanning every row the way an OLTP database would.

She had downloaded a dump (22GB compressed) of all Stack Overflow posts and imported it into both a MySQL database and a ClickHouse database to run queries live on stage.

Loading an average view count took 5–6 seconds using MySQL, and 27.5ms using ClickHouse.
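As a rough illustration from the Laravel side, here’s the same kind of aggregate against both stores. The connection name, table, and column are assumptions for illustration; a real ClickHouse connection would come from one of the packages mentioned later.

<?php

use Illuminate\Support\Facades\DB;

// Row store: MySQL generally has to read whole rows to compute the average.
$mysqlAvg = DB::connection('mysql')->table('posts')->avg('view_count');

// Column store: ClickHouse only reads the view_count column file.
$clickhouseAvg = DB::connection('clickhouse')
    ->select('SELECT avg(view_count) AS avg_views FROM posts');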

What’s the catch?

At least for ClickHouse, the ID field is not unique, meaning you could have multiple rows with the same ID, and selecting a row by ID requires a full table scan (using LIMIT 1 can help by “bailing out” once a match has been found).
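For example, with the same hypothetical connection and table:

<?php

use Illuminate\Support\Facades\DB;

// Without LIMIT 1, ClickHouse keeps scanning even after it finds a match;
// with it, the scan can bail out early.
$post = DB::connection('clickhouse')
    ->select('SELECT * FROM posts WHERE id = 42 LIMIT 1');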

Ordering: the table’s sort order should be designed around what the typical query needs, to prevent extra reads from disk.
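For example, a hypothetical page-views table whose sort key matches its most common query:

<?php

use Illuminate\Support\Facades\DB;

// Ordering by (user_id, created_at) keeps the rows a typical
// "views for one user over time" query needs adjacent on disk.
DB::connection('clickhouse')->statement(<<<'SQL'
    CREATE TABLE page_views (
        user_id    UInt64,
        url        String,
        created_at DateTime
    )
    ENGINE = MergeTree
    ORDER BY (user_id, created_at)
SQL);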

Inserts: bulk inserts are optimal, rather than single-row inserts (see the sketch after this list)

  • Each individual insert creates a “part” or folder on disk
  • The database engine will eventually merge and compact them (see the MergeTree engine)
  • The async_insert feature can also help
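A minimal sketch of the difference, reusing the hypothetical page_views table and connection from above (whether the query builder works like this depends on the driver package you choose):

<?php

use Illuminate\Support\Facades\DB;

// One bulk insert creates a single new "part" on disk; looping over
// single-row inserts would create one part per call and leave the
// MergeTree engine far more merging work to do later.
DB::connection('clickhouse')->table('page_views')->insert([
    ['user_id' => 1, 'url' => '/home',  'created_at' => '2024-08-27 10:00:00'],
    ['user_id' => 2, 'url' => '/about', 'created_at' => '2024-08-27 10:00:01'],
    // ...thousands more rows batched into one call
]);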

Updates: ideally, data is immutable so the engine doesn’t have to rewrite an entire file on disk

Deletes: can be optimized and automated; a marker indicates that a row has been deleted, and at some point the engine will compact the files and remove those rows

Other Notes about ClickHouse

The LowCardinality column type: similar to an enum, but better; it creates a dictionary of the distinct values.

The ReplacingMergeTree engine: inserting and then updating an entry results in two entries on disk until the engine compacts the files; queries can use the FINAL keyword to resolve the duplicates automatically at read time.
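A small sketch showing both the LowCardinality type and ReplacingMergeTree with FINAL (the table, columns, and connection are hypothetical):

<?php

use Illuminate\Support\Facades\DB;

$clickhouse = DB::connection('clickhouse');

// LowCardinality(String) stores a dictionary of the distinct statuses;
// ReplacingMergeTree keeps only the latest row per sorting key once parts merge.
$clickhouse->statement(<<<'SQL'
    CREATE TABLE orders (
        id         UInt64,
        status     LowCardinality(String),
        updated_at DateTime
    )
    ENGINE = ReplacingMergeTree(updated_at)
    ORDER BY id
SQL);

// Until a merge happens, duplicates may still exist on disk;
// FINAL de-duplicates at query time.
$latest = $clickhouse->select('SELECT * FROM orders FINAL WHERE id = 42');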

ClickHouse can also easily fill gaps in time-series data, which would be more complicated with other database engines.
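In ClickHouse, that gap filling can be a single WITH FILL modifier on the ORDER BY clause; here’s a sketch against the hypothetical page_views table:

<?php

use Illuminate\Support\Facades\DB;

// WITH FILL inserts rows for the missing days, so a chart gets a point
// for every date without any application-side gap filling.
$daily = DB::connection('clickhouse')->select(<<<'SQL'
    SELECT toDate(created_at) AS day, count() AS views
    FROM page_views
    GROUP BY day
    ORDER BY day WITH FILL
SQL);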

Packages

She mentioned these packages for using ClickHouse in a Laravel application:

A week later, I’m still thinking about this talk and how we could use ClickHouse to provide better features and performance for some of our clients.

Laracon 2024: Joe Dixon: Learn to Fly with Laravel Reverb

Joe Dixon explained how Laravel Reverb works using websockets to broadcast data to clients. It is very performant; he said that Laravel has just a single server handling thousands of connections for Forge and other products, including the upcoming Laravel Cloud.
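The broadcasting side is plain Laravel events; here’s a minimal sketch with a made-up event, loosely modeled on the telemetry demo described below:

<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

// Hypothetical event: anything implementing ShouldBroadcast is pushed
// through the Reverb websocket server to every subscribed client.
class TelemetryUpdated implements ShouldBroadcast
{
    public function __construct(
        public float $speed,
        public float $altitude,
        public int $battery,
    ) {}

    public function broadcastOn(): Channel
    {
        return new Channel('drone');
    }
}

// Elsewhere in the app: broadcast(new TelemetryUpdated(12.4, 30.0, 86));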

Then he provided an impressive demo: he showed a Nintendo Switch interface that he designed using Tailwind CSS, and proceeded to fly a drone, using Laravel Reverb to control it.

As if that weren’t enough, he showed how he could receive live telemetry data back from the drone (speed, altitude, temperature, and battery level) and display it on-screen. And then he turned on the camera, showing a live view of the audience!

I’ve been itching to try Reverb, and I have a couple of immediate uses for it…I just haven’t had the time yet!

Laracon 2024: Seb Armand: Scaling Laravel at Square

Seb Armand told some battle stories of how they have approached scaling Laravel at Square, one of the largest payment processors.

  • Reducing database load: eager-loading queries, using ElastiCache, and developing a Tag Tree Cache to cache multiple levels and recursively flush the relevant caches
  • Reducing bandwidth: using CDNs to move assets closer to end users
  • Reducing processing: using queues and deduplication (see the sketch after this list)
  • Further reducing processing: using batches and pipelines, and buffering/bundling tasks together
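I don’t know exactly how they implemented it, but for the deduplication point, Laravel’s unique queued jobs are one way to get it; here’s a minimal sketch with a hypothetical job:

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

// Hypothetical job: ShouldBeUnique drops duplicate dispatches for the
// same order while one is already queued or running.
class SyncOrder implements ShouldQueue, ShouldBeUnique
{
    use Queueable;

    public function __construct(public int $orderId) {}

    public function uniqueId(): string
    {
        return (string) $this->orderId;
    }

    public function handle(): void
    {
        // ...push the order to the downstream service
    }
}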

Laracon 2024: Caleb Porzio: Livewire Keynote

Caleb’s talk began with an unfortunate 30-minute delay caused by technical difficulties. He and Aaron Francis spent a bit of time entertaining the crowd, and then Caleb took questions from the audience until the equipment was ready.

He announced Flux, the official Livewire component library.

Under the hood, it uses a lot of web components, and everything he showed looked very well-designed and thought out.

I’m slightly hesitant to jump in and start using it, because I recently spent a not-insignificant amount of time replacing another form component library that is no longer supported.

Because Caleb is charging money for this, I suspect it will remain a viable business and be supported longer than the open-source one I had used, but I still can’t bring myself to be as gung-ho about it as I’d like to be.

Laracon 2024: Taylor Otwell: Laravel Keynote

Taylor’s keynote is always the highlight of the conference, and this year was no exception.

Here’s the video; below are a few of the highlights and what I like about them.

First-Party VS Code Extension

Coming later this fall is a new VS Code Extension developed and maintained by Laravel.

It will offer autocomplete for a number of features (config and env values, app services, translations, views, Inertia props, and Eloquent), along with click-through links and preview-on-hover.

Probably my favorite feature is the Test Explorer integration: whether you’re using PHPUnit or Pest, your tests will show up in the sidebar, where you can run them and see their status; you can also see test failure details in a “peek” UI widget.

This sounds like it will replace several of the extensions I currently use, and I’m excited to have first-party support for Laravel development in VS Code.

Container Attributes

The new Container Attributes feature feels like dependency injection for config values and other application services. It doesn’t seem like a huge feature, but it will reduce some boilerplate code.
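For example, injecting a config value straight into a constructor (the service class and config key here are hypothetical):

<?php

namespace App\Services;

use Illuminate\Container\Attributes\Config;

class PaymentGateway
{
    public function __construct(
        // The container resolves config('services.stripe.secret') and
        // injects it, with no manual config() call inside the class.
        #[Config('services.stripe.secret')] protected string $apiKey,
    ) {}
}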

Chaperone Method

The just-released ->chaperone() method helps prevent n+1 queries when you need to do something like retrieve authors with their posts and then access something on the author model for each post.

I know I’ve run into this before and prevented the n+1 queries with something like this: $authors->load('posts.author');
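With chaperone(), the relationship definition takes care of it instead; a minimal sketch:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\HasMany;

class Author extends Model
{
    public function posts(): HasMany
    {
        // chaperone() hydrates the inverse `author` relation on each
        // loaded Post, so $post->author never triggers its own query.
        return $this->hasMany(Post::class)->chaperone();
    }
}

// Author::with('posts')->get() now gives you $post->author for free.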

defer() helper and Cache::flexible()

This looks very promising. It’s a simple way to push some work to the background, running it after the server has responded to the incoming request. I can immediately think of several apps where I have a super-simple job just to run something asynchronously after a request, and this can replace those.

Some potential use cases are sending analytics, notifying third-party services, or any other “fire-and-forget” interaction.
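A minimal sketch, with a made-up model and analytics endpoint:

<?php

use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Route;

use function Illuminate\Support\defer;

Route::post('/orders', function () {
    $order = App\Models\Order::create(request()->all());

    // The closure runs after the response has been sent to the client,
    // so the analytics call doesn't add to the request time.
    defer(fn () => Http::post('https://analytics.example.test/events', [
        'type'     => 'order.created',
        'order_id' => $order->id,
    ]));

    return response()->json($order, 201);
});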

This also enables a new Cache::flexible() method, providing a stale-while-revalidate mechanism to reduce the number of requests that hit a cold cache.

You provide two TTL numbers to this method:

  • The standard TTL indicating how long the cached value is valid
  • A second TTL indicating how long it’s acceptable to provide the stale value, while refreshing the value in the background so it’s ready for the next request
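A minimal sketch (the cache key, TTLs, and query are made up):

<?php

use App\Models\Order;
use Illuminate\Support\Facades\Cache;

// Fresh for 5 seconds; between 5 and 60 seconds old, the stale value is
// served immediately while the closure re-runs in the background; past
// 60 seconds it is recalculated before responding.
$revenue = Cache::flexible('reports.revenue', [5, 60], function () {
    return Order::whereDate('created_at', today())->sum('total');
});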

I’m very excited for this, as it will help improve several pain points in a couple of current applications.

Concurrency Facade

If some code needs to run multiple slow processes that don’t depend on each other, the new Concurrency facade will be helpful.

Concurrency::run([]) can be used to run multiple closures in parallel, returning their values for subsequent use; Concurrency::defer([]) can run them in parallel after the current response has been sent, using the new defer() helper.
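A minimal sketch with made-up queries:

<?php

use App\Models\Order;
use Illuminate\Support\Facades\Concurrency;

// Both closures run at the same time in separate processes; the results
// come back in the same order the closures were given.
[$count, $revenue] = Concurrency::run([
    fn () => Order::count(),
    fn () => Order::sum('total'),
]);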

Inertia 2.0

Most of my apps are Livewire, but these features look amazing and I’ll definitely be using them:

  • Async requests: currently, Inertia runs only one request at a time; this allows multiple concurrent requests
  • Polling: I’ve manually written polling in several different components, and this will simplify and standardize those
  • WhenVisible: this waits to load props until the user scrolls to the portion of the page that needs them, and then loads each prop as needed. I’ve almost written code to do this myself, but it seemed too complex for the benefit, so I found another solution instead. Having this available as a first-party solution will bring definite performance improvements.
  • Infinite scrolling: not anything I’ve needed yet, but might be useful at some point
  • Prefetching: this stands to improve performance by optimistically loading data so it’s ready as soon as a user visits another page
  • Deferred props: another strategy for improving performance by waiting to load specific props until after the initial page has been rendered

Laravel Cloud

The big announcement: Laravel Cloud is a fully-managed infrastructure platform built specifically for hosting Laravel apps as simply as possible.

Taylor demonstrated the process of adding a new application, providing a repository, and deploying, all within 25 seconds!

It provides scalability both up and down; when not in use, it can be set to hibernate to save costs.

It allows you to create deployments from different branches, enabling an easy way to preview code in different branches before going live. (This is maybe my favorite feature.)

In the talk, he mentioned PostgreSQL as a serverless database option; later in a podcast interview with Matt Stauffer, he mentioned that MySQL is in the works for launch, but wasn’t quite complete in time for the talk.

Laravel Cloud will support creating multiple worker instances (separate from web instances) for handling queues.

It will provide SSL certificates and a firewall using Cloudflare. He didn’t mention it in the talk, but in the podcast he did mention that Laravel Cloud runs on AWS, and another conference attendee said that Taylor told him it uses Kubernetes.

Costs and Features

For sandbox:

  • No monthly fee
  • Compute: less than 1¢ per hour
  • Serverless Postgres: from 4¢ per hour plus 75¢ per GB
  • laravel.cloud domains included free

For production:

  • Costs not announced
  • Auto-scaling compute
  • Larger instance sizes
  • Custom domains included

Observability and debugging: wait until Laracon Australia!

Overall, Laravel Cloud seems like it dramatically lowers the barrier to entry for new developers, side projects, or anyone who just wants to get something up and running quickly. I’m excited to see how this is going to change the ecosystem, as it makes it easier for people to focus on just building and shipping software.

Laracon 2024: Daniel Coulbourne: Verbs for Laravel

Daniel Coulbourne presented a very enlightening talk about Verbs, an event sourcing package that has been in the works for a while.

Earlier in the day, he and John Rudolph Drexler were walking around the conference with a ziploc bag containing $1,500 cash, signing up people to play their pyramid scheme game: the more people you recruited, the more votes you received.

During the talk, Daniel showed the leaderboard on screen and realized that at least two players had entered a single bonus code multiple times, gaining extra votes. He then merged a PR fixing the bug for future entries.

However, in a “normal” app that stores only the current state for each user, you would be hard-pressed to figure out how to remove the illegitimate votes while retaining the legitimate ones.

That’s the beauty of event sourcing: since each vote and bonus code was stored as an event, he was able to wipe out the database and re-run the events to determine the correct vote tallies.

(And of course, since he was doing this live, he accidentally wiped out the migrations table too and had to restore it…)

Once he re-processed the events, the leaderboard showed correct results, removing the extra votes from those users.

This was a powerful example of how event sourcing can be used to prevent, catch, or clean up after certain types of bugs.
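This isn’t Verbs’ actual API, but the underlying idea looks roughly like the sketch below (the table and column names are made up): the leaderboard is a disposable projection that can always be rebuilt from the stored events.

<?php

use Illuminate\Support\Facades\DB;

// Wipe the projection, then re-apply every stored event in order,
// this time with the fixed bonus-code rule.
DB::table('leaderboard')->truncate();

$tallies = [];
$redeemed = [];

foreach (DB::table('events')->orderBy('id')->lazy() as $event) {
    $payload = json_decode($event->payload, true);
    $player  = $payload['player_id'];

    if ($event->type === 'bonus_code_entered') {
        $key = $player.':'.$payload['code'];

        if (isset($redeemed[$key])) {
            continue; // the fixed rule: each bonus code counts once per player
        }

        $redeemed[$key] = true;
    }

    $tallies[$player] = ($tallies[$player] ?? 0) + ($payload['votes'] ?? 1);
}

foreach ($tallies as $player => $votes) {
    DB::table('leaderboard')->insert(['player_id' => $player, 'votes' => $votes]);
}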

His agency, thunk.dev, also wrote up this blog post about the game overall.

Laracon 2024: Colin DeCarlo: Laravel and AI

Colin DeCarlo presented two AI-integrated applications that Vehikl recently built.

He went into some background about how LLMs work under the hood, converting text to embeddings, and how to use embeddings in a Laravel app.

Documentation Helper

ai.vehikl.com is a documentation helper that Vehikl built for Laracon 2023, trained on Laravel, Vue, and Winter documentation.

They configured the LLM queries with a temperature of 0 and instructed the model not to make up information when it doesn’t know the answer.

Chatbot

They also recently created a chat app that provides custom tools for the AI to use (a rough sketch of the round trip follows the list):

  • The app has an integration with a weather API
  • When handling a chat request about the weather for an activity this evening, the app passes the user’s message to ChatGPT, along with a list of custom tools the app provides
  • ChatGPT interprets the request, realizes that it needs a custom tool to get the weather data, and then responds to the app with the name of the tool and the parameters to pass to it
  • The app calls the weather service, then returns that data back to ChatGPT along with a request or correlation ID (I didn’t take good notes on this part)
  • ChatGPT interprets the weather data and then writes a more natural-language chat message to send back to the user
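Here’s a rough sketch of that round trip against the OpenAI chat completions API; the tool name, weather endpoint, and config key are made up, and the real app presumably wraps this more cleanly:

<?php

use Illuminate\Support\Facades\Http;

$openai = fn (array $payload) => Http::withToken(config('services.openai.key'))
    ->post('https://api.openai.com/v1/chat/completions', $payload)
    ->json('choices.0.message');

$messages = [
    ['role' => 'user', 'content' => 'Is tonight good weather for a bike ride in Dallas?'],
];

// 1. Send the user's message plus the tools the app can run for the model.
$assistant = $openai([
    'model'    => 'gpt-4o',
    'messages' => $messages,
    'tools'    => [[
        'type'     => 'function',
        'function' => [
            'name'        => 'get_forecast',
            'description' => 'Get the evening forecast for a city',
            'parameters'  => [
                'type'       => 'object',
                'properties' => ['city' => ['type' => 'string']],
                'required'   => ['city'],
            ],
        ],
    ]],
]);

// 2. The model replies with the tool it wants and the arguments to pass.
$call = $assistant['tool_calls'][0];
$args = json_decode($call['function']['arguments'], true);

// 3. The app calls its own weather integration (fake endpoint here)...
$forecast = Http::get('https://weather.example.test/forecast', ['city' => $args['city']])->json();

// 4. ...and sends the data back, keyed to the tool call id, so the model
//    can write a natural-language reply for the user.
$reply = $openai([
    'model'    => 'gpt-4o',
    'messages' => [
        ...$messages,
        $assistant,
        ['role' => 'tool', 'tool_call_id' => $call['id'], 'content' => json_encode($forecast)],
    ],
])['content'];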

I haven’t dived into LLMs much yet, but this approach of using custom tools seems like a pretty nice way of integrating other, more programmatic services along with AI tools.

Laracon 2024: Mateus Guimaraes: Behind Laravel Octane

Mateus Guimaraes gave a deep dive into how Laravel Octane can massively improve the performance of your apps.

Main benefits:

  • Reduced latency by eliminating the framework boot step on every request
  • Increased performance
  • Lower cost by reducing CPU usage

Aaron Francis asked a follow-up question about which driver is best.

None of the apps I’m currently working on need this level of performance (yet) but I’d be interested to try Octane to see how it could improve performance even now.

One more note: Octane can run multiple processes concurrently to save time during a request:

<?php

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;
use Laravel\Octane\Facades\Octane;

Route::get('foo', function () {
    // Each query sleeps for one second, but the three closures run in
    // parallel, so this block takes roughly one second instead of three.
    Octane::concurrently([
        fn () => DB::select('SELECT SLEEP(1)'),
        fn () => DB::select('SELECT SLEEP(1)'),
        fn () => DB::select('SELECT SLEEP(1)'),
    ]);

    return ['foo' => 'bar'];
});