Software Engineering in WordPress, PHP, and Backend Development


Personal opinions and how-to’s that I’ve written both here and as contributions to other blogs.

Case Study: Building TuneLink.io For Matching Music Across Services (with AI)

The two music streaming services I often swap between are Spotify and Apple Music. I prefer Spotify for a number of reasons, the main one being that I like its music discovery algorithm more than any other service’s.

You have your own preference for your own reasons.

But every now and then, there’s that case when someone sends me a song and, if I like it, I want to add it to Spotify. Or maybe I know they use Apple Music so I just want to send them a link directly to that song so they don’t have to try to find it on their own.

I know this isn’t an actual problem – it’s a minor inconvenience at best.

As quickly as the software development industry has moved with AI over the last few months (let alone the last few years), minor inconveniences become opportunities to build programs that alleviate the frustration.

And that’s what I did with TuneLink.io. In this article, you can read about what the web app does, how I built it with the help of AI, and my opinion on using AI to build something outside of my wheelhouse.


TuneLink.io: Algorithms, APIs, Tech Stack, and AI

As the homepage describes:

TuneLink allows you to easily find the same song across different music streaming services. Simply paste a link from Spotify or Apple Music, and TuneLink will give you the matching track on the other platform.

A few things off the top:

  • It only works between Spotify and Apple Music (that is, it doesn’t include any other streaming services),
  • It’s not an iOS app so there’s no share sheet for easily sharing a URL to this site,
  • I do not pay for an Apple Developer License, so the methods I used to match music between Spotify and Apple Music are as precise as possible without privileged API access.

This is something I built to solve an inconvenience of mine, and I’m sharing it here. If it helps, great. There are also some learnings around the tech stack that I share later in the article. Further, I discuss how AI played a part in building it and I share a few thoughts on the benefits thereof.

So if you’re interested in how a backend engineer moves to using front-end services and serverless hosting, this article has you covered.

Recall, the primary inconvenience I wanted to resolve was being able to share an accurate link to a song with a friend who uses a different music service than I do.

Similarly, I want to be able to copy a URL from a message that I receive on my phone or my desktop, paste it into the input field, and then have it generate an application link to automatically open it in my preferred music service.

It does exactly that and only that, and you can give it a try if you’re interested.

All things considered (that is, the desired architecture, how I wanted it to work, and experience with a number of LLMs), it took very little time to build. I’ve not bothered sharing the site with anyone else (mainly because it’s for me). That said, there is a GitHub repository available in which you can file issues, feature requests, pull requests, and all of the usual.

But, as of late, I’ve enjoyed reading how others in this field build these types of things, so I’m doing the same. It’s lengthy so if you’re only interested in the utility itself, you have the URL; otherwise, read on.


How I Built TuneLink.io (Algorithms, APIs, and AI)

Earlier, I said that being able to build something like this – as simple as it is – is accelerated by having several key concepts and levels of experience in place.

This includes, but is not limited to:

  • Being able to clearly articulate the problem within a prompt,
  • Forcing the LLM to ask you questions to clarify understanding and knowing how to articulate a clear response to it,
  • Knowing exactly how the algorithm should work at a high-level,
  • Getting the necessary API keys from the services needed, making sure you’re properly incorporating them into local env files, and setting up gitignore properly so as not to leak information,
  • Having a plan for how you want the app to function,
  • Preparing the necessary hosting infrastructure,
  • And knowing certain underlying concepts that can help an LLM get “un-stuck” whenever you see it stating “Ah, I see the problem,” when it definitely does not, in fact, see the problem (as the kids say, iykyk).

Okay, with that laid as the foundation for how I approached this, here’s the breakdown of the algorithm, dependencies, APIs, and the tech stack used to build and deploy this.

And remember: All TuneLink is is a single-page web app that converts URLs from one music service to another and opens the track in the opposite music service.

The Algorithm

URL Analysis and Detection

The first step in the process is determining what information is available with which to work. When a user pastes a URL into TuneLink, the application needs to:

  1. Validate that the URL is properly formatted,
  2. Check the domain to identify the source platform,
  3. Extract the unique identifiers from the URL.

For example, Spotify URLs follow patterns like:

  • https://open.spotify.com/track/{track_id}
  • https://open.spotify.com/album/{album_id}/track/{track_id}

While Apple Music URLs look like:

  • https://music.apple.com/us/album/{album-name}/{album_id}?i={track_id}

The algorithm uses regular expressions to match these patterns and extract the critical identifiers. If the URL doesn’t match any known pattern, it returns an error asking for a valid music URL.
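To make that concrete, here’s a minimal TypeScript sketch of the kind of pattern matching described above. The function name, return shape, and exact expressions are illustrative assumptions on my part, not the actual TuneLink source.

// Hypothetical sketch of platform detection and track ID extraction.
type SourcePlatform = 'spotify' | 'apple-music';

interface ParsedLink {
  platform: SourcePlatform;
  trackId: string;
  albumId?: string;
}

// Spotify track URLs: https://open.spotify.com/track/{track_id}
// and https://open.spotify.com/album/{album_id}/track/{track_id}
const SPOTIFY_TRACK = /^https:\/\/open\.spotify\.com\/(?:album\/([A-Za-z0-9]+)\/)?track\/([A-Za-z0-9]+)/;

// Apple Music track URLs: https://music.apple.com/{storefront}/album/{album-name}/{album_id}?i={track_id}
const APPLE_TRACK = /^https:\/\/music\.apple\.com\/[a-z]{2}\/album\/[^\/]+\/(\d+)\?i=(\d+)/;

function parseMusicUrl(input: string): ParsedLink | null {
  let url: URL;
  try {
    url = new URL(input.trim()); // basic validation: must be a well-formed URL
  } catch {
    return null;
  }

  const spotify = url.href.match(SPOTIFY_TRACK);
  if (spotify) {
    return { platform: 'spotify', trackId: spotify[2], albumId: spotify[1] };
  }

  const apple = url.href.match(APPLE_TRACK);
  if (apple) {
    return { platform: 'apple-music', trackId: apple[2], albumId: apple[1] };
  }

  return null; // unknown pattern: the caller asks for a valid music URL
}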

Extracting Track Information

Once the program has identified the platform and extracted the IDs, it needs to gather metadata about the track:

  1. For Spotify URLs: Query the Spotify Web API using the track_id
  2. For Apple Music URLs: Query the Apple Music/iTunes API using the track_id
  3. Extract the essential information: track name, artist name, album name

Since I’m not paying for an Apple Developer License, the iTunes API was easier to access as it doesn’t require any privileged credentials.

This metadata becomes my search criteria for finding the equivalent track on the other platform. The more information I can extract, the better my chances of finding an accurate match. More specifically, there’s an interstitial API I used in conjunction with this information that I’ll discuss later in this article.
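As a rough illustration of that lookup step, here’s a sketch against the Spotify Web API track endpoint and the public iTunes lookup endpoint. Token handling is elided, and the TrackInfo shape and function names are assumptions on my part rather than the actual implementation.

interface TrackInfo {
  name: string;
  artist: string;
  album: string;
  durationMs?: number;
}

// Spotify: GET https://api.spotify.com/v1/tracks/{id} (requires an OAuth access token)
async function getSpotifyTrack(trackId: string, accessToken: string): Promise<TrackInfo> {
  const res = await fetch(`https://api.spotify.com/v1/tracks/${trackId}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Spotify lookup failed: ${res.status}`);
  const track = await res.json();
  return {
    name: track.name,
    artist: track.artists?.[0]?.name ?? '',
    album: track.album?.name ?? '',
    durationMs: track.duration_ms,
  };
}

// iTunes: GET https://itunes.apple.com/lookup?id={id} (no credentials required)
async function getAppleMusicTrack(trackId: string): Promise<TrackInfo> {
  const res = await fetch(`https://itunes.apple.com/lookup?id=${trackId}&entity=song`);
  if (!res.ok) throw new Error(`iTunes lookup failed: ${res.status}`);
  const data = await res.json();
  const song = data.results?.find((r: any) => r.kind === 'song');
  if (!song) throw new Error('Track not found in the iTunes catalog');
  return {
    name: song.trackName,
    artist: song.artistName,
    album: song.collectionName,
    durationMs: song.trackTimeMillis,
  };
}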

Cross-Platform Track Matching

Up to this point, the approach is easy enough, but this is where it gets more interesting. With the source track information now available, the program needs to find the same track on the target platform (a sketch of both search calls follows the two lists below):

For Apple Music to Spotify conversion:

  1. Extract track name and artist from Apple Music
  2. Format a search query for the Spotify API: “{track_name} artist:{artist_name}”
  3. Send the search request to Spotify’s API
  4. Analyze the results to find the best match
  5. Create the Spotify URL from the matched track’s ID

For Spotify to Apple Music conversion:

  1. Extract track name and artist from Spotify
  2. Format a search query for the iTunes Search API: “{track_name} {artist_name}”
  3. Send the search request to iTunes API
  4. Filter results to only include songs from Apple Music
  5. Create the Apple Music URL from the matched track’s information
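Here’s roughly what those two search calls can look like. The query formats mirror the steps above; the helper names and result handling are illustrative, not the actual code.

// Spotify: GET /v1/search with a "{track_name} artist:{artist_name}" query
async function searchSpotify(track: { name: string; artist: string }, accessToken: string) {
  const q = `${track.name} artist:${track.artist}`;
  const url = `https://api.spotify.com/v1/search?type=track&limit=10&q=${encodeURIComponent(q)}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
  const data = await res.json();
  return data.tracks?.items ?? [];
}

// iTunes Search API: GET /search with a plain "{track_name} {artist_name}" term
async function searchItunes(track: { name: string; artist: string }) {
  const term = `${track.name} ${track.artist}`;
  const url = `https://itunes.apple.com/search?media=music&entity=song&limit=10&term=${encodeURIComponent(term)}`;
  const res = await fetch(url);
  const data = await res.json();
  // keep only songs, mirroring the filtering step in the Spotify-to-Apple-Music direction
  return (data.results ?? []).filter((r: any) => r.kind === 'song');
}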

The matching algorithm uses several criteria to find the best result:

  • Exact matches on track name and artist (which obviously yields the highest confidence)
  • Fuzzy matching when exact matches aren’t found
  • Fallback matching using just the track name if artist matching fails
  • Duration comparison to ensure we’ve got the right version of a song

Following a fallback hierarchy like this proved to be useful, especially when there are various versions of a song on either service: something that was live, remastered during a certain year, performed live at Apple, performed live at Spotify, and so on. (A simplified scoring sketch follows below.)

Ultimately, the goal is to get the closest possible track to the one available if the identical track cannot be found. And I talk about this a little more in-depth later in the article.
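To give a feel for how that hierarchy can be expressed in code, here’s a simplified scoring sketch. The weights, thresholds, and names are invented for illustration and are not the actual implementation.

interface Candidate { name: string; artist: string; durationMs?: number; }

function normalize(s: string): string {
  // strip parentheticals like "(Remastered 2011)" and any remaining punctuation
  return s.toLowerCase().replace(/\(.*?\)|\[.*?\]/g, '').replace(/[^a-z0-9 ]/g, '').trim();
}

function scoreCandidate(source: Candidate, candidate: Candidate): number {
  let score = 0;
  if (normalize(candidate.name) === normalize(source.name)) score += 50;            // exact title match
  else if (normalize(candidate.name).includes(normalize(source.name))) score += 25; // fuzzy title match
  if (normalize(candidate.artist) === normalize(source.artist)) score += 40;         // exact artist match
  // a duration within a couple of seconds suggests the same version (studio vs. live vs. remaster)
  if (source.durationMs && candidate.durationMs && Math.abs(source.durationMs - candidate.durationMs) < 2000) {
    score += 10;
  }
  return score;
}

function pickBestMatch(source: Candidate, candidates: Candidate[]): Candidate | null {
  const ranked = candidates
    .map((c) => ({ c, score: scoreCandidate(source, c) }))
    .sort((a, b) => b.score - a.score);
  // fall back to a title-only match (score >= 25) before giving up entirely
  return ranked.length > 0 && ranked[0].score >= 25 ? ranked[0].c : null;
}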

Result Caching and Optimization

To improve performance and reduce API calls, there’s also a system that does the following:

  1. Caches successful matches for frequently requested tracks
  2. Uses a tiered approach to searching (exact match first, then increasingly fuzzy searches)
  3. Handles common variations like remixes, live versions, and remastered tracks

This makes subsequent requests for the same track conversion nearly instantaneous.
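The caching layer doesn’t need to be sophisticated for a utility like this; something along the lines of the following in-memory, time-bounded map captures the idea. The class name and TTL are placeholders, and a serverless deployment would typically lean on an external store or edge caching since instances are short-lived.

// Minimal in-memory cache keyed by something like "spotify:" + trackId.
interface CacheEntry<T> { value: T; expiresAt: number; }

class ConversionCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs = 24 * 60 * 60 * 1000) {} // default: 24 hours

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}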

The purpose here is not so much to anticipate a lot of traffic as to gain experience implementing this kind of feature in a set of tools with which I’m less familiar.

In other words, this type of functionality is something commonly deployed in other systems I’m working on but I’ve not been exposed to it in the tech stack I’ve used to build TuneLink. This is a way to see how it’s done.

Error Handling and Fallbacks

This is another area where things became more challenging: Not all tracks exist on both platforms, so the algorithm needs to handle these cases gracefully.

As such, this is how the algorithm works:

  1. If no match is found, try searching with just the track name.
  2. If still no match, try searching with normalized track and artist names (removing special characters).
  3. If no match can be found, return a clear error message.
  4. Provide alternative track suggestions when possible.

The cases where I saw this the most were when dealing with live tracks, remastered tracks, or platform-specific tracks (like Spotify Sessions).

The Full Algorithm

If you want to explain the full algorithm, using all of the details albeit at a high level, it goes like this:

  1. Take the input URL from user
  2. Validate and parse URL to identify source platform
  3. Extract track ID and query source platform’s API for metadata
  4. Use metadata to search the target platform’s API
  5. Apply matching logic to find the best corresponding track
  6. Generate the target platform URL from the match results
  7. Return the matching URL to the user

After trying this out over several iterations, it became obvious that using only the Spotify and iTunes APIs was going to be insufficient. I needed a way to make sure the fallback mechanism would work consistently.

And that’s where a third-party API, MusicBrainz, helps to do the heavy lifting.

Matching Tracks with MusicBrainz

MusicBrainz is “an open music encyclopedia” that collects music metadata and makes it available to the public. In other words, it’s a Wikipedia for music information.

What makes it particularly valuable for TuneLink is:

  1. It maintains unique identifiers (MBIDs) for tracks, albums, and artists
  2. It provides rich metadata including alternate titles and release information
  3. It’s platform-agnostic, so it doesn’t favor either Spotify or Apple Music (or other platforms, for that matter).
  4. It has excellent coverage of both mainstream and independent music

It’s been really cool to see how the industry uses various pieces of metadata to identify songs and how we can leverage that when writing programs like this.

Integrating MusicBrainz in TuneLink

As far as the site’s architecture is concerned, think of MusicBrainz as an intermediary layer between Spotify and Apple Music. When using MusicBrainz, the program works like this:

  1. Extract track information from source platform (Spotify or Apple Music)
  2. Query MusicBrainz API with this information to find the canonical track entry
  3. Once we have the MusicBrainz ID, we can use it to search more accurately on the target platform

Using this service is what significantly improved matching between the two services because it provides more information than just the track name and the artist.
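For reference, here’s roughly what a MusicBrainz recording search looks like against its WS/2 endpoint. MusicBrainz asks clients to send an identifying User-Agent; the one below is a placeholder, and the response handling is simplified for illustration.

// MusicBrainz asks clients to identify themselves via a meaningful User-Agent.
const MB_USER_AGENT = 'ExampleApp/1.0 (contact@example.com)'; // placeholder

async function searchMusicBrainz(trackName: string, artistName: string) {
  // Lucene-style query against the recording index
  const query = `recording:"${trackName}" AND artist:"${artistName}"`;
  const url = `https://musicbrainz.org/ws/2/recording?query=${encodeURIComponent(query)}&fmt=json&limit=5`;
  const res = await fetch(url, { headers: { 'User-Agent': MB_USER_AGENT } });
  if (!res.ok) throw new Error(`MusicBrainz search failed: ${res.status}`);
  const data = await res.json();
  // Each recording carries an MBID plus artist credits and length, which feed the match step.
  return (data.recordings ?? []).map((r: any) => ({
    mbid: r.id,
    title: r.title,
    artist: r['artist-credit']?.[0]?.name ?? '',
    lengthMs: r.length,
  }));
}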

Edge Cases

MusicBrainz is particularly valuable for addressing challenging matching scenarios:

  • Multiple versions of the same song. MusicBrainz helps distinguish between album versions, radio edits, extended mixes, etc.
  • Compilation appearances. When a track appears on multiple albums, MusicBrainz helps identify the canonical version
  • Artist name variations. MusicBrainz maintains relationships between different artist names (e.g., solo work vs. band appearances)
  • International releases. MusicBrainz tracks regional variations of the same content

Even still, when there isn’t a one-to-one match, it’s almost always a sure bet to fall back to the studio-recorded version of a track.

Fallbacks

To handle the case of when there isn’t a one-to-one match, this is the approach taken when looking to match tracks:

  1. First attempt. Direct MusicBrainz lookup using ISRCs (International Standard Recording Codes) when available
  2. Second attempt. MusicBrainz search using track and artist name
  3. Fallback. Direct API search on target platform if MusicBrainz doesn’t yield results

I talked about error handling and fallbacks earlier in the article; incorporating this additional layer made the results that much more robust.
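Putting the three attempts together, the control flow reads roughly like the sketch below. The search helpers are passed in as parameters to keep the sketch self-contained, and the whole thing is an illustration of the ordering rather than the actual code.

// Hypothetical orchestration of the three attempts described above.
async function findOnTargetPlatform(
  source: { name: string; artist: string; isrc?: string },
  searchMusicBrainz: (name: string, artist: string) => Promise<Array<{ title: string; artist: string }>>,
  searchTarget: (name: string, artist: string) => Promise<string | null>
): Promise<string | null> {
  // 1. Direct MusicBrainz lookup by ISRC when the source platform exposes one.
  if (source.isrc) {
    const res = await fetch(
      `https://musicbrainz.org/ws/2/isrc/${source.isrc}?fmt=json&inc=artist-credits`,
      { headers: { 'User-Agent': 'ExampleApp/1.0 (contact@example.com)' } } // placeholder UA
    );
    if (res.ok) {
      const data = await res.json();
      const recording = data.recordings?.[0];
      if (recording) {
        return searchTarget(recording.title, recording['artist-credit']?.[0]?.name ?? source.artist);
      }
    }
  }

  // 2. MusicBrainz search using track and artist name.
  const matches = await searchMusicBrainz(source.name, source.artist);
  if (matches.length > 0) {
    return searchTarget(matches[0].title, matches[0].artist);
  }

  // 3. Fall back to a direct search against the target platform's own API.
  return searchTarget(source.name, source.artist);
}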

API Optimization

To keep TuneLink responsive, I implemented several optimizations for MusicBrainz API usage:

  • Caching. I cache MusicBrainz responses to reduce redundant API calls.
  • Rate Limiting. I carefully manage the query rate to respect MusicBrainz’s usage policies (a small sketch of this follows the list).
  • Batch Processing. Where possible, I group queries to minimize API calls.
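As an example of the rate-limiting piece, here’s a tiny promise queue that spaces calls out to roughly one per second, in line with MusicBrainz’s published guidance for a single client. The interval and class name are placeholders rather than the actual implementation.

// Serializes calls and enforces a minimum gap between them.
class RateLimiter {
  private last = 0;
  private queue: Promise<unknown> = Promise.resolve();

  constructor(private minIntervalMs = 1100) {} // ~1 request/second with a little headroom

  schedule<T>(task: () => Promise<T>): Promise<T> {
    const run = this.queue.then(async () => {
      const wait = this.last + this.minIntervalMs - Date.now();
      if (wait > 0) await new Promise((r) => setTimeout(r, wait));
      this.last = Date.now();
      return task();
    });
    // keep the chain alive even if a task rejects
    this.queue = run.catch(() => undefined);
    return run;
  }
}

// usage: const mbLimiter = new RateLimiter();
// const results = await mbLimiter.schedule(() => searchMusicBrainz(name, artist));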

Using MusicBrainz as the matching engine creates a more robust and accurate system than would be possible with direct platform-to-platform searches alone.

This approach has been key to delivering reliable results, especially for more obscure tracks or those with complex release histories.

The Tech Stack

The primary goal of the TuneLink site was to have a single-page, responsive web application that I could quickly load on my phone or my desktop and that made deployments trivially easy (and free, if possible).

Frontend Technology

TuneLink is built on a modern JavaScript stack:

  • Next.js 15. The React framework that provides server-side rendering, API routes, and optimized builds
  • React 19. For building the user interface components
  • TypeScript. For type safety and improved developer experience
  • Tailwind CSS. For styling the application using utility classes
  • Zod. For runtime validation of data schemas and type safety

This combination gave the performance benefits of server-side rendering while maintaining the dynamic user experience of a single-page application.
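As a small example of where Zod fits, a schema like the following can reject malformed input before any external API is called. The field name and messages are illustrative, not the actual TuneLink schema.

import { z } from 'zod';

// Shape of the conversion request body; the field name is illustrative.
const ConvertRequestSchema = z.object({
  url: z
    .string()
    .url()
    .refine(
      (value) => /open\.spotify\.com|music\.apple\.com/.test(value),
      { message: 'Please provide a Spotify or Apple Music link.' }
    ),
});

type ConvertRequest = z.infer<typeof ConvertRequestSchema>;

// usage inside a route handler:
// const parsed = ConvertRequestSchema.safeParse(await request.json());
// if (!parsed.success) return Response.json({ error: parsed.error.flatten() }, { status: 400 });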

Backend Services

The backend of TuneLink leverages several APIs and services:

  • Next.js API Routes. Serverless functions that handle the conversion requests (a minimal route handler sketch follows this list)
  • MusicBrainz API. The primary engine for canonical music metadata and track matching
  • Spotify Web API. For accessing Spotify’s track database and metadata
  • iTunes/Apple Music API. For searching and retrieving Apple Music track information
  • Music Matcher Service. A custom service I built to orchestrate the matching logic between platforms. Specifically, this is the service that communicates back and forth between the music streaming services and MusicBrainz.
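To tie these together, a Next.js App Router route handler for this kind of conversion can be as small as the following sketch. The convertUrl helper is a stand-in for the matching pipeline described earlier, and the file location is hypothetical.

// app/api/convert/route.ts (hypothetical location; a sketch, not the actual TuneLink source)
import { NextResponse } from 'next/server';

// Placeholder for the matching pipeline (parse -> metadata -> MusicBrainz -> target search).
async function convertUrl(url: string): Promise<{ targetUrl: string } | null> {
  // ... orchestration sketched in the earlier sections ...
  return null;
}

export async function POST(request: Request) {
  const body = await request.json().catch(() => null);
  if (!body?.url || typeof body.url !== 'string') {
    return NextResponse.json({ error: 'A music URL is required.' }, { status: 400 });
  }

  const result = await convertUrl(body.url);
  if (!result) {
    return NextResponse.json({ error: 'No matching track was found.' }, { status: 404 });
  }

  return NextResponse.json(result);
}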

Testing and QA

To ensure reliability, TuneLink includes:

  • Jest. For unit and integration testing
  • Testing Library. For component testing
  • Mock Service Worker. For simulating API responses during testing (a brief example follows this list)
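As a flavor of how Mock Service Worker fits in, a Jest test can stub the iTunes endpoint without touching the network. This uses MSW v2 syntax; the module path and the helper under test are hypothetical.

import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

// Hypothetical module path for the lookup helper sketched earlier in the article.
import { getAppleMusicTrack } from '../lib/itunes';

const server = setupServer(
  http.get('https://itunes.apple.com/lookup', () =>
    HttpResponse.json({
      resultCount: 1,
      results: [
        { kind: 'song', trackName: 'Example Song', artistName: 'Example Artist', collectionName: 'Example Album' },
      ],
    })
  )
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

test('extracts track metadata from an Apple Music track ID', async () => {
  const track = await getAppleMusicTrack('1234567890');
  expect(track.name).toBe('Example Song');
  expect(track.artist).toBe('Example Artist');
});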

Hosting and Infrastructure

TuneLink is hosted on a fully serverless stack:

  • Vercel. For hosting the Next.js application and API routes
  • Edge Caching. To improve response times for frequently requested conversions
  • Serverless Functions. For handling the conversion logic without maintaining servers

This serverless approach means TuneLink can scale automatically based on demand without requiring manual infrastructure management. Of course, an application of this size has little-to-no demand – this was more about becoming familiar with Vercel, deployments, and their services.

And for those of you who have historically read this blog because of the content on WordPress, but are interested in or appreciate the convenience of Vercel, I highly recommend you take a look at Ymir by Carl Alexander. It’s serverless hosting but tailored to WordPress.

Development Environment

For local development, I use:

  • ESLint/TypeScript. For code quality and type checking
  • npm. For package management
  • Next.js Development Server. With hot module reloading for quick iteration

Why This Stack Over Others?

I chose this technology stack for several reasons:

  1. Performance. Next.js provides excellent performance out of the box
  2. Developer Experience. TypeScript and modern tooling improve code quality
  3. Scalability. The serverless architecture handles traffic spikes efficiently
  4. Maintainability. Strong typing and testing make the codebase more maintainable
  5. Cost-Effectiveness. Serverless hosting means we only pay for what we use

This combination of technologies allows TuneLink to deliver a fast, reliable service while keeping the codebase clean and maintainable. The serverless architecture also means zero infrastructure management, letting me focus on improving the core matching algorithm instead of worrying about servers.

Conclusion

The whole vibe coding movement is something to – what would you say? – behold, if nothing else, and there’s plenty of discussions happening around how all of this technology is going to affect the job economy across the board.

This is not the post nor am I the person to talk about that.

In no particular order, these are the things that I’ve found to be most useful when working with AI and building programs (between work and side projects, there are other things I can – and may – discuss in future articles):

  • I know each developer seems to have their favorite LLM, but Sonnet 3.7 has been and continues to be my weapon of choice. It’s worked well across standard backend work with PHP, has done well assisting in programs with Python, and obviously with what you see above.
  • The more explicit and almost demanding you can be with the LLM, the better. Don’t let it assume or attempt anything without explicit approval and sign-off.
  • Having a deeper understanding of computer science, software development, and engineering concepts is helpful primarily because it helps to avoid common problems that you may encounter when building for the web.
  • Thinking through algorithms, data structures, rate limits, scaling, and so on is helpful when prompting the LLM to generate certain features.
  • There are times when an attempt to one-shot a solution is fine, and there are times when building a feature up incrementally is better. I find that intuition helps drive this depending on the context in which you’re working, the program you’re trying to write, and the level of experience you have with the stack with which you’re working.
  • Generating tests for the features you’re working on and/or refining should not be an afterthought. In my experience, even if an LLM generates subpar code, it does a good job writing tests that match your requirements which can, in turn, help to refine the quality of the feature in question.
  • Regardless of whether I’m working with a set of technologies with which I’m familiar or something on which I’m cutting my teeth, making sure that I’m integrating tests against the new features has been incredibly helpful on more than one occasion for ensuring the feature does what it’s supposed to do (and it helps to catch edge cases and “what about if the user does this?”). As convenient as LLMs are getting, they aren’t going to act like rogue humans. I think there’s a case to be made they often don’t act like highly skilled humans, either. But they’re extremely helpful.

This isn’t a comprehensive list and I think the development community, as a whole, is doing a good job of sharing all of their learnings, their opinions, their hot takes, and all of that jazz.

I’ve no interest in making any type of statement that can be any type of take nor offering any quip that would fall under “thought leadership.” At this point, I’m primarily interested and concerned with how AIs can assist us and how we can interface with them in a way that forces them to work with us so we, in turn, are more efficient.

Ultimately, my goal is to share how I’ve used AI in an attempt to build something that I wanted and give a case study for exactly how it went. I could write much more about the overall approach and experience but there are other projects I’ve worked on and I am working on that lend themselves to this. Depending on how this is received, maybe I’ll write more.

If you’ve made it this far, I hope it’s been helpful and that it’s helped to cut through a lot of the commentary on AI and given a practical look at how AI was used in a project. If I ever revisit TuneLink in a substantial way, I’ll be sure to publish more about it.

Move Fast but Understand Things

In Beware of the Makefile Effect, the author defines the phrase as such:

Tools of a certain complexity or routine unfamiliarity are not run de novo, but are instead copy-pasted and tweaked from previous known-good examples.

If you read the article, you’ll see that there are a number of examples given as to what is meant by the phrase.

Originally, makefiles were files used for C (or C++) build tools to help assemble a program. This is not unlike:

Just as developers have long been susceptible to the ‘Makefile Effect’ when it comes to configuration files, the rise of generative AI tools brings a new risk of compounding a lack of understanding. Like copy-pasting Makefiles, using AI-generated code without fully following how it works can lead to unintended consequences.

Though it absolutely helps us move faster in building The Thing™️, it’s worth noting: Many of these configuration files are the result of taking a working version and copying and pasting them into our project, tweaking a few things until it works, and then deploying it.

As it currently stands, we may not be copying and pasting pre-existing files, but generative AI may be close enough (if not a step further): It produces what we need and, if it doesn’t work, we can just tell it to keep tweaking the script based on whatever error is returned until we have something that works.

It’s obviously not limited to configuration files, either. Let’s include functions, classes, libraries, or full programs.

Again, the advantage this gives us now versus just a few years ago is great but failure to understand what’s being produced has compounding effects.

To that end, prompt the LLM to explain what each line or block or function is actually doing and then consider adding comments in your own words to explain it. This way, future you, or someone else, will have that much more context available (versus needing to feed the code back into an LLM for explanation) whenever the code is revisited.

Perhaps this will help to resist the Makefile Effect as well as a lack of understanding of whatever code is being produced and ultimately maintained.

Strategies for Locally Developing Google Cloud Functions

For the last few months, I’ve been doing a lot of work with Google Cloud Functions (along with a set of their other tools such as Cloud Storage, PubSub, and Cloud Jobs).

The ability to build systems on top of Google’s Cloud Platform (or GCP) is great. Though the service has a small learning curve in terms of getting familiar with how to use it, the UI looks exactly like what you’d expect from a team of developers responsible for creating such a product.

An Aside on UIs

Remember how UIs used to look in the late 90s and early 00s? The joke was something like “How this application would look when designed by a programmer.”

UX Planet has a comic that captures this:

If developers were responsible for UIs.

I can’t help but think of this whenever I am working in the Google Cloud Platform: Extremely powerful utilities with a UI that was clearly designed by the same types of people who would use it.

All that aside, the documentation is pretty good – using Gemini to work with it is better – and they offer a CLI which makes dealing with the various systems much easier.

With all of that commentary aside, there are a few things I’ve found to be useful in each project I’m involved in that utilizes features of GCP.

Specifically, if you’re working with Google’s Cloud Platform and are using PHP (I favor PHP 8.2 but to each their own, I guess), here are some things that I use in each project to make sure I can focus on solving the problem at hand without navigating too much overhead in setting up a project.


Locally Developing Google Cloud Functions

Prerequisites

  • The gcloud CLI. This is the command-line tool provided by Google for interacting with Google Cloud Platform. The difference between this and the rest of the packages is that this is a utility to connect your system to Google’s infrastructure. The rest of the packages I’m listing are PHP libraries.
  • vlucas/phpdotenv. I use this package to maintain a local copy of environmental variables in a .env file. This is used to work as a local substitute for anything I store in Google Secrets Manager.
  • google/cloud-functions-framework. This is the Google-maintained library for interacting with Cloud Functions. It’s what gives us the ability to work with Google Cloud-based functions locally while also deploying code to our Google Cloud project.
  • google/cloud-storage. Not every project will serialize data to Google Cloud Storage, but this package is what allows us to read and write data to Google Cloud Storage buckets. It allows us to write to buckets from our local machines just as if it were a cloud function.
  • google/cloud-pubsub. This is the library I use to publish and subscribe to messages when writing to Google’s messaging system. It’s ideal for queuing up messages and then processing them asynchronously.

Organization

Though we’re free to organize code however we like, I’ve developed enough GCP-based solutions that I have a specific way I like to organize my project directories so there’s parity between what my team and I will see whenever we log in to GCP.

It’s simple: The top level directory is named the same as the Google Cloud Project. Each subdirectory represents a single Google Cloud Function.

So if I have a cloud project called tm-cloud-functions and there are three functions contained in the project, then the structure may look something like this:

tm-cloud-functions/
├── process-user-info/
├── verify-certificate-expiration/
└── export-site-data/

This makes it easy to know which project I’m working in, and it makes it easy to work directly on a single Cloud Function by navigating to that subdirectory.

Further, those subdirectories are self-contained such that they maintain their own composer.json configuration, vendor directories, .env files for local environmental variables, and other function-specific dependencies, files, and code.

So the final structure of the directory looks something like this:

tm-cloud-functions/
├── process-user-info/
│   ├── src/
│   ├── vendor/
│   ├── index.php
│   ├── composer.json
│   ├── composer.lock
│   ├── .env
│   └── ...
├── verify-certificate-expiration/
│   ├── src/
│   ├── vendor/
│   ├── index.php
│   ├── composer.json
│   ├── composer.lock
│   ├── .env
│   └── ...
└── export-site-data/
    ├── src/
    ├── vendor/
    ├── index.php
    ├── composer.json
    ├── composer.lock
    ├── .env
    └── ...

Testing

Assuming the system has been authenticated with Google via the CLI application, testing the function is easy.

First, make sure you’re authenticated with the same Google account that has access to GCP:

$ gcloud auth login

Then set the project ID equal to what’s in the GCP project:

$ gcloud config set project [PROJECT-ID]

Once done, verify the following is part of the composer.json file:

"scripts": {
  "functions": "FUNCTION_TARGET=[main-function] php vendor/google/cloud-functions-framework/router.php",
  "deploy": [
    "..."
  ]
},

Specifically, for the scripts section of the composer.json file, add the functions command that will invoke the Google Cloud Functions library. This will then, in turn, allow you to run your local code as if you were writing it in the Google Cloud UI. And if there are errors, notices, warnings, etc., they’ll appear in the console.

To run your function locally, run the following command:

$ composer functions

Further, if you’ve got Xdebug installed, you can even step through your code. (And if you’re using Herd and Visual Studio Code, I’ve a guide for that.)

Deployment

Next, in composer.json, add the following to the deploy section as referenced above:

"deploy": [
  "gcloud functions deploy [function-name] --project=[project-id] --region=us-central1 --runtime=php82 --trigger-http --source=. --entry-point=[main-function]"
]

Make sure the following values are set:

  • function-name is the name of the Google Cloud Function set up in the GCP UI.
  • project-id is the same ID referenced earlier in the article.
  • main-function is whatever the entry point is for your Google Cloud Function. Oftentimes, Google’s boilerplate generates helloHttp or something similar. I prefer to use main.

Then, when you’ve tested your function and are ready to deploy it to GCP, you can run the following command:

$ composer deploy

This will take your code and all necessary assets, bundle it, and send it to GCP. This function can then be accessed based on however you’ve configured it (for example, using authenticated HTTP access).

Note: Much like .gitignore, if you’re looking to deploy code to Google Cloud Functions and want to prevent deploying certain files, you can use a .gcloudignore file.

Conclusion

Ultimately, there’s still provisioning that’s required on the web to set up certain aspects of a project. But once the general infrastructure is in place, it’s easy to start running everything locally from testing to deployment.

And, as demonstrated, it’s not limited to functions but also to working with Google Cloud Storage, PubSub, Secrets Manager, and other features.


Finally, props to Christoff and Ivelina for also providing some guidance on setting up some of this.

Review and Highlights of 2024

I usually don’t write a full “year in review” type of post, but I do sometimes highlight various milestones, goals, and/or notable things that have happened in the last year. And this year, I’ve both the desire and time to write about exactly that.

When drafting the last post, I re-read some of the posts I’d published in the past. While it’s fun to see how things evolve over the years, it also provides a guide for how to write these kinds of posts even when I feel out of the habit.

So here’s a summary of the highlights from this year.


Highlights of 2024

Most Popular Posts

Books

For the past couple of years, I’ve been trying to read two books simultaneously – one fiction and one non-fiction. I don’t participate in book clubs, I don’t try to accomplish a certain number of books per month (or year or whatever other unit of time), and I don’t always try to grab whatever the most recent best seller is.

Instead, I try to read the things that I want and that seem relevant, interesting, and/or helpful. I read a total of 20 books this year (10 fiction, 10 non-fiction).

Here are the things I enjoyed the most:

Omission from this list doesn’t mean that I didn’t like it or that it wasn’t something educational. I tried to limit this list to one book from each category but I couldn’t do it so I arbitrarily decided to include two from each instead.

Fitness

Over the years, I’ve tried to make exercise a consistent part of my day-to-day. On the whole, I’ve been good about it even though the type of fitness I do each year tends to change.

Some years, I’ve done nothing but run. Other years, I’ve incorporated some type of guided program. And there are other times where I’ve mixed it up between the two.

This year was kind of like the latter: I was running at least two-to-three 5Ks a week and lifting weights every other day. Unfortunately, I pinched a nerve in my back in September and that brought everything to a grinding halt.

I started walking every day once again in November but that’s about the extent of what I’m doing. My goal is to get back to both cardio and basic weight lifting in January, but we’ll see.

Lastly, if you workout and have an Apple Watch or an iPhone, I recommend Gentler Streak. It’s far and away my favorite fitness app primarily because it aims to keep you moving and in a healthy state without having you just blindly try to close your rings.

Music, TV, and Podcasts

My favorite music from 2024 include the following albums:

  • Moment of Truth by the Red Clay Strays (and their Live At The Ryman album is absolutely worth it, too). If there was a way to capture 50s rock and roll with 70s southern rock and timeless blues lyrics, this is the band.
  • Deeper Well by Kacey Musgraves. I’ve been a fan of hers for a long time. Golden Hour is still my favorite by her and I haven’t really been a fan of anything since, but Deeper Well is a bit of a return to form.
  • Rebel Diamonds by The Killers. This is more of a greatest hits collection but if you’ve never listened to the band or are looking to hear how their sound has changed over the years, it’s a good listen.
  • I started listening to Wild Rivers this year and am a fan of what I’ve heard so far. I can’t recommend any single album since most of their songs came up in a recommended playlist.

Most of the shows I watch during the year are whenever I’m on the treadmill or it’s the period between when the kids are done for the day and Meghan and I are still up.

  • Only Murders in the Building. I thoroughly enjoy Steve Martin and Martin Short’s comedy in this show (and Selena Gomez holds her own with them while also balancing them out). We’ve not watched the most recent season yet, but very much enjoy this show so far.
  • From. It’s hard to succinctly describe this show. If you’re into sci-fi horror, then read up on the premise on Wikipedia. It’s a shame how much time passes between seasons, but that seems to be the norm in the age of streaming. I wish this show was available on a platform with a wider reach.
  • Shrinking. I didn’t start watching this until October but am glad I did so much so that I watched it once through on my own then and immediately watched it through again with Meghan. If you’re a fan of Scrubs, you’ll likely love this show.

I was going to do a Music, Movies, and TV section but I can count the number of movies I watched this year on one hand so I’m mixing it up and adding the podcasts I enjoyed the most this year.

This is not an exhaustive list nor is my sharing this saying I’ve listened to every single episode (unless I mention it, obviously). But they are the ones that kept me coming back a few times a month.

To 2025

Since the majority of what I write here on a daily, weekly, monthly basis primarily has to do with my day-to-day, I try to cover anything outside of that in posts like this.

And these are the highlights for 2024. Like most, I have things that I’m planning to do in 2025 though I’ll wait until this time next year to share how everything went.

If the last couple of years have shown me anything, it’s that this stage of life – while great – has all kinds of ways of making it difficult to make concrete plans. So beyond the high-level goals of reading, working out, listening to music, and writing, there’s not much more to add.

Whatever it is you’ve planned for 2025, here’s to it all going well. And if not, here’s to having the fortitude to push through.

Merry Christmas and Happy Holidays 2024

Over the years, I’ve usually written some type of end-of-the-year post centered around Christmas that also talks about what’s happening and what happened.

And the closest I came to doing something like this last year was an article about The Most Useful (Or Popular) Articles from 2023.

For the first set, it’s fun to look back at how things have changed, and for the latter, it’s neat to look back to see what caught attention over the last year.

These posts are the closest I get to the ‘end of the year’ type of posts and I’d like to eventually get one done for 2024 even if I don’t complete it before the start of the year.

For today, though, it’s a short post to say Merry Christmas and Happy Holidays.


Merry Christmas 2024

Whether or not you’re celebrating Christmas, Hanukkah, Boxing Day, something else, or nothing at all, may the week (or weekend) be good to you.

As for my family and me, we’re celebrating Christmas and spending time with extended family over the next few days.

It’s my favorite time of year and, as cliché as it may sound, I dig spending it with those who are near-and-dear. And I think everyone should be so lucky.

With that, here’s to the end of the year and the beginning of the next.
