Software Engineering in WordPress, PHP, and Backend Development


Case Study: Building TuneLink.io For Matching Music Across Services (with AI)

The two music streaming services I often swap between are Spotify and Apple Music. I prefer Spotify for a number of reasons, the main one being that I like its music discovery algorithm more than any other service's.

You have your own preferences for your own reasons.

But every now and then, there’s that case when someone sends me a song and, if I like it, I want to add it to Spotify. Or maybe I know they use Apple Music so I just want to send them a link directly to that song so they don’t have to try to find it on their own.

I know this isn’t an actual problem – it’s a minor inconvenience at best.

As the software development industry has moved so quickly with AI over the last few months (let alone the last few years), minor inconveniences become opportunities to build programs that alleviate them.

And that’s what I did with TuneLink.io. In this article, you can read what the web app does, how I built it with the help of AI, and my opinion on using AI to build something outside of my wheelhouse.


TuneLink.io: Algorithms, APIs, Tech Stack, and AI

As the homepage describes:

TuneLink allows you to easily find the same song across different music streaming services. Simply paste a link from Spotify or Apple Music, and TuneLink will give you the matching track on the other platform.

A few things off the top:

  • It only works between Spotify and Apple Music (that is, it doesn’t include any other streaming services),
  • It’s not an iOS app so there’s no share sheet for easily sharing a URL to this site,
  • I do not pay for an Apple Developer License, so the methods I used to match music between Spotify and Apple Music are as precise as possible without privileged API access.

This is something I built to solve an inconvenience for me that I’m sharing here. And if it helps, great. There are also some learnings around the tech stack that I share later in the article, too. Further, I discuss how AI played a part in building it and I share a few thoughts on the benefits thereof.

So if you’re interested in how a backend engineer moves to using front-end services and serverless hosting, this article has you covered.

Recall, the primary inconvenience I wanted to resolve was being able to share an accurate link to a song to a friend who’s using a different music service than I do.

Similarly, I want to be able to copy a URL from a message that I receive on my phone or my desktop, paste it into the input field, and then have it generate an application link to automatically open it in my preferred music service.

It does exactly that and only that, and you can give it a try, if you’re interested.

All things considered (that is, the desired architecture, how I wanted it to work, and experience with a number of LLMs), it took very little time to build. I’ve not bothered sharing the site with anyone else (mainly because it’s for me). That said, there is a GitHub repository available in which you can file issues, feature requests, pull requests, and all of the usual.

But, as of late, I’ve enjoyed reading how others in this field build these types of things, so I’m doing the same. It’s lengthy so if you’re only interested in the utility itself, you have the URL; otherwise, read on.


How I Built TuneLink.io (Algorithms, APIs, and AI)

Earlier, I said that building something like this – as simple as it is – is accelerated by having several key concepts and levels of experience in place.

This includes, but is not limited to:

  • Being able to clearly articulate the problem within a prompt,
  • Forcing the LLM to ask you questions to clarify understanding and knowing how to articulate a clear response to it,
  • Knowing exactly how the algorithm should work at a high-level,
  • Getting the necessary API keys from the services involved, making sure you’re properly incorporating them into local env files, and setting up .gitignore properly so as not to leak sensitive information,
  • Having a plan for how you want the app to function,
  • Preparing the necessary hosting infrastructure,
  • And knowing certain underlying concepts that can help an LLM get “un-stuck” whenever you see it stating “Ah, I see the problem,” when it definitely does not, in fact, see the problem (as the kids say, iykyk).

Okay, with that laid as the foundation for how I approached this, here’s the breakdown of the algorithm, dependencies, APIs, and the tech stack used to build and deploy this.

And remember: All TuneLink is is a single-page web app that converts URLs from one music service to another and opens the track in the opposite music service.

The Algorithm

URL Analysis and Detection

The first step in the process is determining what information is available with which to work. When a user pastes a URL into TuneLink, the application needs to:

  1. Validate that the URL is properly formatted,
  2. Check the domain to identify the source platform,
  3. Extract the unique identifiers from the URL.

For example, Spotify URLs follow patterns like:

  • https://open.spotify.com/track/{track_id}
  • https://open.spotify.com/album/{album_id}/track/{track_id}

While Apple Music URLs look like:

  • https://music.apple.com/us/album/{album-name}/{album_id}?i={track_id}

The algorithm uses regular expressions to match these patterns and extract the critical identifiers. If the URL doesn’t match any known pattern, it returns an error asking for a valid music URL.
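To make this concrete, here's a minimal sketch of the kind of detection logic involved. The helper names and exact patterns are mine for illustration, not TuneLink's actual source:

```typescript
// Illustrative sketch only – helper names and patterns are mine, not TuneLink's source.
type SourcePlatform = "spotify" | "appleMusic";

interface ParsedLink {
  platform: SourcePlatform;
  trackId: string;
}

// https://open.spotify.com/track/{track_id}, optionally prefixed with /album/{album_id}
const SPOTIFY_TRACK =
  /^https:\/\/open\.spotify\.com\/(?:album\/[A-Za-z0-9]+\/)?track\/([A-Za-z0-9]+)/;

// https://music.apple.com/{storefront}/album/{album-name}/{album_id}?i={track_id}
const APPLE_TRACK =
  /^https:\/\/music\.apple\.com\/[a-z]{2}\/album\/[^/]+\/\d+\?i=(\d+)/;

function parseMusicUrl(raw: string): ParsedLink | null {
  const spotify = raw.match(SPOTIFY_TRACK);
  if (spotify) return { platform: "spotify", trackId: spotify[1] };

  const apple = raw.match(APPLE_TRACK);
  if (apple) return { platform: "appleMusic", trackId: apple[1] };

  return null; // unknown pattern – the caller surfaces the "valid music URL" error
}
```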

Extracting Track Information

Once the program has identified the platform and extracted the IDs, it needs to gather metadata about the track:

  1. For Spotify URLs: Query the Spotify Web API using the track_id
  2. For Apple Music URLs: Query the Apple Music/iTunes API using the track_id
  3. Extract the essential information: track name, artist name, album name

Since I’m not using an Apple Developer License, the iTunes API was easier to access as it doesn’t require any privileged credentials.

This metadata becomes my search criteria for finding the equivalent track on the other platform. The more information I can extract, the better my chances of finding an accurate match. More specifically, there’s an interstitial API I used in conjunction with this information that I’ll discuss later in this article.
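As a sketch, the endpoints in play look roughly like this. The iTunes lookup endpoint is publicly accessible; the Spotify endpoint requires an OAuth bearer token on the request. The helper names are mine:

```typescript
// Illustrative URL builders – the endpoint shapes follow the public APIs;
// the function names are mine, not TuneLink's.

// Spotify track metadata: requires an Authorization: Bearer <token> header
function spotifyTrackUrl(trackId: string): string {
  return `https://api.spotify.com/v1/tracks/${trackId}`;
}

// iTunes lookup: no credentials required, returns trackName, artistName, collectionName
function itunesLookupUrl(trackId: string): string {
  return `https://itunes.apple.com/lookup?id=${trackId}`;
}
```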

Cross-Platform Track Matching

Up to this point, the approach is easy enough. But this is where it gets more interesting. With the source track information now available, the program needs to find the same track on the target platform:

For Apple Music to Spotify conversion:

  1. Extract track name and artist from Apple Music
  2. Format a search query for the Spotify API: “{track_name} artist:{artist_name}”
  3. Send the search request to Spotify’s API
  4. Analyze the results to find the best match
  5. Create the Spotify URL from the matched track’s ID

For Spotify to Apple Music conversion:

  1. Extract track name and artist from Spotify
  2. Format a search query for the iTunes Search API: “{track_name} {artist_name}”
  3. Send the search request to iTunes API
  4. Filter results to only include songs from Apple Music
  5. Create the Apple Music URL from the matched track’s information
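The two query formats above can be sketched as small URL builders (the helper names are mine; the endpoint shapes follow the public Spotify and iTunes Search APIs):

```typescript
// Spotify's search API supports field filters such as artist:
function spotifySearchUrl(trackName: string, artistName: string): string {
  const q = `${trackName} artist:${artistName}`;
  return `https://api.spotify.com/v1/search?type=track&q=${encodeURIComponent(q)}`;
}

// The iTunes Search API takes a free-text term plus media/entity filters
function itunesSearchUrl(trackName: string, artistName: string): string {
  const term = `${trackName} ${artistName}`;
  return `https://itunes.apple.com/search?media=music&entity=song&term=${encodeURIComponent(term)}`;
}
```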

The matching algorithm uses several criteria to find the best result:

  • Exact matches on track name and artist (which obviously yields the highest confidence)
  • Fuzzy matching when exact matches aren’t found
  • Fallback matching using just the track name if artist matching fails
  • Duration comparison to ensure we’ve got the right version of a song

Following a fallback hierarchy like this proved to be useful, especially when there are various versions of a song on either service. This may include a live version, a remaster from a certain year, a session performed at Apple, a session performed at Spotify, etc.

Ultimately, the goal is to get the closest possible track to the one available if the identical track cannot be found. And I talk about this a little more in-depth later in the article.
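A simple way to express criteria like these is a scoring function. This is an illustrative sketch of the idea, not TuneLink's actual matcher; the weights are arbitrary:

```typescript
// Illustrative scoring sketch – names and weights are mine.
interface Candidate {
  name: string;
  artist: string;
  durationMs: number;
}

function normalize(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

function scoreMatch(target: Candidate, candidate: Candidate): number {
  let score = 0;
  // Exact name match is worth the most; a fuzzy (substring) match less
  if (normalize(candidate.name) === normalize(target.name)) score += 50;
  else if (normalize(candidate.name).includes(normalize(target.name))) score += 25;
  if (normalize(candidate.artist) === normalize(target.artist)) score += 40;
  // Durations within two seconds suggest the same recording/version
  if (Math.abs(candidate.durationMs - target.durationMs) <= 2000) score += 10;
  return score;
}

function bestMatch(target: Candidate, candidates: Candidate[]): Candidate | null {
  let best: Candidate | null = null;
  let bestScore = 0;
  for (const c of candidates) {
    const s = scoreMatch(target, c);
    if (s > bestScore) {
      best = c;
      bestScore = s;
    }
  }
  return best;
}
```

A live or remastered variant still scores on artist and (partially) on name, so it can win when nothing closer exists – which mirrors the fallback behavior described above.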

Result Caching and Optimization

To improve performance and reduce API calls, there’s also a system that does the following:

  1. Caches successful matches for frequently requested tracks
  2. Uses a tiered approach to searching (exact match first, then increasingly fuzzy searches)
  3. Handles common variations like remixes, live versions, and remastered tracks

This makes subsequent requests for the same track conversion nearly instantaneous.
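As a sketch, assuming a simple in-memory store (a deployed serverless app might well use an external cache instead), the caching layer could look like this:

```typescript
// Minimal in-memory cache sketch with a TTL – illustrative, not TuneLink's code.
class ConversionCache {
  private entries = new Map<string, { url: string; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  // `now` is injectable to keep the class testable without real clocks
  get(sourceUrl: string, now = Date.now()): string | null {
    const hit = this.entries.get(sourceUrl);
    if (!hit || hit.expiresAt < now) return null;
    return hit.url;
  }

  set(sourceUrl: string, targetUrl: string, now = Date.now()): void {
    this.entries.set(sourceUrl, { url: targetUrl, expiresAt: now + this.ttlMs });
  }
}
```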

The purpose here is not so much anticipating a lot of traffic but to simply gain experience in implementing a feature in a set of tools with which I’m less familiar.

In other words, this type of functionality is something commonly deployed in other systems I’m working on but I’ve not been exposed to it in the tech stack I’ve used to build TuneLink. This is a way to see how it’s done.

Error Handling and Fallbacks

This is another area where things became more challenging: Not all tracks exist on both platforms, so the algorithm needs to handle these cases gracefully.

As such, this is how the algorithm works:

  1. If no match is found, try searching with just the track name.
  2. If still no match, try searching with normalized track and artist names (removing special characters).
  3. If no match can be found, return a clear error message.
  4. Provide alternative track suggestions when possible.

The cases in which I saw this the most were live tracks, remastered tracks, and platform-specific tracks (like Spotify Sessions).
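The fallback ladder above can be sketched as a small function that tries each search tier in order (names are illustrative; the search itself is injected):

```typescript
// Illustrative fallback ladder – names are mine, not TuneLink's.
type Search = (track: string, artist?: string) => string | null;

// Tier 3's normalization: drop special characters, collapse whitespace
function stripSpecials(s: string): string {
  return s.replace(/[^\w\s]/g, "").replace(/\s+/g, " ").trim();
}

function findWithFallbacks(track: string, artist: string, search: Search): string | null {
  return (
    search(track, artist) ??                              // 1. full query
    search(track) ??                                      // 2. track name only
    search(stripSpecials(track), stripSpecials(artist))   // 3. normalized names
  );
}
```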

The Full Algorithm

If you want to explain the algorithm end-to-end, albeit still at a high level, it goes like this:

  1. Take the input URL from user
  2. Validate and parse URL to identify source platform
  3. Extract track ID and query source platform’s API for metadata
  4. Use metadata to search the target platform’s API
  5. Apply matching logic to find the best corresponding track
  6. Generate the target platform URL from the match results
  7. Return the matching URL to the user
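The steps above can be sketched as a single orchestration function with the platform-specific pieces injected (the types and names are mine for illustration; a real implementation would be asynchronous):

```typescript
// Illustrative orchestration – types and names are mine, not TuneLink's.
interface TrackInfo {
  name: string;
  artist: string;
}

interface Platform {
  parse(url: string): string | null;        // steps 1-2: validate URL, extract track ID
  fetchInfo(id: string): TrackInfo | null;  // step 3: query source platform's API
  search(info: TrackInfo): string | null;   // steps 4-6: search target, build URL
}

function convert(url: string, source: Platform, target: Platform): string | null {
  const id = source.parse(url);
  if (id === null) return null;       // not a recognized music URL

  const info = source.fetchInfo(id);
  if (info === null) return null;     // track metadata unavailable

  return target.search(info);         // step 7: matching URL (or null)
}
```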

After trying this out over several iterations, it became obvious that using only the Spotify and iTunes APIs was going to be insufficient. I needed a way to make sure the fallback mechanism would work consistently.

And that’s where a third-party API, MusicBrainz, helps to do the heavy lifting.

Matching Tracks with MusicBrainz

MusicBrainz is “an open music encyclopedia” that collects music metadata and makes it available to the public. In other words, it’s a Wikipedia for music information.

What makes it particularly valuable for TuneLink is:

  1. It maintains unique identifiers (MBIDs) for tracks, albums, and artists
  2. It provides rich metadata including alternate titles and release information
  3. It’s platform-agnostic, so it doesn’t favor Spotify, Apple Music, or any other platform
  4. It has excellent coverage of both mainstream and independent music

It’s been really cool to see how the industry uses various pieces of metadata to identify songs and how we can leverage that when writing programs like this.

Integrating MusicBrainz in TuneLink

As far as the site’s architecture is concerned, think of MusicBrainz as an intermediary layer between Spotify and Apple Music. When using MusicBrainz, the program works like this:

  1. Extract track information from source platform (Spotify or Apple Music)
  2. Query MusicBrainz API with this information to find the canonical track entry
  3. Once we have the MusicBrainz ID, we can use it to search more accurately on the target platform

Using this service is what significantly improved matching between the two services because it provides more information than just the track name and the artist.
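As a sketch, a MusicBrainz recording search is a single GET against its web service using Lucene-style query syntax (the endpoint shape is MusicBrainz's; the helper name is mine):

```typescript
// Builds a MusicBrainz recording-search URL – helper name is mine.
// MusicBrainz's /ws/2/recording endpoint accepts a Lucene query and fmt=json.
function musicBrainzRecordingUrl(track: string, artist: string): string {
  const query = `recording:"${track}" AND artist:"${artist}"`;
  return `https://musicbrainz.org/ws/2/recording?fmt=json&query=${encodeURIComponent(query)}`;
}
```

The JSON response includes recording MBIDs plus release and artist-credit details, which is exactly the extra signal that improves the search on the target platform.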

Edge Cases

MusicBrainz is particularly valuable for addressing challenging matching scenarios:

  • Multiple versions of the same song. MusicBrainz helps distinguish between album versions, radio edits, extended mixes, etc.
  • Compilation appearances. When a track appears on multiple albums, MusicBrainz helps identify the canonical version
  • Artist name variations. MusicBrainz maintains relationships between different artist names (e.g., solo work vs. band appearances)
  • International releases. MusicBrainz tracks regional variations of the same content

Even still, when there isn’t a one-to-one match, it’s almost always a sure bet to fall back to the studio-recorded version of a track.

Fallbacks

To handle the case of when there isn’t a one-to-one match, this is the approach taken when looking to match tracks:

  1. First attempt. Direct MusicBrainz lookup using ISRCs (International Standard Recording Codes) when available
  2. Second attempt. MusicBrainz search using track and artist name
  3. Fallback. Direct API search on target platform if MusicBrainz doesn’t yield results
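As a sketch, MusicBrainz exposes a direct ISRC lookup endpoint, and the tiers themselves can be run by a tiny helper (the helper names are mine for illustration):

```typescript
// MusicBrainz supports looking a recording up directly by ISRC – helper name is mine.
function musicBrainzIsrcUrl(isrc: string): string {
  return `https://musicbrainz.org/ws/2/isrc/${encodeURIComponent(isrc)}?fmt=json`;
}

// Runs each lookup tier in order and returns the first non-null result.
type Lookup = () => string | null;

function firstHit(tiers: Lookup[]): string | null {
  for (const tier of tiers) {
    const result = tier();
    if (result !== null) return result;
  }
  return null;
}
```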

I talked about Error Handling and Fallbacks earlier in the article; incorporating this additional layer made the results that much more robust.

API Optimization

To keep TuneLink responsive, I implemented several optimizations for MusicBrainz API usage:

  • Caching. I cache MusicBrainz responses to reduce redundant API calls.
  • Rate Limiting. I carefully manage the query rate to respect MusicBrainz’s usage policies.
  • Batch Processing. Where possible, I group queries to minimize API calls.
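MusicBrainz asks anonymous clients to stay at roughly one request per second. A minimal sketch of a limiter that computes the required wait (not TuneLink's actual implementation) might look like this:

```typescript
// Minimal rate-limiter sketch – illustrative, not TuneLink's code.
// Callers ask how long to wait before the next request may be issued.
class RateLimiter {
  private nextAllowed = 0;

  constructor(private intervalMs: number) {}

  // Given the current time, returns the delay (ms) the caller should sleep,
  // and reserves the next slot.
  reserve(now: number): number {
    const wait = Math.max(0, this.nextAllowed - now);
    this.nextAllowed = Math.max(now, this.nextAllowed) + this.intervalMs;
    return wait;
  }
}
```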

Using MusicBrainz as the matching engine creates a more robust and accurate system than would be possible with direct platform-to-platform searches alone.

This approach has been key to delivering reliable results, especially for more obscure tracks or those with complex release histories.

The Tech Stack

The primary goal of the TuneLink site was to have a single page, responsive web application that I could quickly load on my phone or my desktop and that made deployments trivially easy (and free, if possible).

Frontend Technology

TuneLink is built on a modern JavaScript stack:

  • Next.js 15. The React framework that provides server-side rendering, API routes, and optimized builds
  • React 19. For building the user interface components
  • TypeScript. For type safety and improved developer experience
  • Tailwind CSS. For styling the application using utility classes
  • Zod. For runtime validation of data schemas and type safety

This combination gave the performance benefits of server-side rendering while maintaining the dynamic user experience of a single-page application.

Backend Services

The backend of TuneLink leverages several APIs and services:

  • Next.js API Routes. Serverless functions that handle the conversion requests
  • MusicBrainz API. The primary engine for canonical music metadata and track matching
  • Spotify Web API. For accessing Spotify’s track database and metadata
  • iTunes/Apple Music API. For searching and retrieving Apple Music track information
  • Music Matcher Service. A custom service I built to orchestrate the matching logic between platforms. Specifically, this is the service that communicates back and forth from the music streaming services and MusicBrainz.

Testing and QA

To ensure reliability, TuneLink includes:

  • Jest. For unit and integration testing
  • Testing Library. For component testing
  • Mock Service Worker. For simulating API responses during testing

Hosting and Infrastructure

TuneLink is hosted on a fully serverless stack:

  • Vercel. For hosting the Next.js application and API routes
  • Edge Caching. To improve response times for frequently requested conversions
  • Serverless Functions. For handling the conversion logic without maintaining servers

This serverless approach means TuneLink can scale automatically based on demand without requiring manual infrastructure management. Of course, an application of this size has little-to-no demand – this was more of a move to becoming more familiar with Vercel, deployments, and their services.

And for those of you who have historically read this blog because of the content on WordPress, but are interested in or appreciate the convenience of Vercel, I highly recommend you take a look at Ymir by Carl Alexander. It’s serverless hosting but tailored to WordPress.

Development Environment

For local development, I use:

  • ESLint/TypeScript. For code quality and type checking
  • npm. For package management
  • Next.js Development Server. With hot module reloading for quick iteration

Why This Stack Over Others?

I chose this technology stack for several reasons:

  1. Performance. Next.js provides excellent performance out of the box
  2. Developer Experience. TypeScript and modern tooling improve code quality
  3. Scalability. The serverless architecture handles traffic spikes efficiently
  4. Maintainability. Strong typing and testing make the codebase more maintainable
  5. Cost-Effectiveness. Serverless hosting means we only pay for what we use

This combination of technologies allows TuneLink to deliver a fast, reliable service while keeping the codebase clean and maintainable. The serverless architecture also means zero infrastructure management, letting me focus on improving the core matching algorithm instead of worrying about servers.

Conclusion

The whole vibe coding movement is something to – what would you say? – behold, if nothing else, and there’s plenty of discussions happening around how all of this technology is going to affect the job economy across the board.

This is not the post nor am I the person to talk about that.

In no particular order, these are the things that I’ve found to be most useful when working with AI and building programs (between work and side projects, there are other things I can – and may – discuss in future articles):

  • I know each developer seems to have their favorite LLM, but Sonnet 3.7 has been and continues to be my weapon of choice. It’s worked well across standard backend tools with PHP, has done well assisting in programs with Python, and obviously with what you see above.
  • The more explicit – and almost demanding – you can be with the LLM, the better. Don’t let it assume anything or attempt anything without explicit approval and sign-off.
  • Having a deeper understanding of computer science, software development, and engineering concepts is helpful primarily because it helps you avoid common problems that you may encounter when building for the web.
  • Thinking through algorithms, data structures, rate limits, scaling, and so on is helpful when prompting the LLM to generate certain features.
  • There are times when attempting to one-shot a solution is fine, and there are times when iterating on a feature is better. I find that intuition helps drive this depending on the context in which you’re working, the program you’re trying to write, and the level of experience you have with the stack with which you’re working.
  • Remembering to generate tests for the features you’re working on and/or refining should not be an afterthought. In my experience, even if an LLM generates subpar code, it does a good job writing tests that match your requirements, which can, in turn, help refine the quality of the feature in question.
  • Regardless of whether I’m working with a set of technologies with which I’m familiar or something on which I’m cutting my teeth, making sure that I’m integrating tests against the new features has been incredibly helpful on more than one occasion for ensuring the feature does what it’s supposed to do (and it helps to catch edge cases and “what about if the user does this?“). As convenient as LLMs are getting, they aren’t going to be acting like rogue humans. I think there’s a case to be made they often don’t act like highly skilled humans, either. But they’re extremely helpful.

This isn’t a comprehensive list, and I think the development community, as a whole, is doing a good job of sharing all of their learnings, their opinions, their hot takes, and all of that jazz.

I’ve no interest in making any type of statement that can be any type of take nor offering any quip that would fall under “thought leadership.” At this point, I’m primarily interested and concerned with how AIs can assist us and how we can interface with them in a way that forces them to work with us so we, in turn, are more efficient.

Ultimately, my goal is to share how I’ve used AI in an attempt to build something that I wanted and give a case study for exactly how it went. I could write much more about the overall approach and experience but there are other projects I’ve worked on and I am working on that lend themselves to this. Depending on how this is received, maybe I’ll write more.

If you’ve made it this far, I hope it’s been helpful, that it’s helped cut through a lot of the commentary on AI, and that it’s given a practical look at how AI was used in a project. If I ever revisit TuneLink in a substantial way, I’ll be sure to publish more about it.

Move Fast but Understand Things

In Beware of the Makefile Effect, the author defines the phrase as such:

Tools of a certain complexity or routine unfamiliarity are not run de novo, but are instead copy-pasted and tweaked from previous known-good examples.

If you read the article, you’ll see that there are a number of examples given as to what is meant by the phrase.

Originally, makefiles were files used by C (or C++) build tools to help assemble a program. This is not unlike:

Just as developers have long been susceptible to the ‘Makefile Effect’ when it comes to configuration files, the rise of generative AI tools brings a new risk of compounding a lack of understanding. Like copy-pasting Makefiles, using AI-generated code without fully following how it works can lead to unintended consequences.

Though it absolutely helps us move faster in building The Thing™️, it’s worth noting: Many of these configuration files are the result of taking a working version and copying and pasting them into our project, tweaking a few things until it works, and then deploying it.

As it currently stands, we may not be copying and pasting pre-existing files, but generative AI may be close enough (if not a step further): It produces what we need and, if it doesn’t work, we can just tell it to keep tweaking the script based on whatever error is returned until we have something that works.

It’s obviously not limited to configuration files, either. Let’s include functions, classes, libraries, or full programs.

Again, the advantage this gives us now versus just a few years ago is great but failure to understand what’s being produced has compounding effects.

To that end, prompt the LLM to explain what each line or block or function is actually doing and then consider adding comments in your own words to explain it. This way, future you, or someone else, will have that much more context available (versus needing to feed the code back into an LLM for explanation) whenever the code is revisited.

Perhaps this will help resist the Makefile Effect as well as a lack of understanding of whatever code is being produced and ultimately maintained.

Maybe ChatGPT Didn’t Wreck Our Type of Content

To say that 2024 has been a year would be an understatement. Though I’m talking about things that have happened offline, the same can be said for the WordPress economy at large, too.

On a regrettable level, the degree at which I’ve written has decreased more this year than likely any other year since I’ve been writing. Some of this can be attributed to stage of life, some can be attributed to work, and some of this can be attributed to the rise of AI in our industry.

AI taking a bite out of WordPress (or something like that).

Over a year ago, I wrote that ChatGPT Wrecked Our Type of Content in which I claim:

Though the goals of this question are not mutually exclusive, I think getting an answer fast often outweighs the “I’m looking for an answer but it was neat to also read about someone else’s situation while searching for it.” And this is why ChatGPT has “wrecked” some of the content a bunch of us typically write.

But, as stated, it’s been over a year since this was written. And since I work in R&D in my current role, we’ve done – and continue to do – a lot of work with the various systems, applications, utilities, and so on.

Given that, I – like many of you – have recalibrated my perspective on how this changes the work we do.


ChatGPT Didn’t Wreck Our Type of Content

Improved Productivity

First, it’s undeniable that when used properly, AI assistants can vastly improve productivity. I run both Copilot and Cody in my editor as I’m consistently evaluating which one performs best for a given use case. At the time of this writing, I’m partial to Cody though I also know Copilot is going to support multiple LLMs in the coming months (or weeks?).

So, sure, AI assistants have changed the way we work in our day-to-day but, as the months have passed, I’m no longer convinced they’ve “wrecked” our type of content so much as “drastically altered” how we explain – for lack of a better word – our content.

One of which is more neutral than the other.

Large Context Windows but Lacking Context

Secondly, for as much as I typically work with ChatGPT, Gemini, and/or Claude (is there a clever acronym for all of these, yet?) on a daily basis, I find myself continuing to enjoy well-written content either in newsletters (see The WP Minute, The Repository, or Within WordPress) or blogs (see Brian Coords, what Mike is doing over with Ollie, and so on). Though I’m but one person, each of these properties or people continue to publish even though LLMs are available for any of us to use.

And that brings me to the final point: There are reasons AI hasn’t completely wrecked the type of content I – and others – have often published:

  • AI hallucinates. Recommendations provided by a given LLM are presented with an authoritative tone regardless of whether they reference hooks, function names, or language features that don’t even exist.
  • Lack of context. LLMs do not have the context as to how a given developer arrived at a solution and why one was chosen over another. Sure, you can ask for a variety of solutions and tradeoffs but there are times in which it’s still faster to read from someone who’s had the same experience, shared it, and provided contextual information as to how and why they arrived at a solution.
  • Aggressive Autocomplete. I’m a fan of using coding assistants within my IDE. As I said, the level of productivity and speed of solving problems has definitely increased, but that doesn’t mean its attempts to autocomplete a piece of functionality are always helpful. It still takes a critical eye to review what’s being proposed and determine whether or not it’s worth integrating.

There are likely more, and your experience likely varies from mine – though I suspect not by much.

The Why Behind the How

The reason I share all of this is because one of the fundamental things that is missed when working solely with AI is the value that human beings bring to the table when sharing the why behind the how.

This is not me taking a position on whether or not AI will, can, should, or whatever other argument is the current hot topic, replace humans. Instead, it’s me saying that although I appreciate the value AI has brought to our industry and I recognize it alters the need for certain types of content, I no longer think it completely negates or replaces the type of content I – and others – used to write.

Sure, our approach may need to be tweaked but there’s still plenty of ways to regularly share what we’re working on, how to solve a certain problem, and why one solution was chosen versus another.

Finishing 2024, Into 2025

Given that 2024 is coming to a close in the coming weeks and that we seem to have accepted the role AI plays in the day-to-day work of software development, perhaps I can start writing somewhat regularly once again.

There’s no shortage of things I’ve built, learned, saved, and archived. And while others have continued to publish their stuff, I’ve missed doing the same. Perhaps the coming weeks – and coming year – are a time in which those of us who so frequently wrote about development can find our way back to doing exactly that.

Maybe with a few alterations, though.

Software Developers and Technical Articles in the Era of AI

Site analytics are funny things regardless of how you use them (that is, through marketing, engagement, content, and so on). I say this because analytics give us information about:

  • how long people are reading our content (per article, even),
  • how many people are reading what we write,
  • how much of what we write people are reading,
  • how often people are returning to read what we write,

And all of this coalesces into informing the things about which we write and how we write about it. At least, this is my experience.

The day to day experience of understanding analytics through the use of two mice.

Despite having all of this analytical information available, AI is changing the type of content we publish.

For technical writers specifically, this should give us pause to consider whether there’s a slightly wider range of related topics about which we can write that continue to contribute to the field in which we work.


Technical Articles in the Era of AI

As far as analytics, SEO, AI, and all of the other related technologies to blogging are concerned, I still hold to the mantra I’ve had for over a decade:

Write what you want on a given topic and don’t overthink it.

This has proven the most useful and has transcended whatever changes have happened within the industry.

I primarily write because I enjoy the process, but it’s afforded opportunities that wouldn’t otherwise be available (not the least of which is developing solid friendships with people I’ve met at conferences or online).

Even still, I – like anyone else who’s maintained a site for a reasonable amount of time – still pay attention to some level of information analytics provide.

On Analytics

I rarely do a legitimate deep dive on the analytics of my site. Generally, I like to see:

  • the number of visitors over time,
  • how much time they are spending on the site or on each article,
  • and the bounce rate.

I’ve developed this habit in part because I’ve been writing technical content for so long it’s of some interest to see if people are [still] paying attention to it. But, as the advent of AI (see this post) has hit the mainstream of this industry, there’s been a change in how we all look up technical content.

And where analytics may have been useful for a very long time, there’s now another dimension to the field.

On AI

Given blogs and technical articles have helped train the LLMs we’ve so quickly adopted, it raises questions.

Is it useful to continue writing technical articles?

  • If the content we wrote helped train the LLMs, do new articles continue to add to the data set the LLMs are using?
  • If more and more developers – myself included – are going to an LLM to help solve problems first (versus a search engine), how much less valuable are other sites and blogs becoming?

I still think there is value in treating a blog like a public notebook of sorts even if it’s just for personal reasons.

If not, then what else is there for technical writers to publish?

  • I don’t think there’s any shortage of content engineers have to write because so much of our job is more than just development, architecture, and so on.
  • The amount of things tangentially related or even adjacent to our work provides plenty of content that’s useful for other people to read (take articles like this or this, for example).
  • As the technical aspect of our jobs may be enhanced – or substitute whatever word you’d like – by AI, there is still a human factor.

Clearly stated: As long as a human experience exists as something unique, it has the potential to be an article that cannot be wholly generated through the statistical probability of words assembled by generative AI. (Though I’d be foolish to deny that it can be a challenge to discern the difference between what a person has written and what has been generated.)

Writing technical articles does not have to mean publishing into the void.

Though it may be easier to refer to ChatGPT for a technical question rather than a blog, that doesn’t mean a developer has nothing about which to write in relation to the field. For example, just as I can write about my day-to-day working from home as a father of three and trying to maintain a schedule for reading, writing, exercising, music, work, and continued growth in what I do for a living, so can anyone else. (Or so should everyone else?)

Software Developers Should Expand Topics

Ultimately, the way our work is altered through the advent of AI is undeniable. And though it may mean there are some changes in how we get our work done, it also informs how we can continue to contribute content related to what we do in our day-to-day. (This is something I used to do way back when, too.)

In other words, looking up how to properly sanitize data before it enters a database is something the current – and the next – generation will ask an LLM. But looking up how to be productive as a remote engineer living in rural Georgia in a family of five, three of whom are kids, is something AI cannot answer.
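To make that first kind of question concrete: here’s a minimal sketch of the sort of answer an LLM would hand back for “how do I sanitize data before it enters a database?” It uses plain PHP with PDO and an in-memory SQLite database so it runs standalone; in WordPress itself you’d reach for `$wpdb->prepare()` instead, but the parameter-binding idea is the same.

```php
<?php
// Sketch: bind untrusted input as a query parameter instead of
// concatenating it into SQL. Uses an in-memory SQLite database so the
// example is self-contained (no WordPress required).
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)');

// A hostile-looking input string.
$title = "Robert'); DROP TABLE posts;--";

// The placeholder keeps the input as inert data, never executable SQL.
$stmt = $db->prepare('INSERT INTO posts (title) VALUES (:title)');
$stmt->execute([':title' => $title]);

$row = $db->query('SELECT title FROM posts')->fetch(PDO::FETCH_ASSOC);
echo $row['title']; // the malicious string is stored verbatim, table intact
```

Exactly the kind of answer that used to live in a blog post and now comes back from a prompt.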

And perhaps that’s an area in which we could easily – and should – expand our content.

Writing About WordPress in the Age of AI

Periodically, I review the content I’ve written over the last decade or so and am surprised at some of the things I wrote about in the past (like My Day-to-Day). I also find it interesting that I stopped doing so. Then again, I likely exhausted that particular topic. At least for that time.

Specifically, I’m surprised that I used to write about such things despite the topics not really being relevant to what I consider my core content.

Personally, a lot has happened in the last, say, roughly five years alone – between changing jobs, growing the family, moving, pursuing additional hobbies, and more – and one of the things that’s taken a back seat is writing. Then again, isn’t that how it goes?

We have a finite number of hours to spend, and as that time gets allocated to other things, something gets squeezed out. And that’s what has happened with writing.

This gentleman fears the amount of time that’s passed. The perpetual ticking that surrounds him isn’t helping.

For a while, I felt guilty about it. Partially because writing daily was something I enjoyed doing and did habitually. Partially because it had become such a habit that when I didn’t do it, I felt as if I was dropping the ball on something.

And though there’s truth in some of that (such as I miss writing every day), that doesn’t mean I’d trade out some of the things I’m doing that occupy that time now. Some of it’s related to my day-to-day work, some of it’s related to my family, and some of it’s related to other hobbies.


Over the holidays (and as I’m writing this, I realize I didn’t write a short Christmas post this year, which is likely the first time I can remember not doing that), I had time to think about a lot of different things, some of which included both how I want to spend my time and how I currently spend it.

Though I’m not one for setting resolutions, I am one for setting goals. And as I was planning different goals for myself over the coming year, I couldn’t help but reflect a bit on this site.

Apparently, this is how Meta imagines me doing exactly what I just described. I’m drinking something out of a pepper container.

Sure, the goals would be fun to share (and maybe I will in a future post – I always enjoy seeing what other people are planning!), but I found myself thinking a little bit about software development, WordPress, the WordPress economy, where things have been, and where things are headed.

But writing about WordPress in the age of AI, especially as a developer, presents its own set of challenges. Of all types of people, though, shouldn’t we be here to meet it?

And with the rise of popularity in AI, the more-or-less standardization of the Block Editor, and the upcoming changes to the administration area UI, there’s a lot that can be discussed and there will likely be a lot about which to write (either via commentary or tutorials on how to achieve something).

When thinking through that, though, I found myself remembering all of the things about which I used to write that weren’t always dedicated to programming but were still dedicated to what I, as a remote developer working in software development in WordPress, was doing.

Why did I stop doing that? And what’s to stop me from doing that again?

I used to write differently about things. How did I get here?

Though I do think tools such as ChatGPT have disrupted some of the content I (and others) have historically written, that’s by no means a call to inaction – or a call to stop writing. It’s just a call to adjust and keep moving forward.


Though I don’t know if I’ll ever write daily again, I do think there’s plenty I can share that extends beyond:

  1. Here’s what you may want to do in WordPress using PHP or JavaScript
  2. Here’s how you can do it.

So at the end of 2024, we’ll see how I’ve done. Here’s to a greater variety of content all the while still keeping the focus on the type of content about which I’ve historically written.

© 2025 Tom McFarlin
