Tom McFarlin

Software Engineering in WordPress, PHP, and Backend Development

Use WP Bulk Plugin Updater to Automate WordPress Plugin Updates

For the last year or so, I’ve spent the majority of my time working on technologies adjacent to what I’ve historically done (case in point), but that doesn’t mean there aren’t times when I’m still working directly with WordPress.

Recently, I’ve needed to manage a set of servers and sites that needed a round of updates. One of the most common sets of updates to perform was updating plugins with known CVEs. The process by which I like to do this is:

  1. Verify the site has a repository set up to manage the code (and set it up, if not).
  2. Also verify the repository is wired up to a CI/CD tool so branches, tags, and so on can be deployed across staging, beta, or production environments (and set it up, if not).

Then, the workflow I usually use is, at a high level, like this:

  1. Create a develop branch off of master
  2. For each plugin update, create a new update/ branch (named something like update/plugin-slug)
  3. Perform the actual update
  4. Then commit the update on that branch to the repository.

Regardless of how many plugins you’re dealing with, this strategy allows you to update plugins independently of one another, perform a revert when necessary, cherry pick certain commits to set up a custom release, and so on.

Ultimately, it allows you to set up tags or, more generally, revisions for how you want to release your site without having to wait for every single plugin or dependency to have a problem resolved.

And it maintains good Git workflow practices.

That said, it’s tedious work. It’s boring enough if it’s, say, a dozen plugins but what if you’re maintaining a site or headless application that has twice – or even four times – that many?

You automate it. That’s why I wrote WP Bulk Plugin Updater.


WP Bulk Plugin Updater: Automate Updates

I describe WP Bulk Plugin Updater as:

A PHP tool that automates WordPress plugin updates with individual Git commits, push capabilities, and detailed logging for easy site maintenance.

The general idea behind the program is simple:

  • Find the list of all plugins that need to be updated,
  • Create a branch for each plugin that’s updated,
  • Commit each branch to the repository.

There are some caveats, though:

  • WP-CLI. This must be installed. The program will verify that it is, in fact, available and will terminate early if not.
  • Git. This should be obvious, but the program is meant to be run in the root of the WordPress project’s installation (it can run in wp-content as well, but I initially designed it to drop into the application’s root directory).
  • PHP 7.4. This is the minimum version of PHP required (though you’re always going to hear me pushing for 8+).

Assuming all of those are in place, then you’ve got everything you need for the program to do its thing.

The program will also generate two output files: one tracks the plugins that were successfully updated; the other tracks what needs to be updated manually, which is usually the case with premium plugins.

Finally, if you’re interested in running the program but not actually taking any action until you’re sure it’s going to do what you want, then you can use any of the available options:

  • --dry-run: Show what would be updated without making changes
  • --no-push: Update plugins and commit but don’t push to GitHub
  • --branch=BRANCH: Set the branch to push to (default: current branch)
  • --help: Display help information

All of this is available in the README in the repository.

Future State

The program isn’t officially tagged as I’ve been using master for the work I’ve been doing as of late. If it continues to work fine, then I’ll include a CHANGELOG, a proper ISSUE_TEMPLATE, and tag an official version.

Finally, if you use the Bulk Updater and find it useful, open any issues, feature requests, or PRs.

Case Study: Building TuneLink.io For Matching Music Across Services (with AI)

The two music streaming services I often swap between are Spotify and Apple Music. I prefer Spotify for a number of different reasons, the main reason being I like its music discovery algorithm more than any other service.

You have your own for your reasons.

But every now and then, there’s that case when someone sends me a song and, if I like it, I want to add it to Spotify. Or maybe I know they use Apple Music so I just want to send them a link directly to that song so they don’t have to try to find it on their own.

I know this isn’t an actual problem – it’s a minor inconvenience at best.

As the software development industry has moved so quickly with AI over the last few months (let alone the last few years), minor inconveniences become opportunities for building programs to alleviate them.

And that’s what I did with TuneLink.io. In this article, you can read what the web app does, how I built it with the help of AI, and my opinion on using AI to build something outside of my wheelhouse.


TuneLink.io: Algorithms, APIs, Tech Stack, and AI

As the homepage describes:

TuneLink allows you to easily find the same song across different music streaming services. Simply paste a link from Spotify or Apple Music, and TuneLink will give you the matching track on the other platform.

A few things off the top:

  • It only works between Spotify and Apple Music (that is, it doesn’t include any other streaming services),
  • It’s not an iOS app so there’s no share sheet for easily sharing a URL to this site,
  • I do not pay for an Apple Developer License, so the methods I used to match music between Spotify and Apple Music are as precise as possible without that API access.

This is something I built to solve an inconvenience for me, and I’m sharing it here. If it helps, great. There are some learnings around the tech stack that I share later in the article, too. Further, I discuss how AI played a part in building it and share a few thoughts on the benefits thereof.

So if you’re interested in how a backend engineer moves to using front-end services and serverless hosting, this article has you covered.

Recall, the primary inconvenience I wanted to resolve was being able to share an accurate link to a song with a friend who’s using a different music service than I do.

Similarly, I want to be able to copy a URL from a message that I receive on my phone or my desktop, paste it into the input field, and then have it generate an application link to automatically open it in my preferred music service.

It does exactly that and only that, and you can give it a try if you’re interested.

All things considered (that is, the desired architecture, how I wanted it to work, and experience with a number of LLMs), it took very little time to build. I’ve not bothered sharing the site with anyone else (mainly because it’s for me). That said, there is a GitHub repository available in which you can file issues, feature requests, pull requests, and all of the usual.

But, as of late, I’ve enjoyed reading how others in this field build these types of things, so I’m doing the same. It’s lengthy so if you’re only interested in the utility itself, you have the URL; otherwise, read on.


How I Built TuneLink.io (Algorithms, APIs, and AI)

Earlier, I said that being able to build something like this – as simple as it is – is accelerated by having several key concepts and levels of experience in place.

This includes, but is not limited to:

  • Being able to clearly articulate the problem within a prompt,
  • Forcing the LLM to ask you questions to clarify understanding and knowing how to articulate a clear response to it,
  • Knowing exactly how the algorithm should work at a high-level,
  • Getting the necessary API keys from the services needed, making sure you’re properly incorporating them into local env files, and setting up gitignore properly so as not to leak information,
  • Having a plan for how you want the app to function,
  • Preparing the necessary hosting infrastructure,
  • And knowing certain underlying concepts that can help an LLM get “un-stuck” whenever you see it stating “Ah, I see the problem,” when it definitely does not, in fact, see the problem (as the kids say, iykyk).

Okay, with that laid as the foundation for how I approached this, here’s the breakdown of the algorithm, dependencies, APIs, and the tech stack used to build and deploy this.

And remember: All TuneLink is is a single-page web app that converts URLs from one music service to another and opens the track in the opposite music service.

The Algorithm

URL Analysis and Detection

The first step in the process is determining what information is available with which to work. When a user pastes a URL into TuneLink, the application needs to:

  1. Validate that the URL is properly formatted,
  2. Check the domain to identify the source platform,
  3. Extract the unique identifiers from the URL.

For example, Spotify URLs follow patterns like:

  • https://open.spotify.com/track/{track_id}
  • https://open.spotify.com/album/{album_id}/track/{track_id}

While Apple Music URLs look like:

  • https://music.apple.com/us/album/{album-name}/{album_id}?i={track_id}

The algorithm uses regular expressions to match these patterns and extract the critical identifiers. If the URL doesn’t match any known pattern, it returns an error asking for a valid music URL.
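
To make that step concrete, here’s a minimal sketch of the detection and extraction logic in TypeScript (the language TuneLink is written in). The patterns, type names, and function name are illustrative assumptions on my part, not TuneLink’s actual source:

// Illustrative URL detection and ID extraction; not TuneLink's actual source.
type Platform = "spotify" | "appleMusic";

interface ParsedTrackUrl {
  platform: Platform;
  trackId: string;
  albumId?: string;
}

// https://open.spotify.com/track/{track_id} with an optional /album/{album_id}/ prefix.
const SPOTIFY_TRACK = /^https?:\/\/open\.spotify\.com\/(?:album\/([A-Za-z0-9]+)\/)?track\/([A-Za-z0-9]+)/;

// https://music.apple.com/{storefront}/album/{album-name}/{album_id}?i={track_id}
const APPLE_TRACK = /^https?:\/\/music\.apple\.com\/[a-z]{2}\/album\/[^\/]+\/(\d+)\?.*\bi=(\d+)/;

export function parseTrackUrl(url: string): ParsedTrackUrl | null {
  const spotify = url.match(SPOTIFY_TRACK);
  if (spotify) {
    return { platform: "spotify", albumId: spotify[1], trackId: spotify[2] };
  }

  const apple = url.match(APPLE_TRACK);
  if (apple) {
    return { platform: "appleMusic", albumId: apple[1], trackId: apple[2] };
  }

  // Unknown or malformed URL: the caller surfaces the "valid music URL" error.
  return null;
}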

Extracting Track Information

Once the program has identified the platform and extracted the IDs, it needs to gather metadata about the track:

  1. For Spotify URLs: Query the Spotify Web API using the track_id
  2. For Apple Music URLs: Query the Apple Music/iTunes API using the track_id
  3. Extract the essential information: track name, artist name, album name

Since I’m not using an Apple Developer License, the iTunes API was easier to use as it doesn’t require any privileged credentials to access it.
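
For illustration, here’s roughly what those two lookups can look like. The iTunes Lookup API is public and needs no credentials; the Spotify Web API call assumes you’ve already obtained an OAuth access token (the client-credentials flow is enough for catalog reads). The function names and return shape are mine, not TuneLink’s actual code:

// Illustrative metadata lookups; TuneLink's real code may differ.
interface TrackMetadata {
  name: string;
  artist: string;
  album: string;
  durationMs?: number;
}

// Apple Music track IDs resolve through the public iTunes Lookup API (no Apple Developer account needed).
export async function lookupItunesTrack(trackId: string): Promise<TrackMetadata | null> {
  const res = await fetch(`https://itunes.apple.com/lookup?id=${trackId}`);
  const data = await res.json();
  const track = data.results?.[0];
  if (!track) return null;

  return {
    name: track.trackName,
    artist: track.artistName,
    album: track.collectionName,
    durationMs: track.trackTimeMillis,
  };
}

// Spotify track IDs resolve through the Web API, which requires an OAuth access token.
export async function lookupSpotifyTrack(trackId: string, accessToken: string): Promise<TrackMetadata | null> {
  const res = await fetch(`https://api.spotify.com/v1/tracks/${trackId}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) return null;
  const track = await res.json();

  return {
    name: track.name,
    artist: track.artists?.[0]?.name,
    album: track.album?.name,
    durationMs: track.duration_ms,
  };
}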

This metadata becomes my search criteria for finding the equivalent track on the other platform. The more information I can extract, the better my chances of finding an accurate match. More specifically, there’s an interstitial API I used in conjunction with this information that I’ll discuss later in this article.

Cross-Platform Track Matching

Up to this point, the approach is easy enough. But this is where it gets more interesting. With the source track information now available, the program needs to find the same track on the target platform (a rough sketch follows the two lists below):

For Apple Music to Spotify conversion:

  1. Extract track name and artist from Apple Music
  2. Format a search query for the Spotify API: “{track_name} artist:{artist_name}”
  3. Send the search request to Spotify’s API
  4. Analyze the results to find the best match
  5. Create the Spotify URL from the matched track’s ID

For Spotify to Apple Music conversion:

  1. Extract track name and artist from Spotify
  2. Format a search query for the iTunes Search API: “{track_name} {artist_name}”
  3. Send the search request to iTunes API
  4. Filter results to only include songs from Apple Music
  5. Create the Apple Music URL from the matched track’s information
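
Here’s a rough sketch of those two searches, using the public endpoints and the query formats described above (again, illustrative helper names rather than TuneLink’s actual source, and the Spotify call assumes an access token is already in hand):

// Illustrative searches against the two catalogs; helper names are mine.
export async function searchSpotify(trackName: string, artistName: string, accessToken: string) {
  // Spotify supports field filters, so "{track_name} artist:{artist_name}" narrows results to the artist.
  const q = `${trackName} artist:${artistName}`;
  const url = `https://api.spotify.com/v1/search?q=${encodeURIComponent(q)}&type=track&limit=5`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
  const data = await res.json();
  return data.tracks?.items ?? [];
}

export async function searchItunes(trackName: string, artistName: string) {
  // The iTunes Search API takes a free-text term; entity=song limits results to tracks.
  const term = `${trackName} ${artistName}`;
  const url = `https://itunes.apple.com/search?term=${encodeURIComponent(term)}&entity=song&limit=5`;
  const res = await fetch(url);
  const data = await res.json();
  // Keep only song results, mirroring the "filter results to only include songs" step.
  return (data.results ?? []).filter((r: any) => r.wrapperType === "track" && r.kind === "song");
}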

The matching algorithm uses several criteria to find the best result:

  • Exact matches on track name and artist (which obviously yields the highest confidence)
  • Fuzzy matching when exact matches aren’t found
  • Fallback matching using just the track name if artist matching fails
  • Duration comparison to ensure we’ve got the right version of a song

Following a fallback hierarchy like this proved to be useful, especially when there are various versions of a song in either service. This may include a version that was recorded live, remastered during a certain year, performed live at Apple, performed live at Spotify, etc.

Ultimately, the goal is to get the closest possible track to the one available if the identical track cannot be found. And I talk about this a little more in-depth later in the article.
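
Here’s one way a scoring pass over search results might look, following the criteria above: exact matches score highest, a crude fuzzy match scores lower, and duration acts as a tiebreaker. The types, weights, and helper names are hypothetical, not TuneLink’s actual matcher:

// Hypothetical scoring pass over candidates returned by the searches above.
interface Candidate {
  name: string;
  artist: string;
  durationMs?: number;
}

const normalize = (s: string) => s.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();

export function scoreCandidate(source: Candidate, candidate: Candidate): number {
  let score = 0;

  if (normalize(candidate.name) === normalize(source.name)) score += 50;
  else if (normalize(candidate.name).includes(normalize(source.name))) score += 25; // crude fuzzy match

  if (normalize(candidate.artist) === normalize(source.artist)) score += 40;

  // Prefer candidates whose duration is within a couple of seconds of the source track.
  if (source.durationMs && candidate.durationMs && Math.abs(source.durationMs - candidate.durationMs) < 2000) {
    score += 10;
  }

  return score;
}

export function bestMatch(source: Candidate, candidates: Candidate[]): Candidate | undefined {
  return [...candidates].sort((a, b) => scoreCandidate(source, b) - scoreCandidate(source, a))[0];
}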

Result Caching and Optimization

To improve performance and reduce API calls, there’s also a system that does the following:

  1. Caches successful matches for frequently requested tracks
  2. Uses a tiered approach to searching (exact match first, then increasingly fuzzy searches)
  3. Handles common variations like remixes, live versions, and remastered tracks

This makes subsequent requests for the same track conversion nearly instantaneous.
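
A minimal sketch of what such a cache can look like, keyed by the source URL. Worth noting: in a serverless deployment each function instance keeps its own memory, so an in-process cache like this mainly benefits warm instances. The class and TTL below are illustrative:

// Illustrative in-memory cache for successful conversions, keyed by the source URL.
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

export class ConversionCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number = 24 * 60 * 60 * 1000) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}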

The purpose here is not so much to anticipate a lot of traffic as to gain experience implementing a feature in a set of tools with which I’m less familiar.

In other words, this type of functionality is something commonly deployed in other systems I’m working on but I’ve not been exposed to it in the tech stack I’ve used to build TuneLink. This is a way to see how it’s done.

Error Handling and Fallbacks

This is another area where things became more challenging: Not all tracks exist on both platforms, so the algorithm needs to handle these cases gracefully.

As such, this is how the algorithm works:

  1. If no match is found, try searching with just the track name.
  2. If still no match, try searching with normalized track and artist names (removing special characters).
  3. If no match can be found, return a clear error message.
  4. Provide alternative track suggestions when possible.

The examples in which I saw this the most were live tracks, remastered tracks, or platform-specific tracks (like Spotify Sessions).
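
Expressed as code, that fallback order might look something like the following. The search callback is a hypothetical stand-in for “run the platform search and keep only acceptable matches”:

// Hypothetical fallback chain; "search" stands in for the platform searches described earlier.
type Search<T> = (query: string) => Promise<T[]>;

export async function findWithFallbacks<T>(trackName: string, artistName: string, search: Search<T>): Promise<T | null> {
  // Strip special characters and collapse whitespace for the normalized attempt.
  const strip = (s: string) => s.replace(/[^\w\s]/g, " ").replace(/\s+/g, " ").trim();

  const attempts = [
    `${trackName} ${artistName}`,                // full query
    trackName,                                   // track name only
    `${strip(trackName)} ${strip(artistName)}`,  // normalized names
  ];

  for (const query of attempts) {
    const results = await search(query);
    if (results.length > 0) {
      return results[0];
    }
  }

  return null; // caller returns a clear error (and alternative suggestions when possible)
}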

The Full Algorithm

If you’re looking at this at a high level, or want to explain the algorithm using all of the details albeit still at a high level, it goes like this (a compact sketch follows the list):

  1. Take the input URL from user
  2. Validate and parse URL to identify source platform
  3. Extract track ID and query source platform’s API for metadata
  4. Use metadata to search the target platform’s API
  5. Apply matching logic to find the best corresponding track
  6. Generate the target platform URL from the match results
  7. Return the matching URL to the user
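
Tying those steps together, a compact, end-to-end sketch might look like this. Every helper here is hypothetical and injected, purely to mirror the numbered steps above:

// Hypothetical end-to-end flow; every helper is injected and illustrative.
interface ConversionDeps {
  parse: (url: string) => { platform: "spotify" | "appleMusic"; trackId: string } | null;
  getMetadata: (platform: string, trackId: string) => Promise<{ name: string; artist: string }>;
  searchTarget: (platform: string, name: string, artist: string) => Promise<{ url: string } | null>;
}

export async function convert(inputUrl: string, deps: ConversionDeps): Promise<{ targetUrl: string } | { error: string }> {
  const parsed = deps.parse(inputUrl); // steps 1 and 2
  if (!parsed) return { error: "Please provide a valid Spotify or Apple Music track URL." };

  const meta = await deps.getMetadata(parsed.platform, parsed.trackId); // step 3
  const target = parsed.platform === "spotify" ? "appleMusic" : "spotify";

  const match = await deps.searchTarget(target, meta.name, meta.artist); // steps 4 and 5
  if (!match) return { error: "No matching track found on the other platform." };

  return { targetUrl: match.url }; // steps 6 and 7
}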

After trying this out over several iterations, it became obvious that using only the Spotify and iTunes APIs was going to be insufficient. I needed a way to make sure the fallback mechanism would work consistently.

And that’s where a third-party API, MusicBrainz, helps to do the heavy lifting.

Matching Tracks with MusicBrainz

MusicBrainz is “an open music encyclopedia” that collects music metadata and makes it available to the public. In other words, it’s a Wikipedia for music information.

What makes it particularly valuable for TuneLink is:

  1. It maintains unique identifiers (MBIDs) for tracks, albums, and artists
  2. It provides rich metadata including alternate titles and release information
  3. It’s platform-agnostic, so it doesn’t favor either Spotify or Apple Music (or other platforms, for that matter).
  4. It has excellent coverage of both mainstream and independent music

It’s been really cool to see how the industry uses various pieces of metadata to identify songs and how we can leverage that when writing programs like this.

Integrating MusicBrainz in TuneLink

As far as the site’s architecture is concerned, think of MusicBrainz as an intermediary layer between Spotify and Apple Music. When using MusicBrainz, the program works like this:

  1. Extract track information from source platform (Spotify or Apple Music)
  2. Query MusicBrainz API with this information to find the canonical track entry
  3. Once we have the MusicBrainz ID, we can use it to search more accurately on the target platform

Using this service is what significantly improved matching between the two services because it provides more information than just the track name and the artist.
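
For illustration, a recording search against the MusicBrainz web service can look roughly like this. The endpoint, response fields, and the User-Agent expectation come from the public API; the query construction and function name are simplified assumptions:

// Illustrative MusicBrainz recording search; endpoint and fields are from the public JSON web service.
const MB_BASE = "https://musicbrainz.org/ws/2";

export async function findMusicBrainzRecording(trackName: string, artistName: string) {
  const query = `recording:"${trackName}" AND artist:"${artistName}"`;
  const url = `${MB_BASE}/recording?query=${encodeURIComponent(query)}&fmt=json&limit=5`;

  const res = await fetch(url, {
    // MusicBrainz asks clients to identify themselves; the value here is an example.
    headers: { "User-Agent": "TuneLink/1.0 (example contact)" },
  });
  if (!res.ok) return null;

  const data = await res.json();
  const recording = data.recordings?.[0];
  if (!recording) return null;

  return {
    mbid: recording.id,                         // the canonical MusicBrainz identifier
    title: recording.title,
    artist: recording["artist-credit"]?.[0]?.name,
    lengthMs: recording.length,                 // recording length in milliseconds, when known
  };
}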

Edge Cases

MusicBrainz is particularly valuable for addressing challenging matching scenarios:

  • Multiple versions of the same song. MusicBrainz helps distinguish between album versions, radio edits, extended mixes, etc.
  • Compilation appearances. When a track appears on multiple albums, MusicBrainz helps identify the canonical version
  • Artist name variations. MusicBrainz maintains relationships between different artist names (e.g., solo work vs. band appearances)
  • International releases. MusicBrainz tracks regional variations of the same content

Even still, when there isn’t a one-to-one match, it’s almost always a sure bet to fall back to the studio-recorded version of a track.

Fallbacks

To handle the case of when there isn’t a one-to-one match, this is the approach taken when looking to match tracks:

  1. First attempt. Direct MusicBrainz lookup using ISRCs (International Standard Recording Codes) when available
  2. Second attempt. MusicBrainz search using track and artist name
  3. Fallback. Direct API search on target platform if MusicBrainz doesn’t yield results

Clearly, I talked about error handling and fallbacks earlier in the article. Incorporating this additional layer made the results that much more robust.

API Optimization

To keep TuneLink responsive, I implemented several optimizations for MusicBrainz API usage:

  • Caching. I cache MusicBrainz responses to reduce redundant API calls.
  • Rate Limiting. I carefully manage the query rate to respect MusicBrainz’s usage policies.
  • Batch Processing. Where possible, I group queries to minimize API calls.

Using MusicBrainz as the matching engine creates a more robust and accurate system than would be possible with direct platform-to-platform searches alone.

This approach has been key to delivering reliable results, especially for more obscure tracks or those with complex release histories.
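
As a small, concrete example of the rate-limiting piece: MusicBrainz asks clients to keep requests to roughly one per second, so a tiny throttle like the one below (illustrative, not TuneLink’s actual implementation) is enough to stay polite:

// Illustrative promise-based throttle; default interval reflects MusicBrainz's ~1 request/second guideline.
export class Throttle {
  private last = 0;

  constructor(private minIntervalMs: number = 1100) {}

  async wait(): Promise<void> {
    const elapsed = Date.now() - this.last;
    if (elapsed < this.minIntervalMs) {
      await new Promise((resolve) => setTimeout(resolve, this.minIntervalMs - elapsed));
    }
    this.last = Date.now();
  }
}

// Usage: const throttle = new Throttle(); await throttle.wait(); then make the MusicBrainz request.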

The Tech Stack

The primary goal of the TuneLink site was to have a single-page, responsive web application that I could quickly load on my phone or my desktop and that made deployments trivially easy (and free, if possible).

Frontend Technology

TuneLink is built on a modern JavaScript stack:

  • Next.js 15. The React framework that provides server-side rendering, API routes, and optimized builds
  • React 19. For building the user interface components
  • TypeScript. For type safety and improved developer experience
  • Tailwind CSS. For styling the application using utility classes
  • Zod. For runtime validation of data schemas and type safety

This combination gave the performance benefits of server-side rendering while maintaining the dynamic user experience of a single-page application.

Backend Services

The backend of TuneLink leverages several APIs and services:

  • Next.js API Routes. Serverless functions that handle the conversion requests (a rough sketch of one follows this list)
  • MusicBrainz API. The primary engine for canonical music metadata and track matching
  • Spotify Web API. For accessing Spotify’s track database and metadata
  • iTunes/Apple Music API. For searching and retrieving Apple Music track information
  • Music Matcher Service. A custom service I built to orchestrate the matching logic between platforms. Specifically, this is the service that communicates back and forth from the music streaming services and MusicBrainz.
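
To make the API route bullet concrete, here’s a hypothetical shape for the conversion endpoint as a Next.js App Router handler. The route path, query parameter, and the convertTrackUrl helper are assumptions for illustration, not the actual TuneLink source:

// app/api/convert/route.ts (hypothetical path and shape, not the actual TuneLink source)
import { NextRequest, NextResponse } from "next/server";

// Stand-in for the Music Matcher Service described above: parse, look up metadata,
// consult MusicBrainz, search the target platform, and build the outgoing URL.
async function convertTrackUrl(sourceUrl: string): Promise<{ targetUrl: string } | null> {
  return null; // placeholder for the orchestration logic
}

export async function GET(request: NextRequest) {
  const sourceUrl = request.nextUrl.searchParams.get("url");
  if (!sourceUrl) {
    return NextResponse.json({ error: "Missing ?url= parameter" }, { status: 400 });
  }

  const result = await convertTrackUrl(sourceUrl);
  if (!result) {
    return NextResponse.json({ error: "No matching track found" }, { status: 404 });
  }

  return NextResponse.json(result);
}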

Testing and QA

To ensure reliability, TuneLink includes:

  • Jest. For unit and integration testing
  • Testing Library. For component testing
  • Mock Service Worker. For simulating API responses during testing

Hosting and Infrastructure

TuneLink is hosted on a fully serverless stack:

  • Vercel. For hosting the Next.js application and API routes
  • Edge Caching. To improve response times for frequently requested conversions
  • Serverless Functions. For handling the conversion logic without maintaining servers

This serverless approach means TuneLink can scale automatically based on demand without requiring manual infrastructure management. Of course, an application of this size has little-to-no demand – this was more of a move to becoming more familiar with Vercel, deployments, and their services.

And for those of you who have historically read this blog because of the content on WordPress, but are interested in or appreciate the convenience of Vercel, I highly recommend you take a look at Ymir by Carl Alexander. It’s serverless hosting but tailored to WordPress.

Development Environment

For local development, I use:

  • ESLint/TypeScript. For code quality and type checking
  • npm. For package management
  • Next.js Development Server. With hot module reloading for quick iteration

Why This Stack Over Others?

I chose this technology stack for several reasons:

  1. Performance. Next.js provides excellent performance out of the box
  2. Developer Experience. TypeScript and modern tooling improve code quality
  3. Scalability. The serverless architecture handles traffic spikes efficiently
  4. Maintainability. Strong typing and testing make the codebase more maintainable
  5. Cost-Effectiveness. Serverless hosting means we only pay for what we use

This combination of technologies allows TuneLink to deliver a fast, reliable service while keeping the codebase clean and maintainable. The serverless architecture also means zero infrastructure management, letting me focus on improving the core matching algorithm instead of worrying about servers.

Conclusion

The whole vibe coding movement is something to – what would you say? – behold, if nothing else, and there’s plenty of discussions happening around how all of this technology is going to affect the job economy across the board.

This is not the post nor am I the person to talk about that.

In no particular order, these are the things that I’ve found to be most useful when working with AI and building programs (between work and side projects, there are other things I can – and may – discuss in future articles):

  • I know each developer seems to have their favorite LLM, but Sonnet 3.7 has been and continues to be my weapon of choice. It’s worked well across standard backend tools with PHP, has done well assisting in programs with Python, and obviously with what you see above.
  • The more explicit – and almost demanding – you can be with the LLM, the better. Don’t let it assume anything or attempt anything without explicit approval and sign-off.
  • Having a deeper understanding of computer science, software development, and engineering concepts is helpful primarily because it helps to avoid common problems that you may encounter when building for the web.
  • Thinking through algorithms, data structures, rate limits, scaling, and so on is helpful when prompting the LLM to generate certain features.
  • There are times when an attempt at a one-shot solution is fine, and there are times when a more incremental approach to a feature is better. I find that intuition helps drive this depending on the context in which you’re working, the program you’re trying to write, and the level of experience you have with the stack with which you’re working.
  • Generating tests for the features you’re working on and/or refining should not be an afterthought. In my experience, even if an LLM generates subpar code, it does a good job writing tests that match your requirements which can, in turn, help to refine the quality of the feature in question.
  • Regardless of whether I’m working with a set of technologies with which I’m familiar or working with something on which I’m cutting my teeth, making sure that I’m integrating tests against the new features has been incredibly helpful on more than one occasion for ensuring the feature does what it’s supposed to do (and it helps to catch edge cases and “what about if the user does this?”). As convenient as LLMs are getting, they aren’t going to be acting like rogue humans. I think there’s a case to be made they often don’t act like highly skilled humans, either. But they’re extremely helpful.

This isn’t a comprehensive list, and I think the development community, as a whole, is doing a good job of sharing all of their learnings, their opinions, their hot takes, and all of that jazz.

I’ve no interest in making any type of statement that can be any type of take nor offering any quip that would fall under “thought leadership.” At this point, I’m primarily interested and concerned with how AIs can assist us and how we can interface with them in a way that forces them to work with us so we, in turn, are more efficient.

Ultimately, my goal is to share how I’ve used AI in an attempt to build something that I wanted and give a case study for exactly how it went. I could write much more about the overall approach and experience but there are other projects I’ve worked on and I am working on that lend themselves to this. Depending on how this is received, maybe I’ll write more.

If you’ve made it this far, I hope it’s been helpful and that it’s helped to cut through a lot of the commentary on AI by giving a practical look at how it was used in a project. If I ever revisit TuneLink in a substantial way, I’ll be sure to publish more about it.

Remove Empty Shortcodes 0.6.0

In 2019, I wrote a WordPress plugin whose primary feature was to prevent unused shortcodes from rendering in the content whenever a page was loaded.

It worked well enough for a little while. Then time passed.

During that time (and for one reason or another):

  • I removed all of my plugins from the WordPress plugin repository,
  • I focused on a number of different things both professionally and personally,
  • And though I let this plugin remain on GitHub (as opposed to archiving it), I stopped maintaining it.

Just recently, I came across the problem of rendering orphaned shortcodes again and I noticed someone had left an issue in GitHub. So I decided to rewrite the plugin, tag a release on GitHub, and release it into the WordPress plugin repository.


Remove Empty Shortcodes, Again

The original repository page for Remove Empty Shortcodes.

When I first wrote this plugin, I had stopped using Restrict Content Pro but the shortcodes were still littered throughout various posts in my archive. And this exposed a larger problem with shortcodes as a whole:

If a user installs a plugin that uses shortcodes and then deactivates the plugin, the shortcode will still render in the content of the post.

Obviously, this muddies the content for readers by leaving artifacts of code that’s no longer running. So rather than query the database for shortcodes that were orphaned in my content or just remove a single plugin’s shortcodes, it seemed easier to do something else: Automatically remove empty or inactive shortcodes from my WordPress content while preserving the original database entries.

And that’s what this plugin does. Specifically, it intercepts the content before it’s rendered, removes the shortcodes, then passes the rest of the data back to the main process to render the content.

The updated repository page for Remove Empty Shortcodes.

This ensures that if you ever reactivate the plugin, the shortcode still exists and will work as intended.

As of this post – and this version – the plugin only works on post and page post types.

You can read all about the details for the plugin (including the FAQ) on both GitHub and the Plugin Repository, but here’s the gist of the information relevant to this post:

How It Works

The plugin checks your content for shortcodes when pages are displayed. If it finds shortcodes that:

  • Don’t produce any output
  • Aren’t registered with WordPress
  • Are empty or inactive

Then it removes the shortcodes from the content before rendering it in the browser.

Use Cases

  • Clean up content after removing plugins that used shortcodes
  • Remove inactive shortcodes without editing posts manually
  • Maintain clean content for readers and search engines
  • Preserve original content in case you reinstall removed plugins

Conclusion

It’s been a long time since I’ve released a plugin in the WordPress Plugin Repository (regardless of how large or small) let alone bothered writing about one on this site.

But since this is one of those things that I’m using on a site with over a decade and a half of content, it may be useful for someone else, as well.

And yes, there are additional things that this plugin could do, and maybe it will, but that will largely depend on adoption, my own needs, or both.

Code Standard Selector for Visual Studio Code

A lot of the PHP that I currently write uses one of two standards: PSR12 or WordPress (though there are times when I’ll pull up another project with a different standard).

For years, my standard approach to changing code standards in the IDE has been to do the following:

  1. Install the standard that’s required (if I don’t already have it),
  2. Modify settings.json in Visual Studio Code so that it uses the same standard used in the rest of the project.

It’s a little cumbersome, but it worked well enough. Over time, I end up with a lot of settings commented out that I enable based on the project.

But this was getting tedious.

Instead, I preferred to quickly select and change coding standards within the IDE via the command palette or, really, a shortcut. So I wrote a Visual Studio Code Extension to do exactly that.


Code Standard Selector

PHP Code Standard Selector is a Visual Studio Code extension that makes it easy to switch your PHP coding standard without having to edit any settings in your IDE.

Using this extension, you can view and select the coding standards in three ways:

  • The command palette, type > Select Code Standard
  • A shortcut, CMD+ALT+S or whatever the equivalent may be on Windows and Linux,
  • The status bar, which shows the currently selected standard and gives you the ability to click on it to change the standard

All three of these options render the same menu: A list of all of the standards installed on your system. Once selected, the extension will then automatically set that standard as the active standard and apply it to your project.

Prerequisites

Note, however, there are a few prerequisites to use this extension. Code Standard Selector assumes – and requires – you have the following set up on your system:

  • PHP CodeSniffer, which is usually installed via Composer,
  • PHP Sniffer & Beautifier (abbreviated as PHPSAB in Visual Studio Code), which is installed via the Extensions Marketplace.

And if PHP CodeSniffer is installed at the project level, it’s easy enough to update the paths to phpcs and phpcbf in your User Settings or Workspace Settings.

Installing The Extension

You can find it in the Visual Studio Code Marketplace in your browser or by searching for “Select Code Standard” in the Extensions Marketplace in the IDE itself.

Or, if you prefer, you can download the latest .vsix release from the GitHub repository (where you can also grab the code, open issues, feature requests, and all of the usual options provided by a repository).

How It Works

Select Code Standard will check to make sure that the PHP Sniffer & Beautifier is installed and, if not, prompt you to install it before allowing you to actually use the extension.

Obviously, installing that particular extension implies you have at least one set of coding standards installed on your system.

Once installed, Select Code Standard will then generate a list of all standards installed on your system (by using phpcs -i) and use that to render the list of available standards.

When you select a standard, it will then use the value of the standard to tell PHP Sniffer & Beautifier what to use and it will update the extension and status bar with the standard currently in use.
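
For the curious, that flow can be sketched with the standard VS Code extension APIs like so. This is an illustrative sketch rather than the extension’s actual source; the function name and the phpcs path handling are simplified:

// Illustrative sketch of the selection flow; not the extension's exact source.
import * as vscode from "vscode";
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);

export async function selectCodeStandard(phpcsPath: string = "phpcs"): Promise<void> {
  // `phpcs -i` prints something like:
  // "The installed coding standards are PEAR, PSR1, PSR2, PSR12, Squiz, Zend and WordPress"
  const { stdout } = await execAsync(`${phpcsPath} -i`);
  const standards = stdout
    .replace(/^The installed coding standards are/i, "")
    .split(/,| and /)
    .map((s) => s.trim())
    .filter(Boolean);

  const selected = await vscode.window.showQuickPick(standards, {
    placeHolder: "Select a PHP coding standard",
  });
  if (!selected) return;

  // Tell PHP Sniffer & Beautifier which standard to use for this workspace.
  await vscode.workspace
    .getConfiguration("phpsab")
    .update("standard", selected, vscode.ConfigurationTarget.Workspace);
}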

Example Configuration

If you’ve not used PHP Sniffer & Beautifier before and you’re looking to get up and running quickly, here’s an example of my configuration in settings.json:

"phpsab.executablePathCS": "/Users/tommcfarlin/.composer/vendor/bin/phpcs",
"phpsab.executablePathCBF": "/Users/tommcfarlin/.composer/vendor/bin/phpcbf",
"phpsab.fixerEnable": true,
"phpsab.snifferShowSources": true,
"phpsab.standard": "PSR12",
"php.validate.run": "onSave",
"": {
    "editor.formatOnSave": true
},

Notice the line that contains phpsab.standard. This is the one that Select Code Standard will modify when you select your own standard from the extension’s interface.

Issues, Requests, Future Versions, etc.

I built this extension for me because I wanted to have an easy way to quickly change standards (and because I’d never built an extension for Visual Studio Code before).

If you’re a developer using PHP and have a similar set up – or are looking for a way to update your set up to something that works well with the aforementioned configuration – maybe this extension will help.

Further, I’ve set up templates in the GitHub repository for opening issues, bug reports, feature requests, and so on. You can read more about the plugin in the README, as well.

Finally, although the extension can be automatically updated from within the Extensions Marketplace, each version will be released on GitHub prior to deploying in the marketplace. So if you typically follow – or star – repositories to track development, that’s an option.

With that said, I’m already using Select Code Standard and it’s serving its purpose exactly as I need. If it works for you, great. And if you have issues, requests, or anything else, please open an issue.

Move Fast but Understand Things

In Beware of the Makefile Effect, the author defines the phrase as such:

Tools of a certain complexity or routine unfamiliarity are not run de novo, but are instead copy-pasted and tweaked from previous known-good examples.

If you read the article, you’ll see that there are a number of examples given as to what is meant by the phrase.

Originally, makefiles were files used for C (or C++) build tools to help assemble a program. This is not unlike:

Just as developers have long been susceptible to the ‘Makefile Effect’ when it comes to configuration files, the rise of generative AI tools brings a new risk of compounding a lack of understanding. Like copy-pasting Makefiles, using AI-generated code without fully following how it works can lead to unintended consequences.

Though it absolutely helps us move faster in building The Thing™️, it’s worth noting: many of these configuration files are the result of taking a working version, copying and pasting it into our project, tweaking a few things until it works, and then deploying it.

As it currently stands, we may not be copying and pasting pre-existing files, but generative AI may be close enough (if not a step further): It produces what we need and, if it doesn’t work, we can just tell it to keep tweaking the script based on whatever error is returned until we have something that works.

It’s obviously not limited to configuration files, either. The same goes for functions, classes, libraries, or full programs.

Again, the advantage this gives us now versus just a few years ago is great but failure to understand what’s being produced has compounding effects.

To that end, prompt the LLM to explain what each line or block or function is actually doing and then consider adding comments in your own words to explain it. This way, future you, or someone else, will have that much more context available (versus needing to feed the code back into an LLM for explanation) whenever the code is revisited.

Perhaps this will help to resist the Makefile Effect as well as a lack of understanding of whatever code is being produced and ultimately maintained.
