In WordPress, most migrations can be easily done by:
- Using a web-based utility such as WP Migrate DB Pro or phpMyAdmin,
- Using a database front-end application like TablePlus,
- Or even using the `mysql` command line to import an SQL dump.
None of these are bad options, but each presents its own set of challenges, especially with large migrations:
- web-based applications can timeout,
- front-end applications may lack the functionality to give insight into what’s going on (or don’t clearly explain what something such as a lock means in the process list),
- and terminal applications like `mysql` can do the job but give very little insight into where the status of the import stands.
If you’re working with a relatively small dataset, then any of the above options would be fine (and I’m usually partial to TablePlus 🙂), or the command line. But when importing a database that was roughly 30GB in size, I needed something else to help:
- A web application would time out,
- A front-end application lacked feedback on time or progress,
- A terminal application was just waiting for the process to finish with no insight as to what’s going on.
And this is where `pv` comes into play.
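Because `pv` sits in the middle of a pipeline and reports bytes transferred, throughput, and an ETA, it gives exactly the feedback the other approaches lack. A minimal sketch (the database name and dump filename below are placeholders, not from any real setup):

```shell
# Stream a SQL dump into MySQL with a progress bar, throughput, and ETA.
# 'wordpress_db' and 'dump.sql' are placeholders for your own values.
pv dump.sql | mysql -u root -p wordpress_db

# This also works on compressed dumps; pv reports progress on the .gz file:
pv dump.sql.gz | gunzip | mysql -u root -p wordpress_db
```

Since `pv` knows the size of the file it reads, it can display a percentage complete, which is what makes it so useful for a 30GB import.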
TL;DR: In this quick tip, I provide a simple query for determining the size of a database (including all of its tables). This is useful for determining which tables are the largest, especially if there are custom tables in your WordPress installation, and how large they actually are.
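One way to get those numbers is to query `information_schema` directly. The sketch below (the database name is a placeholder) lists each table with its data-plus-index size in megabytes, largest first:

```shell
# Per-table sizes for a given database, largest first.
# Replace 'wordpress_db' with your own database name.
mysql -u root -p -e "
  SELECT table_name,
         ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
  FROM   information_schema.TABLES
  WHERE  table_schema = 'wordpress_db'
  ORDER  BY (data_length + index_length) DESC;"
```

Wrapping the size expression in `SUM()` (and dropping `table_name`) gives the total size of the whole database instead of a per-table breakdown.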
When working with queries in WordPress, it’s always best to prepare the query before running it; however, if you happen to be interfacing with a third-party API or working with another variant of SQL (like Transact-SQL), then things may work a little differently.
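For reference, here is what preparing a query looks like in plain MySQL itself, using a server-side prepared statement; the table and search term are illustrative only, and in WordPress you would reach for `$wpdb->prepare()` instead:

```shell
# Server-side prepared statement in plain MySQL.
# The table name and search term here are hypothetical examples.
mysql -u root -p -e "
  PREPARE stmt FROM 'SELECT ID, post_title FROM wp_posts WHERE post_title LIKE ?';
  SET @term := '%hello%';
  EXECUTE stmt USING @term;
  DEALLOCATE PREPARE stmt;"
```

The placeholder keeps the search term out of the SQL string itself, which is the same idea `prepare` serves in WordPress, even though the syntax differs across SQL variants.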
If only databases were really this neat.
The TL;DR of all of this is that combining sprintf and LIKE in T-SQL takes some strategic handling. And the code sample in the next section should provide all you need.
At some point in every developer’s career, there is going to be a time when you do something that tanks production.
- Maybe you’ll push code that ends up busting a cache that serves data to millions of people,
- Perhaps you’ll update an application and end up blowing away information that’s not backed up,
- Or maybe you’ll push a change that “works on your machine” but completely hoses the source control repository.
And there are plenty of other examples. I’m sure you can quickly name five more yourself.
I’ve committed (pun intended, sort of) my fair share of all of the above, but one of the things I see from people working in our space – that is, those who work with database-backed web applications – is a lack of understanding of database organization at the file system level and of how it’s possible to reconstruct data even when you don’t have a standard backup to work from.
In this post, I’m going to take a deep dive into MySQL database organization at the file system level, show how you can restore information from those files rather than from a backup file should you find yourself in that situation, and provide references (or bookmarks) should you need them.
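As a starting point for that deep dive, you can ask the server where it keeps its files. Each database is a directory under the data directory, and with `innodb_file_per_table` enabled, each InnoDB table gets its own `.ibd` file. The path below is a typical Linux default, not a guarantee, and the database name is a placeholder:

```shell
# Ask MySQL where its data directory lives.
mysql -N -u root -p -e "SELECT @@datadir;"

# A typical layout (path varies by platform and configuration):
#   /var/lib/mysql/wordpress_db/wp_posts.ibd  <- InnoDB data + indexes for wp_posts
ls -lh /var/lib/mysql/wordpress_db/
```

Knowing which `.ibd` file backs which table is what makes file-level recovery possible in the first place.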
When working with WordPress databases, we’re pretty fortunate to have tools like WP Migrate DB Pro, phpMyAdmin, Sequel Pro, and so on. But there are times when importing a large database is a bit more demanding than what our front-end utilities can handle.
Perhaps we have to change our PHP configuration, perhaps we’re dealing with hundreds of megabytes worth of data, perhaps we don’t have the kind of control over the environment that we need.
Whatever the case, we always have the command line. And if you’re working with WordPress and MAMP and you’re faced with importing a large database, then you may need to stop tweaking your PHP settings and simply import the file via the command line.
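With MAMP, that means using the `mysql` binary MAMP bundles rather than any system-wide one. The path, port, and credentials below are MAMP’s common defaults, and the database and dump names are placeholders, so adjust everything to your own install:

```shell
# Import a dump using MAMP's bundled mysql client.
# MAMP's default credentials are root/root, and its MySQL port is often 8889.
/Applications/MAMP/Library/bin/mysql -h 127.0.0.1 -P 8889 -u root -proot wordpress_db < dump.sql
```

Because this goes straight to the server, none of PHP’s upload or execution limits apply; the import runs as long as it needs to.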