Fixing MySQL Database Connection Error in WordPress

A Guide to Fixing Database Issues When Your WordPress Site Goes Down

Problem

When your WordPress site suddenly shows “Error establishing a database connection,” it’s typically a sign that WordPress can’t communicate with its MySQL database. In our case, this happened on a LAMP stack (Linux, Apache, MySQL, PHP) running on a Google Cloud VM. This error can be particularly stressful as it takes down your entire site, so let’s walk through how we diagnosed and fixed it.
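Before digging into the server, it's worth ruling out a simple credentials problem: WordPress reads its database settings from wp-config.php, and a wrong password produces the same error. A minimal pre-check, assuming the default Apache document root of /var/www/html (adjust the path for your install):

    # Show the database settings WordPress is configured with
    grep -E "DB_NAME|DB_USER|DB_PASSWORD|DB_HOST" /var/www/html/wp-config.php

    # Try connecting with those same credentials (replace wordpress_user with
    # the DB_USER value shown above); if this also fails, the problem is on
    # the MySQL side rather than in wp-config.php
    mysql -u wordpress_user -p -h localhost -e "SELECT 1;"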

Diagnosis

Let’s break down each step of our investigation and what each command told us; the full sequence is collected in an annotated sketch after the list:

  • First Steps: After SSHing into the VM (using Google Cloud’s SSH in browser or your local terminal), we needed to check if MySQL was even running. The command sudo systemctl status mysql showed MySQL was down – our first major clue.
  • Starting MySQL: We tried sudo systemctl start mysql, but it failed silently, which is common with MySQL: the service often dies without telling you why. So we ran sudo mysqld directly, which was more informative: it refused to start because it was running as root (MySQL won’t run as root for security reasons).
  • Finding the Logs: Here’s where it gets interesting. MySQL’s logs are crucial for troubleshooting, but their location isn’t always obvious. We used sudo grep -r "error\|log" /etc/mysql/ – this recursively searches the MySQL config directory for any mentions of ‘error’ or ‘log’, helping us find the log file at /var/log/mysql/error.log.
  • Permission Investigation: The commands ls -l /var/lib/mysql and ls -l /var/log/mysql showed us who owned these directories. This is crucial because MySQL needs proper ownership to function. When we saw “permission denied”, we knew we had a possible cause.
  • Fixing Permissions: We used sudo su - to become root (needed for ownership changes), then chown -R mysql:mysql /var/lib/mysql /var/log/mysql to give MySQL ownership of its directories. The -R flag recursively changes ownership of all files and subdirectories.
  • The Real Problem: Even after fixing permissions, MySQL wouldn’t start. The command sudo tail -n 1000 /var/log/mysql/error.log showed us why: “No space left on device”. This command reads the last 1000 lines of the log file, where we found our smoking gun.
  • Disk Space Check: df -h confirmed it – our disk was 100% full. The -h flag shows sizes in human-readable units (GB, MB, etc.) instead of raw block counts.
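Put together, the investigation can be replayed as the short sequence below. This is a sketch of the commands above, annotated with what each one revealed in our case, using the Debian/Ubuntu default paths:

    # Service state: showed MySQL was down
    sudo systemctl status mysql

    # Starting via systemd failed silently...
    sudo systemctl start mysql
    # ...so run the daemon directly, which complained about running as root
    sudo mysqld

    # Locate the error log via the config files (ours was /var/log/mysql/error.log)
    sudo grep -r "error\|log" /etc/mysql/

    # Check ownership of the data and log directories (they should be mysql:mysql)
    ls -l /var/lib/mysql
    ls -l /var/log/mysql

    # Hand them back to the mysql user if they aren't
    sudo chown -R mysql:mysql /var/lib/mysql /var/log/mysql

    # Read the end of the error log (this is where "No space left on device"
    # showed up) and confirm with a disk space check
    sudo tail -n 1000 /var/log/mysql/error.log
    df -h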

Solution

While the Google Cloud Console offers disk resizing options, we opted for a quicker solution: cleaning up unnecessary log files. Here’s why and how:

  1. Finding Space Hogs: Used du -h / | sort -rh | head -20 to identify the largest directories. This pipeline is incredibly useful:
    – du -h / reports the disk usage of every directory in human-readable format
    – sort -rh sorts those sizes from largest to smallest
    – head -20 shows only the top 20 entries
  2. Cleaning Up: Found several old log files we could safely remove (see the cleanup sketch after this list), particularly:
    – Old compressed logs in /var/log
    – Rotated MySQL logs
    – Apache access logs from previous months
  3. Restarting MySQL: After clearing space, sudo systemctl start mysql worked perfectly.
  4. Verification: The WordPress site came back online immediately after MySQL started.
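For reference, the cleanup looked roughly like the following. This is a sketch, not an exact transcript: the find patterns are examples, so always list the matches (and have a backup) before deleting anything:

    # List compressed, rotated logs older than 30 days before touching anything
    sudo find /var/log -type f -name "*.gz" -mtime +30 -ls

    # Rotated but uncompressed logs usually end in a number (access.log.1, etc.)
    sudo find /var/log -type f -name "*.[0-9]" -mtime +30 -ls

    # Once you're sure nothing in those lists is still needed, delete them
    sudo find /var/log -type f -name "*.gz" -mtime +30 -delete

    # Confirm the space actually came back
    df -h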

Key Commands Used

Here’s a quick reference of all commands used, with brief explanations of what each one does:

  • sudo systemctl status mysql – Check MySQL’s current state
  • sudo systemctl start mysql – Attempt to start MySQL service
  • sudo mysqld – Try to start the MySQL daemon directly (useful for seeing the real error message)
  • sudo grep -r "error\|log" /etc/mysql/ – Search for log file locations in MySQL config
  • ls -l /var/lib/mysql – Check MySQL data directory permissions
  • ls -l /var/log/mysql – Check MySQL log directory permissions
  • sudo su - – Become root user for system-level changes
  • chown -R mysql:mysql /var/lib/mysql /var/log/mysql – Fix directory ownership
  • sudo tail -n 1000 /var/log/mysql/error.log – Read recent MySQL error messages
  • df -h – Check disk space usage in human-readable format
  • du -h / | sort -rh | head -20 – Find the largest directories eating up space

Additional Issue: WordPress Upload Problems

After fixing the database connection, we hit another issue: WordPress showed “Uploading: Missing a temporary folder” when trying to upload media files. Here’s how we fixed it (the commands are sketched after the list):

  1. Created a phpinfo() file to find which php.ini Apache actually loads
  2. In php.ini, uncommented and set: upload_tmp_dir = /tmp
  3. Restarted Apache: sudo systemctl restart apache2
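Roughly what that looked like on the command line. The document root and php.ini path below are assumptions for a default Debian/Ubuntu LAMP install, so adjust the PHP version and paths for your system:

    # 1. Drop a phpinfo() page into the docroot to see which php.ini Apache loads
    #    (remove it again afterwards, since it exposes server details)
    echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php

    # 2. Edit the "Loaded Configuration File" reported on that page, for example:
    sudo nano /etc/php/8.1/apache2/php.ini
    #    ...and make sure this line exists and is uncommented:
    #    upload_tmp_dir = /tmp

    # 3. Restart Apache so the change takes effect, then clean up
    sudo systemctl restart apache2
    sudo rm /var/www/html/info.php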

Lessons Learned

  • Disk Space is Critical: Always check disk space first when databases fail – it’s a common culprit that’s easy to overlook.
  • Log Files Add Up: Regular log rotation and cleanup are crucial, especially on smaller VMs (see the example logrotate rule after this list).
  • Backup First: Before deleting any files, even logs, make sure you have recent backups.
  • Test Uploads: After disk cleanup, always verify WordPress file uploads work.
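On the log-rotation point: Debian/Ubuntu already ship logrotate rules for Apache and MySQL under /etc/logrotate.d/, so the main job is to check they are in place and add a rule for anything custom. A minimal illustration for a hypothetical application log:

    # /etc/logrotate.d/myapp  (hypothetical custom application log)
    # Rotate weekly, keep four compressed copies, tolerate missing or empty logs
    /var/log/myapp/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }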


Andy N

Information Technology Support Analyst with over seven years of experience (in the telecommunications and manufacturing industries) ranging from user support to administering and maintaining core IT systems.
