Duplicate entries in activitylog_logclass table

A couple of clients reported errors related to duplicate entries in the activitylog_logclass database table during database imports or upgrades.

The error looks similar to:

Operations to perform:
  Apply all migrations: activitylog, admin, auth, authtoken, billing, conf, contenttypes, core, cpanel, google_authenticator, logger, notifications, openstack, osbackup, osbilling, pkm, reports, reseller, servers, service_catalogue, sessions, sites, sms_authenticator, tasklog, todo
Running migrations:
  Applying activitylog.0006_alter_logclass_unique_together...Traceback (most recent call last):
  File "/var/webapps/fleio/env/lib/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
    return self.cursor.execute(sql, params)
  File "/var/webapps/fleio/env/lib/python3.10/site-packages/django/db/backends/mysql/base.py", line 75, in execute
    return self.cursor.execute(query, args)
  File "/var/webapps/fleio/env/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
    res = self._query(query)
  File "/var/webapps/fleio/env/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
    db.query(q)
  File "/var/webapps/fleio/env/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
    _mysql.connection.query(self, query)
MySQLdb.IntegrityError: (1062, "Duplicate entry 'staff_log_in_denied-error-1' for key 'activitylog_logclass_name_type_category_id_1b8d09c9_uniq'")

This seems to be a MariaDB bug, although we have not been able to match it to a known MariaDB issue. It might be related to forced server shutdowns (for example, on power loss). Even in that case, MariaDB should still enforce the UNIQUE constraint, yet it clearly does not, since the activitylog_logclass table ends up with duplicate entries, that is, multiple records with the same name and type field values.
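
To inspect the affected rows yourself, you can run a query along these lines inside fleio mysql. This is only a sketch: the column names (name, type, category_id) are assumed from the unique key named in the error message, so adjust them if your schema differs:

fleio mysql

-- list the (name, type, category_id) combinations that appear more than once
SELECT name, type, category_id, COUNT(*) AS copies
FROM activitylog_logclass
GROUP BY name, type, category_id
HAVING COUNT(*) > 1;

exit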

How to fix duplicate entries in activitylog_logclass table

The following steps are confirmed to permanently fix the problem.

Warning

Note that this procedure involves downtime, since Fleio is stopped and the production database is recreated.

It is highly recommended that you test this procedure in a non-production environment before running it on production.

  1. Stop Fleio

Run this if you have version 2022.10.0 or newer:

fleio stop

For older versions, run:

sudo -i -u fleio
cd ~/compose
docker compose down
  2. Start db container and remove duplicate records

# if you haven't already switched to the fleio user:
sudo -i -u fleio
cd ~/compose
docker compose up -d db
fleio bash
. ../env/bin/activate
python fleio/utils/scripts/cleanup_duplicate_logclasses.py

If there is a problem to fix, the script reports the duplicate entries it found. Otherwise it prints “Found 0 duplicate log classes”; in that case, run exit to leave the “fleio bash” container, run fleio restart and abort this procedure.

If you do have duplicate entries, exit the container and continue:

exit
  3. Export database

If you have Fleio version 2022.10.0 or newer, you have the fleio backup command available:

fleio backup now /home/fleio/fleio.sql.gz

Otherwise, use fleio mysqldump:

fleio mysqldump | gzip > fleio.sql.gz
  4. Drop and recreate database

fleio mysql

DROP DATABASE fleio;
CREATE DATABASE fleio CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

exit
  5. Import database

Import the Fleio database that you exported at step 3:

# make sure you're running as the "fleio" user; if not, run sudo -i -u fleio first
db_pass=$(cat /home/fleio/compose/secrets/.db_password)
cd /home/fleio/compose
gunzip -c /home/fleio/fleio.sql.gz | docker compose exec -T db mysql fleio -u fleio -p"$db_pass"
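
Optionally, before starting Fleio again, you can open fleio mysql and run a couple of quick checks to confirm the import completed; the activitylog_logclass count below is just an example table:

fleio mysql

-- the Fleio tables should be listed again
SHOW TABLES;
-- should return the number of rows kept after the cleanup
SELECT COUNT(*) FROM activitylog_logclass;

exit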
  6. Start Fleio

fleio restart

This should permanently fix the problem. To confirm this, you can later run:

fleio bash
. ../env/bin/activate
python fleio/utils/scripts/cleanup_duplicate_logclasses.py

It should say “Found 0 duplicate log classes”.
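
You can also re-run the duplicate-check query shown at the beginning of this article, or list the unique indexes on the table to confirm the constraint is in place again (this is only a sanity check and assumes the default Fleio schema):

fleio mysql

-- a unique index covering name, type and category_id should be listed
SHOW INDEX FROM activitylog_logclass WHERE Non_unique = 0;

exit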