

MySQL server has gone away


!Friendica Support Has anyone experienced "MySQL server has gone away"? Out of nowhere, with MariaDB still running and nothing more in the Friendica logs?

2025-04-12T14:46:52Z app [ERROR]: DB Error {"code":2006,"error":"MySQL server has gone away","params":"SELECT id, baseurl FROM contact WHERE (uid = 0 AND nurl = 'http://programming.dev/c/programmer_humor') LIMIT 1"} - {"file":"Database.php","line":675,"function":"p","request-id":"67fa7af930636","stack":"Database::p (1494), Database::select (1379), Database::selectFirst (420), DBA::selectFirst (552), Contact::getBasepath (591), Contact::isLocal (368), Contact::getByURL (1316), Contact::getIdForURL (185), Tag::getTargetType (1409), Processor::storeReceivers (961), Processor::processContent (485), Processor::createItem (726), Processor::createActivity (846), Receiver::routeActivities (750), Receiver::processActivity (156)","uid":"4ee643","process_id":2984}


in reply to Ⓜ3️⃣3️⃣ 🌌

@Ⓜ3️⃣3️⃣ 🌌 this usually means your CPU load is too high, or the network latency is spiking (if it's a remote DB). In theory it can happen when every core is tied up by a thread for too long, so the SQL server can't finish Friendica's query before the timeout.
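If you want to see whether that's going on, one rough check is the server's thread counters. This is just a sketch; the service name "db" and root credentials are placeholders for whatever is in your docker-compose.yml:

# Threads_running climbing close to your core count points at saturation
docker compose exec db mariadb -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Threads_%';"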
in reply to silverwizard

Thanks for the hint, maybe, but I didn't get any high-CPU-load warning from monitoring. The database seemed responsive during these error messages, but the website returned error 500.

I restarted the Friendica and MariaDB containers because I didn't know better (same host).

I will watch the CPU load and how MariaDB actually responds to some hand-made queries next time. Until now this has happened only once in the 3 or 4 months my instance has been running.
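For the record, a minimal hand-made check I could run next time would look something like this (assuming the MariaDB service is called "db" in docker-compose.yml and using whatever credentials are configured there):

# Does the server still answer a trivial query, and what is the host load right then?
docker compose exec db mariadb -uroot -p -e "SELECT 1;"
uptime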


in reply to Ⓜ3️⃣3️⃣ 🌌

@Ⓜ3️⃣3️⃣ 🌌 yeah, correlating it is hard. I find thread saturation is the more common issue: your CPU has the speed, but too many things are competing for the threads.
in reply to Ⓜ3️⃣3️⃣ 🌌

I used to get this a lot and discovered that the OOM killer was often killing MariaDB to protect the OS when all memory was consumed, even though I had 12 GB of free memory after a reboot. Search syslog for "OOM" to see if the culprit is the OOM killer.
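For example (the syslog path varies by distro; journalctl works on systemd hosts):

# Kernel messages around the outage, filtered for OOM killer activity
grep -i "oom" /var/log/syslog
journalctl -k | grep -i "out of memory"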

The cause was that I was using MariaDB's standard memory manager, which is very bad at preventing memory fragmentation. Although I had a lot of memory left, it was too fragmented to use. My instance has 330 active users, so the DB is extremely busy, and it would be just a few days before I'd lose the database, over and over.

The solution was to change the memory manager to jemalloc. Now I no longer have this issue.
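One way to verify the switch actually took effect is to ask the server which allocator it loaded (adjust the client credentials to your setup):

# Should report a jemalloc library path instead of "system" after the change
mariadb -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'version_malloc_library';"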

in reply to Jerry on Friendica

Good call 🤔 ! I can see the OOM killer taking down MariaDB's container at every point in time I got a downtime alert from web monitoring.

Well, 12 GB of RAM is not enough for a single-user instance? That's surprising. I will put a memory limit on MariaDB's container.


in reply to TekNo ⚝ aEvl

Like this in the db container section:

deploy:
  resources:
    limits:
      memory: 6G # Hard limit (container will be killed if it exceeds this)
    reservations:
      memory: 4G # Soft limit (Docker tries to ensure at least this much)


At least now it will be killed at specific limits.
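To confirm the limit is actually applied, a quick check is the MEM USAGE / LIMIT column, which should show the 6 GiB cap for the db container:

docker stats --no-stream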

I tried adding that too, but MariaDB doesn't start (endless restart loop):

command: mariadbd --malloc-lib=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
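A hedged first check when that happens ("db" is an assumed service name): confirm the library really exists at that path inside the image, and read the startup error from the logs; the loop may simply mean jemalloc isn't installed in the image or lives at a different path.

# Fails with "No such file or directory" if the image doesn't ship jemalloc at that path
docker compose run --rm db ls -l /usr/lib/x86_64-linux-gnu/libjemalloc.so.2
docker compose logs db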
