» Published on
We are closing this incident. If any issue related to this infrastructure problem arises again, a new incident post will be opened.
» Updated
Deployments are currently impacted by the disk slowdown; our engineers are working to mitigate it.
» Updated
The incident has been largely mitigated by our team and by Outscale's team, but higher latency can still be observed during 2 daily slots:
During these periods your database may behave slower than usual. If this happens, we encourage you to reach out to our support.
The root cause of these performance issues has been pinpointed to a bug in the firmware of a storage appliance at Outscale, affecting a subset of the disks used by our infrastructure. We are in close contact with them and will update this ticket over time.
» Updated
All disk activity seems normal now; no slowdowns have been detected since 00:25 last night.
» Updated
The server restarted correctly and performance is much better. We are monitoring the situation.
» Updated
The recovery process is still ongoing. A server had to be restarted: databases on business plans will experience a failover, and those on starter plans a short downtime.
» Updated
Performance is improving on our cluster as our provider continues the migration. All databases are up and running; however, some databases might still show minor performance issues.
» Updated
Our provider Outscale identified faulty equipment. They are migrating our impacted data volumes to other equipment; this process will take some time. Once the operation is complete, our team will keep monitoring the performance of our disks, and we will keep working with Outscale to ensure we get the same performance as before the incident.
You can expect performance on your impacted databases to improve within a few minutes.
» Updated
We are still investigating the source of this issue with Outscale. We are also working on mitigation workarounds to lessen the impact on your databases.
» Updated
MongoDB databases seem to be more impacted than other database types: if one member of a cluster is affected by the disk performance issue, the whole cluster is slowed down to the speed of its slowest member.
We are still actively working on the issue with our provider and will keep you updated when we have more information.
» Updated
Some databases hosted in the osc-fr1 region are suffering degraded performance. We are investigating I/O issues on some disks, in close contact with our provider, to resolve this.