MySQL NDB Cluster 7.6 Release Notes

2 Changes in MySQL NDB Cluster 7.6.34 (5.7.44-ndb-7.6.34) (2025-04-16, General Availability)

MySQL NDB Cluster 7.6.34 is a new release of NDB 7.6, based on MySQL Server 5.7 and including features in version 7.6 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.

Obtaining NDB Cluster 7.6.  NDB Cluster 7.6 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.

For an overview of changes made in NDB Cluster 7.6, see What is New in NDB Cluster 7.6.

This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.44 (see Changes in MySQL 5.7.44 (2023-10-25, General Availability)).

Bugs Fixed

  • InnoDB: Fixed an issue relating to reading index_id values. (Bug #36993445, Bug #37709706)

  • Replication: In some cases, MASTER_POS_WAIT() did not perform as expected. (Bug #36421684, Bug #37709187)

  • mysqldump did not escape certain special characters properly in its output. With this fix, mysqldump now follows the rules as described in String Literals. (Bug #37540722, Bug #37709163)

  • API node failure is detected by one or more data nodes; data nodes detecting API node failure inform all other data nodes of the failure, eventually triggering API node failure handling on each data node.

    Each data node handles API node failure independently; once all internal blocks have completed cleanup, the API node failure is considered handled, and, after a timed delay, the QMGR block allows the failed API node's node ID to be used for new connections.

    QMGR monitors API node failure handling, periodically generating warning logs for API node failure handling that has not completed (approximately every 30 seconds). These logs indicate which blocks have yet to complete failure handling.

    This enhancement improves logging when handling stalls, particularly with regard to the DBTC block, which must roll back or commit and complete the API node's transactions, and release the associated COMMIT and ACK markers. In addition, the time to wait for API node failure handling is now configurable using the ApiFailureHandlingTimeout data node configuration parameter; after this number of seconds, handling is escalated to a data node restart. (Bug #37524092)

    References: See also: Bug #37469364.
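    The new parameter is set in the [ndbd default] section of the cluster configuration file, as with other data node parameters; a minimal sketch, assuming a 900-second limit (the value shown is illustrative, not a documented default):

    ```ini
    [ndbd default]
    # Escalate API node failure handling to a data node restart after this
    # many seconds of incomplete handling (illustrative value)
    ApiFailureHandlingTimeout=900
    ```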

  • When a data node hangs during shutdown, reasons for this may include I/O problems on the node, in which case the thread shutting down hangs while operating on error and trace files, or an error in the shutdown logic, where the thread shutting down raises a Unix signal and causes a deadlock. When such issues occur, users might observe watchdog warnings in the logs referring to the last signal processed; this could be misleading in cases where a (different) preceding cause had actually triggered the shutdown.

    To help pinpoint the origin of such problems if they occur, we have made the following improvements:

    • Added a new watchdog state, shutting down. This is set early enough in the error handling process that all watchdog logging of shutdown stalls correctly attributes the delay to the shutdown, rather than to a problem in execution.

    • We have also modified the watchdog mechanism to be aware of shutdown states, and use a more direct path—which is less likely to stall—to force the data node process to stop when needed.

    (Bug #37518267)
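    The effect of a shutdown-aware watchdog can be sketched as follows. This is a hypothetical illustration, not the actual NDB implementation; the state names and messages are assumptions:

    ```cpp
    // Sketch: a watchdog that knows about a distinct "shutting down" state,
    // so a stalled thread is reported as a shutdown delay rather than as a
    // problem in normal execution. Illustrative only; not NDB source code.
    #include <cassert>
    #include <string>

    enum class ThreadState { Executing, ShuttingDown };

    // Returns the warning the watchdog would log for a thread that has not
    // reported progress within its deadline.
    std::string watchdog_warning(ThreadState state)
    {
        if (state == ThreadState::ShuttingDown)
            // Attribute the stall to the shutdown itself.
            return "Thread delayed while shutting down";
        // Without the new state, any stall is blamed on execution, which is
        // misleading when a shutdown is already in progress.
        return "Thread stuck in execution";
    }

    int main()
    {
        assert(watchdog_warning(ThreadState::Executing)
               == "Thread stuck in execution");
        assert(watchdog_warning(ThreadState::ShuttingDown)
               == "Thread delayed while shutting down");
        return 0;
    }
    ```
    
    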

  • Signal dump code run when handling an unplanned node shutdown sometimes exited unexpectedly when speculatively reading section IDs which might not be present. (Bug #37512526)

  • The LQH_TRANSCONF signal printer did not validate its input length correctly, which could cause the data node process to exit unexpectedly. (Bug #37512477)
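    The class of fix in the two items above can be sketched as validating the received signal length before reading fixed-position words. This is a hypothetical illustration, not the actual NDB printer code; the field layout and minimum length are assumptions:

    ```cpp
    // Sketch: a signal printer that checks the input length against the
    // minimum the format requires, instead of trusting the sender.
    // Illustrative only; not NDB source code.
    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    constexpr uint32_t MIN_SIGNAL_LENGTH = 3;  // assumed minimum word count

    // Returns false (printing nothing) when the signal is too short to be
    // decoded safely; reading past a short signal could crash the process.
    bool print_signal(const uint32_t *data, uint32_t len)
    {
        if (data == nullptr || len < MIN_SIGNAL_LENGTH)
            return false;  // reject malformed input rather than overread
        std::printf("transId: 0x%x 0x%x status: %u\n",
                    data[0], data[1], data[2]);
        return true;
    }

    int main()
    {
        const uint32_t sig[3] = {0x1, 0x2, 0};
        assert(print_signal(sig, 3));   // well-formed signal is printed
        assert(!print_signal(sig, 2));  // too short: rejected safely
        return 0;
    }
    ```
    
    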

  • Removed an issue relating to invalid UTF8 values. (Bug #27618273, Bug #37709687)

  • Addressed an issue relating to an invalid identifier. (Bug #22958632, Bug #37709664)