MySQL NDB Cluster 8.4 Release Notes
MySQL NDB Cluster 8.4.5 is a new LTS release of NDB 8.4, based on MySQL Server 8.4 and including features in version 8.4 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 8.4. NDB Cluster 8.4 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of major changes made in NDB Cluster 8.4, see What is New in MySQL NDB Cluster 8.4.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 8.4 through MySQL 8.4.5 (see Changes in MySQL 8.4.5 (2025-04-15, LTS Release)).
The ability to build the source without NDB using the internal script storage/ndb/compile-cluster was adversely affected by work done in NDB 8.0.31 making the ndbcluster plugin part of its default build. (Bug #117215, Bug #37484376)
Added the Ndb_schema_participant_count status variable. This variable provides the count of MySQL servers which are currently participating in NDB Cluster schema change distribution. (Bug #37529202)
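For illustration, the new variable can be read like any other status variable. The following Java sketch is a minimal example assuming a locally running SQL node; the JDBC URL, user, and password are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SchemaParticipantCount {
        public static void main(String[] args) throws Exception {
            // Placeholder connection settings; adjust for your own SQL node.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://127.0.0.1:3306/", "user", "password");
                 Statement stmt = conn.createStatement();
                 // Read the status variable added by this change.
                 ResultSet rs = stmt.executeQuery(
                     "SHOW GLOBAL STATUS LIKE 'Ndb_schema_participant_count'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }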
NDB Replication: Timestamps written to the binary log included fractional parts (microseconds) although the query being logged did not use any high-precision functionality. This problem was caused by not resetting the state indicating that fractional parts had been used.
We fix this by always resetting the indicator for the use of microseconds in a given query once the query has run. This avoids the possibility of a later query writing a timestamp which includes microseconds to the binary log when the query as executed did not use microseconds. (Bug #37112446)
ndbinfo Information Database: Certain queries against ndbinfo tables were not handled correctly. (Bug #37372650)
NDB Client Programs: With a data node in an unstarted state, such as immediately after executing node_id RESTART -n in the ndb_mgm client, issuing ALL REPORT BACKUPSTATUS in the client subsequently led to an unplanned shutdown of the cluster. (Bug #37535513)
MySQL NDB ClusterJ: The ClusterJ log file only reported the configured, requested node ID for a cluster connection (which was often zero). With this fix, after a connection has been established, ClusterJ reports the actual assigned node ID in the log. (Bug #37556172)
MySQL NDB ClusterJ: A potential circular reference from NdbRecordSmartValueHandlerImpl that could cause delays in garbage collection has been removed. (Bug #37361267)
MySQL NDB ClusterJ: Setting the connection property com.mysql.clusterj.byte.buffer.pool.sizes to "512, 51200" caused a ClusterJ application to fail with a fatal exception thrown by java.nio.ByteBuffer. (Bug #37188154)
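For reference, the property is supplied in the same way as any other ClusterJ connection property. The following Java sketch is a minimal example; the connect string localhost:1186 and the database name test are assumptions, not part of the fix:

    import java.util.Properties;

    import com.mysql.clusterj.ClusterJHelper;
    import com.mysql.clusterj.Session;
    import com.mysql.clusterj.SessionFactory;

    public class ByteBufferPoolSizes {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder connect string and database name.
            props.setProperty("com.mysql.clusterj.connectstring", "localhost:1186");
            props.setProperty("com.mysql.clusterj.database", "test");
            // Values such as these previously caused the fatal
            // java.nio.ByteBuffer exception described above.
            props.setProperty("com.mysql.clusterj.byte.buffer.pool.sizes", "512, 51200");

            SessionFactory factory = ClusterJHelper.getSessionFactory(props);
            Session session = factory.getSession();
            // Normal ClusterJ operations would follow here.
            session.close();
            factory.close();
        }
    }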
MySQL NDB ClusterJ: When using a debug build of ClusterJ to run any tests in the testsuite, it exited with the error "1 thread(s) did not exit." (Bug #36383937)
MySQL NDB ClusterJ: Running a ClusterJ application with Java 10 resulted in java.lang.ClassNotFoundException, because the class java.internal.ref.Cleaner is not available in Java 10. With this fix, the java.lang.ref.Cleaner class is used instead for resource cleanup. (Bug #29931569)
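The following Java sketch shows the general java.lang.ref.Cleaner usage pattern employed for such cleanup; the NativeBuffer class and its cleanup action are purely illustrative and are not taken from the ClusterJ source:

    import java.lang.ref.Cleaner;

    public class CleanerPattern {
        private static final Cleaner CLEANER = Cleaner.create();

        static class NativeBuffer implements AutoCloseable {
            // The cleanup state must not reference the NativeBuffer itself,
            // or the object could never become phantom reachable.
            private static class ReleaseAction implements Runnable {
                public void run() {
                    System.out.println("releasing native resources");
                }
            }

            private final Cleaner.Cleanable cleanable;

            NativeBuffer() {
                cleanable = CLEANER.register(this, new ReleaseAction());
            }

            public void close() {
                // Explicit cleanup; the Cleaner runs the action at GC time
                // if close() is never called.
                cleanable.clean();
            }
        }

        public static void main(String[] args) {
            try (NativeBuffer buffer = new NativeBuffer()) {
                // Use the buffer here; cleanup runs when the block exits.
            }
        }
    }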
API node failure is detected by one or more data nodes; data nodes detecting API node failure inform all other data nodes of the failure, eventually triggering API node failure handling on each data node.
Each data node handles API node failure independently; once all internal blocks have completed cleanup, the API node failure is considered handled, and, after a timed delay, the QMGR block allows the failed API node's node ID to be used for new connections.
QMGR monitors API node failure handling, periodically (approximately every 30 seconds) generating warning logs for API node failure handling that has not completed. These logs indicate which blocks have yet to complete failure handling.
This enhancement improves logging of stalls in API node failure handling, particularly with regard to the DBTC block, which must roll back or commit and complete the API node's transactions, and release the associated COMMIT and ACK markers. In addition, the time to wait for API node failure handling is now configurable using the ApiFailureHandlingTimeout data node configuration parameter; after this number of seconds, handling is escalated to a data node restart.
(Bug #37524092)
References: See also: Bug #37469364.
When a data node hangs during shutdown, reasons for this may include I/O problems on the node, in which case the thread shutting down hangs while operating on error and trace files, or an error in the shutdown logic, where the thread shutting down raises a Unix signal and causes a deadlock. When such issues occur, users might observe watchdog warnings in the logs referring to the last signal processed; this could be misleading in cases where there was actually a (different) preceding cause which had triggered the shutdown.
To help pinpoint the origin of such problems if they occur, we have made the following improvements:
Added a new watchdog state, shutting down. This is set early enough in the error handling process that all watchdog logging of shutdown stalls correctly attributes the delay to the shutdown rather than to a problem in execution.
We have also modified the watchdog mechanism to be aware of shutdown states, and use a more direct path—which is less likely to stall—to force the data node process to stop when needed.
(Bug #37518267)
When restoring NDB tables from backup, it is now possible for mysqld to open such tables even if their indexes are not yet available. (Bug #37516858)
Signal dump code run when handling an unplanned node shutdown sometimes exited unexpectedly when speculatively reading section IDs which might not be present. (Bug #37512526)
The LQH_TRANSCONF signal printer did not validate its input length correctly, which could lead the node process to exit. (Bug #37512477)
When restoring stored grants (using ndb_restore --include-stored-grants) from an NDB backup following an initial restart of the data nodes, the ndb_sql_metadata table was neither created nor restored. (Bug #37492169)
Nothing was written to the cluster log to indicate when PURGE BINARY LOGS had finished waiting for the purge to complete. (Bug #37489870)
WITH_NDB_TLS_SEARCH_PATH was not set when compiling NDB Cluster using WITHOUT_SERVER. (Bug #37398657)
Now, when ndb_metadata_check is enabled, we synchronize both schema and tables in the same interval. (Bug #37382551)
This fix addresses the following two issues:
When a resend could not be started due to a gap created by an out-of-buffer error in the event stream, forwarding event data for the bucket was not initiated. We fix this by ensuring that the takeover process has been initiated before exiting the resend code.
An out-of-buffer error which occurred during an ongoing resend was not handled. In this case, we now interrupt the resend when such an error is raised.
(Bug #37349305)
On certain rare occasions, when concurrent calls were made to release() and get(), instances of ndb_schema_object were freed twice. (Bug #35793818)
If an out-of-buffer-release (OOBR) process took an excessive amount of time, the reset was performed prematurely, before all buffers were released, thus interfering with concurrent seizing of new pages, beginning new out_of_buffer handling, or starting a resend.
We solve this issue by ensuring that resumption of event buffering takes place only after the OOBR process has completed for all buckets. (Bug #20648778)