MySQL NDB Cluster 7.5 Release Notes
MySQL NDB Cluster 7.5.31 is a new release of MySQL NDB Cluster 7.5, based on MySQL Server 5.7 and including features in version 7.5 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 7.5. MySQL NDB Cluster 7.5 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in MySQL NDB Cluster 7.5, see What is New in NDB Cluster 7.5.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.43 (see Changes in MySQL 5.7.43 (2023-07-18, General Availability)).
Backups using NOWAIT did not start following a restart of the data node. (Bug #35389533)
When deferred triggers remained pending for an uncommitted transaction, a subsequent transaction could waste resources performing unnecessary checks for deferred triggers; this could lead to an unplanned shutdown of the data node if the latter transaction had no committable operations.
This was because, in some cases, the control state was not reinitialized for management objects used by DBTC. We fix this by making sure that state initialization is performed for any such object before it is used. (Bug #35256375)
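The fix pattern can be illustrated with a minimal sketch. Everything here is hypothetical (the class names, fields, and pool API are invented for illustration, not NDB internals): a pooled object's control state is reset every time it is taken from the pool, rather than only when it is first constructed.

```python
class TriggerRecord:
    """Hypothetical pooled management object; a stand-in for the
    DBTC-internal records described above, not actual NDB code."""
    def __init__(self):
        self.reset()

    def reset(self):
        # Reinitialize all control state so that nothing leaks from a
        # previous transaction's use of this object.
        self.deferred_pending = False
        self.operations = []


class RecordPool:
    """Free-list pool applying the fix: every object is reset at
    seize time, not only at construction time."""
    def __init__(self, size):
        self._free = [TriggerRecord() for _ in range(size)]

    def seize(self):
        rec = self._free.pop()
        rec.reset()  # the fix: initialize state before each use
        return rec

    def release(self, rec):
        self._free.append(rec)
```

Without the `reset()` call in `seize()`, a record released with `deferred_pending` still set would carry that stale flag into the next transaction that seized it, which mirrors the wasted-checks scenario described above.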
A pushdown join between queries featuring very large and possibly overlapping IN() and NOT IN() lists caused SQL nodes to exit unexpectedly. One or more of the IN() (or NOT IN()) operators required in excess of 2500 arguments to trigger this issue. (Bug #35185670, Bug #35293781)
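For a sense of scale, the following sketch generates a join statement of the general shape that could trigger the issue, with IN() and NOT IN() lists well past the 2500-argument threshold. The table and column names (t1, t2, pk, fk, col) are made up for illustration; the bug itself concerned joins pushed down to the NDB engine.

```python
# Build IN()/NOT IN() lists exceeding the ~2500-argument threshold
# mentioned above; schema names here are illustrative only.
args = ", ".join(str(i) for i in range(3000))
query = (
    "SELECT t1.pk FROM t1 JOIN t2 ON t1.pk = t2.fk "
    f"WHERE t1.col IN ({args}) AND t2.col NOT IN ({args})"
)
n_args = len(args.split(", "))  # number of arguments per list
```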
The buffers allocated for a key of size MAX_KEY_SIZE were of insufficient size. (Bug #35155005)
Some calls made by the ndbcluster handler to push_warning_printf() used severity level ERROR, which caused an assertion in debug builds. This fix changes all such calls to use severity WARNING instead. (Bug #35092279)
When a connection between a data node and an API or management node was established but communication was available only from the other node to the data node, the data node considered the other node “live”, since it was receiving heartbeats, but the other node did not monitor heartbeats and so reported no problems with the connection. This meant that the data node assumed wrongly that the other node was (fully) connected.
We solve this issue by having the API or management node side begin to monitor data node liveness even before receiving the first REGCONF signal from it; the other node sends a REGREQ signal every 100 milliseconds, and only if it receives no REGCONF from the data node in response within 60 seconds is the node reported as disconnected. (Bug #35031303)
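The timing of the new scheme can be sketched as a simple simulation (this models only the behavior described above; it is not the NDB signal-handling code, and the function name and return shape are invented): the API/management side sends a REGREQ every 100 ms and reports the data node disconnected if no REGCONF has arrived within 60 seconds.

```python
REGREQ_INTERVAL_MS = 100      # REGREQ is re-sent every 100 ms
REGCONF_TIMEOUT_MS = 60_000   # disconnect reported after 60 s silence

def simulate_monitor(regconf_at_ms=None, horizon_ms=120_000):
    """Simulate the API/management-node side of the new scheme.

    regconf_at_ms: millisecond time at which a REGCONF arrives from
    the data node, or None if it never does.
    Returns (regreqs_sent, disconnected)."""
    last_confirm = None
    sent = 0
    for now in range(0, horizon_ms, REGREQ_INTERVAL_MS):
        sent += 1  # one REGREQ per 100 ms interval
        if regconf_at_ms is not None and regconf_at_ms <= now:
            last_confirm = regconf_at_ms  # REGCONF received
        if last_confirm is None and now >= REGCONF_TIMEOUT_MS:
            return sent, True  # reported as disconnected
    return sent, False  # data node confirmed within the timeout
```

With a REGCONF arriving early, the node is never reported disconnected; with none at all, the disconnect is reported once the 60-second window elapses.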
The log contained a high volume of messages having the form DICT: index index number stats auto-update requested, logged by the DBDICT block each time it received a report from DBTUX requesting an update. These requests often occur in quick succession during writes to the table, with the additional possibility in this case that duplicate requests for updates to the same index were being logged.
Now we log such messages just before DBDICT actually performs the calculation. This removes duplicate messages and spaces out messages related to different indexes. Additional debug log messages are also introduced by this fix, to improve visibility of the decisions taken and calculations performed. (Bug #34760437)
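The effect of the logging change can be modeled with a toy sketch (this is a simplified illustration, not the DBDICT implementation; the class and message text are assumptions): incoming requests are coalesced per index, and one message is emitted only when the auto-update calculation for that index actually starts.

```python
class StatsUpdateLogger:
    """Toy model of the revised logging behavior described above:
    requests coalesce per index, and a single message is produced
    just before the calculation for that index runs."""
    def __init__(self):
        self.pending = set()  # indexes with a requested auto-update
        self.log = []

    def request(self, index_id):
        # Duplicate requests for the same index collapse into one
        # pending entry instead of producing one log line each.
        self.pending.add(index_id)

    def run_update(self, index_id):
        # Log just before the calculation is actually performed.
        if index_id in self.pending:
            self.pending.discard(index_id)
            self.log.append(
                f"DICT: index {index_id} stats auto-update starting")
```

A burst of repeated requests for one index thus yields a single log line, issued at calculation time rather than at request time.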