MySQL NDB Cluster 8.4 Release Notes
This release is no longer available for download. It was removed due to a critical issue that could stop the server from restarting following the creation of a very large number of tables (8001 or more). Please upgrade to MySQL Cluster 8.4.2 instead.
Important Change: Now, when the removal of a data node file or directory fails with a "file does not exist" (ENOENT) error, this is treated as a successful removal.
ndbinfo Information Database: Added a type column to the transporter_details table in the ndbinfo information database. This column shows the type of connection used by the transporter, which is either TCP or SHM.
NDB Client Programs: Added the --CA-days option to ndb_sign_keys to make it possible to specify a certificate's lifetime. (Bug #36549567)
NDB Client Programs: When started, ndbd now produces a warning in the data node log like this one:
2024-05-28 13:32:16 [ndbd] WARNING -- Running ndbd with a single thread of signal execution. For multi-threaded signal execution run the ndbmtd binary.
(Bug #36326896)
NDB Cluster APIs: It was possible to employ the following NDB API methods without them being used as const, although this alternative usage had long been deprecated (and was not actually documented):
Now, each of these methods must always be invoked as const. (Bug #36165876)
NDB Client Programs: ndb_redo_log_reader could not read data from encrypted files. (Bug #36313482)
NDB Client Programs: ndb_redo_log_reader exited with Record type = 0 not implemented when reaching an unused page, all zero bytes, or a page which was only partially used (typically a page consisting of the page header only). (Bug #36313259)
NDB Client Programs: ndb_restore did not restore a foreign key whose columns differed in order from those of the parent key.
Our thanks to Axel Svensson for the contribution. (Bug #114147, Bug #36345882)
The destructor for NDB_SCHEMA_OBJECT makes several assertions about the state of the schema object; this state is protected by a mutex, but the destructor did not acquire the mutex before testing the state. We fix this by acquiring the mutex within the destructor. (Bug #36568964)
NDB now writes a message to the MySQL server log before and after logging an incident in the binary log. (Bug #36548269)
Removed a memory leak in /util/NodeCertificate.cpp. (Bug #36537931)
Removed a memory leak from src/ndbapi/NdbDictionaryImpl.cpp. (Bug #36532102)
The internal method CertLifetime::set_cert_lifetime(X509 *cert) should set the not-before and not-after times in the certificate to the same as those stored in the CertLifetime object, but instead it set the not-before time to the current time, and the not-after time to be of the same duration as the object. (Bug #36514834)
Removed a possible use-after-free warning in ConfigObject::copy_current(). (Bug #36497108)
When a thread acquired and released the global schema lock required for schema changes and reads, the associated log message did not identify who performed the operation.
To fix this issue, we now do the following:
Prepend the message in the log with the identification of the NDB Cluster component or user session responsible.
Provide information about the related Performance Schema thread so that it can be traced.
(Bug #36446730)
References: See also: Bug #36446604.
Metadata changes were not logged with their associated thread IDs. (Bug #36446604)
References: See also: Bug #36446730.
When building NDB using lld, the build terminated prematurely with the error message ld.lld: error: version script assignment of 'local' to symbol 'my_init' failed: symbol not defined while attempting to link libndbclient.so. (Bug #36431274)
TLS did not fail cleanly on systems which used OpenSSL 1.0, which is unsupported. Now in such cases, users get a clear error message advising that an upgrade to OpenSSL 1.1 or later is required to use TLS with NDB Cluster. (Bug #36426461)
NDB Cluster's pushdown join functionality expects pushed conditions to filter exactly, so that no rows that do not match the condition are returned, and all rows that do match the condition are returned. When the condition contained a BINARY value compared to a BINARY column, this was not always true; if the value was shorter than the column size, it could compare as equal to a column value despite having different lengths, if the condition was pushed down to NDB.
Now, when deciding whether a condition is pushable, we also make sure that the BINARY value length exactly matches the BINARY column's size. In addition, when binary string values were used in conditions with BINARY or VARBINARY columns, the actual length of a given string value was not used but rather an overestimate of its length. This is now changed; this should allow more conditions comparing short string values with VARBINARY columns to be pushed down than before this fix was made. (Bug #36390313, Bug #36513270)
References: See also: Bug #36399759, Bug #36400256. This issue is a regression of: Bug #36364619.
Setting AutomaticThreadConfig and NumCPUs when running single-threaded data nodes (ndbd) sometimes led to unrecoverable errors. Now ndbd ignores settings for these parameters, which are intended to apply only to multi-threaded data nodes (ndbmtd). (Bug #36388981)
Improved the error message returned when trying to add a primary key to an NDBCLUSTER table using ALGORITHM=INPLACE. (Bug #36382071)
References: See also: Bug #30766579.
The handling of the LQH operation pool which occurs as part of TC takeover skipped the last element in either of the underlying physical pools (static or dynamic). If this element was in use, holding an operation record for a transaction belonging to a transaction coordinator on the failed node, it was not returned, resulting in an incomplete takeover which sometimes left operations behind. Such operations interfered with subsequent transactions and the copying process (CopyFrag) used by the failed node to recover.
To fix this problem, we avoid skipping the final record while iterating through the LQH operation records during TC takeover. (Bug #36363119)
When distribution awareness was not in use, the cluster tended to choose the same data node as the transaction coordinator repeatedly. (Bug #35840020, Bug #36554026)
In certain cases, management nodes were unable to allocate node IDs to restarted data and SQL nodes. (Bug #35658072)
Setting ODirect in the cluster's configuration caused excess logging when verifying that ODirect was actually settable for all paths. (Bug #34754817)
In some cases, when trying to perform an online add index operation on an NDB table with no explicit primary key (see Limitations of NDB online operations), the resulting error message did not make the nature of the problem clear. (Bug #30766579)
References: See also: Bug #36382071.