MySQL NDB Cluster 7.5 Release Notes
MySQL NDB Cluster 7.5.5 is a new release of MySQL NDB Cluster 7.5, based on MySQL Server 5.7 and including features in version 7.5 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 7.5. MySQL NDB Cluster 7.5 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in MySQL NDB Cluster 7.5, see What is New in NDB Cluster 7.5.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.17 (see Changes in MySQL 5.7.17 (2016-12-12, General Availability)).
Packaging: NDB Cluster Auto-Installer RPM packages for SLES 12 failed due to a dependency on python2-crypto instead of python-pycrypto. (Bug #25399608)
Packaging: The RPM installer for the MySQL NDB Cluster Auto-Installer package had a dependency on python2-crypt instead of python-crypt. (Bug #24924607)
Microsoft Windows: Installation failed when the Auto-Installer (ndb_setup.py) was run on a Windows host that used Swedish as the system language. This was due to system messages being issued using the cp1252 character set; when these messages contained characters that did not map directly to 7-bit ASCII (such as the ä character in Tjänsten ... startar), conversion to UTF-8, as expected by the Auto-Installer web client, failed.

This fix has been tested only with Swedish as the system language, but should work for Windows systems set to other European languages that use the cp1252 character set. (Bug #83870, Bug #25111830)
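The mechanism of the failure can be illustrated with a few lines of Python; this is a hedged sketch of the encoding mismatch only, not the Auto-Installer's actual code, and the message text is abbreviated:

# Sketch of the bug's mechanism (not Auto-Installer code): a Windows
# service message encoded in cp1252 cannot be decoded as UTF-8 once it
# contains a character outside 7-bit ASCII, such as ä (0xE4 in cp1252).
message = "Tjänsten startar"        # abbreviated system message
raw = message.encode("cp1252")      # bytes as the Windows host emits them

print(raw.decode("cp1252"))         # correct: decode with the source charset
try:
    raw.decode("utf-8")             # what the conversion effectively assumed
except UnicodeDecodeError as exc:
    print("conversion to UTF-8 failed:", exc)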
ndbinfo Information Database: A number of inconsistencies and other issues had arisen regarding ndbinfo tables due to manual copying of the table and view definitions in the sources. Now the SQL statements for these are generated, for consistency. (Bug #23305078)
References: See also: Bug #25047951.
ndb_restore did not correctly restore tables having more than 341 columns. This was because the buffer used to hold table metadata read from .ctl files was too small, so that only part of the table descriptor could be read from it in such cases. This issue is fixed by increasing the size of the buffer used by ndb_restore for file reads. (Bug #25182956)
References: See also: Bug #25302901.
No traces were written when ndbmtd received a signal in any thread other than the main thread, because all signals were blocked for the other threads. This issue is fixed by removing SIGBUS, SIGFPE, SIGILL, and SIGSEGV from the list of signals being blocked. (Bug #25103068)
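The idea behind the fix can be sketched in Python using POSIX signal masks (Unix only; ndbmtd itself is implemented in C++, so this is illustrative rather than the actual code):

import signal

# Block routine signals in a worker thread, but leave the fatal ones
# (SIGBUS, SIGFPE, SIGILL, SIGSEGV) deliverable so that their handlers
# can still run and produce trace files.
blocked = {signal.SIGINT, signal.SIGTERM, signal.SIGHUP,
           signal.SIGBUS, signal.SIGFPE, signal.SIGILL, signal.SIGSEGV}
fatal = {signal.SIGBUS, signal.SIGFPE, signal.SIGILL, signal.SIGSEGV}

# Everything in 'blocked' except the fatal set stays blocked in this thread.
signal.pthread_sigmask(signal.SIG_SETMASK, blocked - fatal)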
CMake now avoids configuring the -fexpensive-optimizations option for GCC versions for which the option triggers faulty shift-or optimizations. (Bug #24947597, Bug #83517)
The rand() function was used to produce a unique table ID and table version needed to identify a schema operation distributed among multiple SQL nodes, relying on the assumption that rand() would never produce the same numbers on two different instances of mysqld. It was later determined that this is not the case, and that in fact it is very likely for the same random numbers to be produced on all SQL nodes.

This fix removes the use of rand() for producing a unique table ID or version, and instead uses a sequence in combination with the node ID of the coordinator. This guarantees uniqueness until the counter for the sequence wraps, which should be sufficient for this purpose.

The effects of this duplication could be observed as timeouts in the log (for example, NDB create db: waiting max 119 sec for distributing) when restarting multiple mysqld processes simultaneously or nearly so, or when issuing the same CREATE DATABASE or DROP DATABASE statement on multiple SQL nodes. (Bug #24926009)
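The replacement scheme is easy to picture; the following is an illustrative Python sketch with invented names and bit widths (the actual mysqld implementation differs in detail):

import itertools

# Combine the coordinator's unique node ID with a monotonically
# increasing per-coordinator sequence. Two coordinators can never
# collide, and one coordinator repeats only when its counter wraps.
class SchemaOpId:
    def __init__(self, node_id):
        self.node_id = node_id             # unique per SQL node
        self.counter = itertools.count(1)  # monotonically increasing sequence

    def next_id(self):
        seq = next(self.counter) & 0xFFFFFF  # assumed 24-bit counter width
        return (self.node_id << 24) | seq    # node ID in the high bits

coordinator = SchemaOpId(node_id=5)
print(hex(coordinator.next_id()), hex(coordinator.next_id()))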
The ndb_show_tables utility did not display type information for hash maps or fully replicated triggers. (Bug #24383742)
Long message buffer exhaustion when firing immediate triggers could result in row ID leaks; this could later result in persistent RowId already allocated errors (NDB Error 899). (Bug #23723110)
References: See also: Bug #19506859, Bug #13927679.
The NDB Cluster Auto-Installer did not show the user how to force an exit from the application (CTRL+C). (Bug #84235, Bug #25268310)
The NDB Cluster Auto-Installer failed to exit when it was unable to start the associated service. (Bug #84234, Bug #25268278)
The NDB Cluster Auto-Installer failed when the port specified by the --port option (or the default port 8081) was already in use. Now in such cases, when the required port is not available, the next 20 ports are tested in sequence, with the first one available being used; only if all of these are in use does the Auto-Installer fail. (Bug #84233, Bug #25268221)
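The documented retry behavior amounts to probing a small range of ports. A minimal sketch in Python (not the Auto-Installer's actual code):

import socket

def find_free_port(preferred=8081, attempts=21):
    # Try the requested port, then the next 20 in sequence; fail only
    # if every port in the range is already in use.
    for port in range(preferred, preferred + attempts):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("", port))    # succeeds only if the port is free
            return port
        except OSError:
            continue                  # port busy; try the next one
    raise RuntimeError("no free port in range %d-%d"
                       % (preferred, preferred + attempts - 1))

print(find_free_port())               # 8081, or the first free successor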
Multiple instances of the NDB Cluster Auto-Installer were not detected. This could lead to inadvertent multiple deployments on the same hosts, stray processes, and similar issues. This issue is fixed by having the Auto-Installer create a PID file (mcc.pid), which is removed upon a successful exit. (Bug #84232, Bug #25268121)
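PID files are a standard guard against concurrent instances. A minimal sketch of how such a check can work (only the file name mcc.pid comes from the fix; the logic shown is assumed, not the Auto-Installer's code):

import atexit, os, sys

PIDFILE = "mcc.pid"

def acquire_pid_file():
    if os.path.exists(PIDFILE):
        # Another instance appears to be running (or exited uncleanly).
        sys.exit("another instance may be running; remove %s if stale" % PIDFILE)
    with open(PIDFILE, "w") as f:
        f.write(str(os.getpid()))
    # Remove the file on a clean exit, as the release note describes.
    atexit.register(os.remove, PIDFILE)

acquire_pid_file()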
When a parent NDB table in a foreign key relationship was updated, the update cascaded to a child table as expected, but the change was not cascaded to a child table of this child table (that is, to a grandchild of the original parent). This can be illustrated using the tables generated by the following CREATE TABLE statements:

CREATE TABLE parent (
    id INT PRIMARY KEY AUTO_INCREMENT,
    col1 INT UNIQUE,
    col2 INT
) ENGINE NDB;

CREATE TABLE child (
    ref1 INT UNIQUE,
    FOREIGN KEY fk1 (ref1) REFERENCES parent(col1) ON UPDATE CASCADE
) ENGINE NDB;

CREATE TABLE grandchild (
    ref2 INT,
    FOREIGN KEY fk2 (ref2) REFERENCES child(ref1) ON UPDATE CASCADE
) ENGINE NDB;

Table child is a child of table parent; table grandchild is a child of table child, and a grandchild of parent. In this scenario, a change to column col1 of parent cascaded to ref1 in table child, but it was not always propagated in turn to ref2 in table grandchild. (Bug #83743, Bug #25063506)
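For example, with the fix applied, an update such as the following (the values are illustrative) now propagates through both levels, so that the new col1 value appears in child.ref1 and, cascading onward, in grandchild.ref2:

UPDATE parent SET col1 = 10 WHERE id = 1;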
The NDB binlog injector thread used an injector mutex to perform two important tasks:

Protect against client threads creating or dropping events whenever the injector thread waited in pollEvents().

Maintain access to data shared by the injector thread with client threads.

The first of these could hold the mutex for long periods of time (on the order of 10 ms), while locking it again extremely quickly. This could starve threads waiting for the lock for data access for unnecessarily long periods.

To address these problems, the injector mutex has been refactored into two mutexes, one to handle each of the two tasks just listed.

It was also found that initialization of the binlog injector thread unnecessarily held the injector mutex in several places where only local thread data was being initialized, and sent signals with condition information when nothing being waited for had been updated. These unneeded actions have been removed, along with numerous previous temporary fixes for related injector mutex starvation issues. (Bug #83676, Bug #83127, Bug #25042101, Bug #24715897)
References: See also: Bug #82680, Bug #20957068, Bug #24496910.
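The general pattern of the refactoring, splitting one lock that serves two roles into two independent locks, can be sketched in Python (invented names; the server itself is C++):

import threading

# One lock serializes long event-definition waits; a second protects
# shared data. Long holds of the first no longer starve users of the second.
event_lifecycle_lock = threading.Lock()  # held across long pollEvents-style waits
shared_data_lock = threading.Lock()      # held only for brief data access

shared_state = {"epoch": 0}

def injector_iteration():
    with event_lifecycle_lock:           # long hold, now harmless to data access
        pass                             # ... wait for events here ...
    with shared_data_lock:               # short, independent critical section
        shared_state["epoch"] += 1

def client_read():
    with shared_data_lock:               # no longer blocked by the long wait
        return shared_state["epoch"]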
When a data node running with StopOnError set to 0 underwent an unplanned shutdown, the automatic restart performed the same type of start as the previous one. In cases where the data node had previously been started with the --initial option, this meant that an initial start was performed, which with multiple data node failures could lead to loss of data. This issue also occurred whenever a data node shutdown led to generation of a core dump. A check is now performed to catch all such cases, and a normal restart is performed instead.

In addition, in cases where a failed data node was unable, prior to shutting down, to send start phase information to the angel process, the shutdown was always treated as a startup failure, also leading to an initial restart. This issue is fixed by adding a check to execute startup failure handling only if a valid start phase was received from the client. (Bug #83510, Bug #24945638)
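The corrected decision can be summarized in a few lines (a hypothetical sketch, not the angel process's actual C++ logic):

def choose_restart_type(previous_start_was_initial, unplanned_shutdown):
    # After an unplanned shutdown (including one producing a core dump),
    # always perform a normal restart: repeating an --initial start would
    # erase the node's data, which is unsafe when several nodes fail at once.
    if unplanned_shutdown:
        return "normal"
    return "initial" if previous_start_was_initial else "normal"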
When ndbmtd was built on Solaris/SPARC with version 5.3 of the GNU tools, data nodes using the resulting binary failed during startup. (Bug #83500, Bug #24941880)
References: See also: Bug #83517, Bug #24947597.
MySQL NDB Cluster failed to compile using GCC 6. (Bug #83308, Bug #24822203)
When a data node was restarted, the node was first stopped, and then, after a fixed wait, the management server assumed that the node had entered the NOT_STARTED state, at which point the node was sent a start signal. If the node was not ready because it had not yet completed stopping (and was therefore not actually in NOT_STARTED), the signal was silently ignored.

To fix this issue, the management server now checks to see whether the data node has in fact reached the NOT_STARTED state before sending the start signal. The wait for the node to reach this state is split into two separate checks:

Wait for data nodes to start shutting down (maximum 12 seconds).

Wait for data nodes to complete shutting down and reach the NOT_STARTED state (maximum 120 seconds).
If either of these cases times out, the restart is considered failed, and an appropriate error is returned. (Bug #49464, Bug #11757421)
References: See also: Bug #28728485.
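The two-phase wait can be pictured as two bounded polling loops. The following Python sketch uses invented helpers (node.state(), node.send_start_signal()) standing in for the management server's internals:

import time

def wait_until(predicate, timeout_s, poll_s=0.5):
    # Poll a predicate until it is true or the deadline passes.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return predicate()

def restart_node(node):
    # Phase 1: wait for the node to begin shutting down (max 12 seconds).
    if not wait_until(lambda: node.state() in ("STOPPING", "NOT_STARTED"), 12):
        raise RuntimeError("restart failed: node never began shutting down")
    # Phase 2: wait for shutdown to complete and the NOT_STARTED state
    # to be reached (max 120 seconds) before sending the start signal.
    if not wait_until(lambda: node.state() == "NOT_STARTED", 120):
        raise RuntimeError("restart failed: node did not reach NOT_STARTED")
    node.send_start_signal()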