MySQL NDB Cluster 7.5 Release Notes
MySQL NDB Cluster 7.5.9 is a new release of MySQL NDB Cluster 7.5, based on MySQL Server 5.7 and including features in version 7.5 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 7.5. MySQL NDB Cluster 7.5 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in MySQL NDB Cluster 7.5, see What is New in NDB Cluster 7.5.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.21 (see Changes in MySQL 5.7.21 (2018-01-15, General Availability)).
NDB Replication: On an SQL node not being used for a replication channel, with sql_log_bin=0, it was possible, after an NDB table had been created and populated, for a table map event to be written to the binary log for that table with no corresponding row events. This led to problems when the log was later used by a slave cluster replicating from the mysqld where the table was created.
This is fixed by maintaining a cumulative any_value bitmap for global checkpoint event operations, representing the bits that are set consistently for all rows of a given table in a given epoch, and by adding a check that determines whether all operations (rows) for that table are marked as NOLOGGING, in which case the table is not added to the Table_map held by the binlog injector.
As part of this fix, the NDB API adds a new getNextEventOpInEpoch3() method, which makes it possible to retrieve the cumulative any_value bitmap and thus provides information about any AnyValue received. (A usage sketch follows this entry.)
(Bug #26333981)
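The following is a minimal sketch, not taken from the release notes, of how an NDB API application might use the new getNextEventOpInEpoch3() method while consuming the events of an epoch. The exact signature shown here (an iterator, an event-type bitmask, and the cumulative any_value) is assumed from the NDB API documentation for this release and should be checked against the NdbApi.hpp header actually installed.

  #include <NdbApi.hpp>
  #include <cstdio>

  // Sketch only: walk the distinct event operations of the current epoch and
  // read the cumulative any_value bitmap exposed by getNextEventOpInEpoch3().
  void dumpEpochEventOps(Ndb *ndb)
  {
    Uint32 iter = 0;                 // opaque iterator; must start at 0
    Uint32 event_types = 0;          // bitmask of event types seen by this operation
    Uint32 cumulative_any_value = 0; // bits set consistently for all rows of the table in this epoch

    const NdbEventOperation *op;
    while ((op = ndb->getNextEventOpInEpoch3(&iter, &event_types,
                                             &cumulative_any_value)) != NULL)
    {
      // A consumer such as the binlog injector can inspect cumulative_any_value
      // here, for example to detect that every row operation on the table in
      // this epoch was flagged NOLOGGING and so skip the table map event.
      printf("event op %p: event_types=0x%x cumulative_any_value=0x%x\n",
             (void *) op, event_types, cumulative_any_value);
    }
  }

In a real application this iteration is used together with pollEvents2() and nextEvent2(); see the NDB API documentation for the complete event-handling pattern.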
A query against the INFORMATION_SCHEMA.FILES table returned no results when it included an ORDER BY clause.
(Bug #26877788)
During a restart, DBLQH loads redo log part metadata for each redo log part it manages from one or more redo log files. Since each file has a limited capacity for metadata, the number of files that must be consulted depends on the size of the redo log part. These files are opened, read, and closed sequentially, but the closing of one file occurs concurrently with the opening of the next.
In cases where closing a file was slow, it was possible for more than 4 files per redo log part to be open concurrently; since these files were opened using the OM_WRITE_BUFFER option, more than 4 chunks of write buffer were allocated per part in such cases. The write buffer pool is not unlimited; if all redo log parts were in a similar state, the pool was exhausted, causing the data node to shut down. (A sketch of this arithmetic follows the entry.)
This issue is resolved by avoiding the use of OM_WRITE_BUFFER during metadata reload, so that any transient opening of more than 4 redo log files per log file part no longer leads to failure of the data node.
(Bug #25965370)
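As a purely illustrative aid, the following sketch works through that arithmetic with hypothetical values; the part count, per-part budget, and chunk size below are examples, not NDB's actual sizing.

  #include <cstdio>

  int main()
  {
    const int  redo_log_parts        = 4;          // for example, NoOfFragmentLogParts = 4
    const int  buffered_files_budget = 4;          // write-buffer chunks budgeted per part
    const long chunk_bytes           = 256 * 1024; // hypothetical chunk size

    // Pool sized for the expected steady state: at most 4 buffered files per part.
    const long pool_bytes = redo_log_parts * buffered_files_budget * chunk_bytes;

    // If closing one file lags behind opening the next, a part can briefly hold
    // 5 open files, each claiming a chunk because it was opened with OM_WRITE_BUFFER.
    const int  transient_open_files = 5;
    const long demand_bytes = redo_log_parts * transient_open_files * chunk_bytes;

    printf("pool=%ld demand=%ld exhausted=%s\n",
           pool_bytes, demand_bytes, demand_bytes > pool_bytes ? "yes" : "no");
    return 0;
  }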
Following TRUNCATE TABLE on an NDB table, its AUTO_INCREMENT ID was not reset on an SQL node not performing binary logging.
(Bug #14845851)
In certain circumstances where multiple Ndb objects were being used in parallel from an API node, the block number extracted from a block reference in DBLQH was the same as that of a SUMA block even though the request was coming from an API node. Due to this ambiguity, DBLQH mistook the request from the API node for a request from a SUMA block and failed.
This is fixed by checking node IDs before checking block numbers. (An illustrative sketch follows this entry.)
(Bug #88441, Bug #27130570)
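The following hedged sketch illustrates the idea behind the fix; the helper names, the bit layout of the block reference, and all numeric values are assumptions made for illustration and do not reproduce the NDB kernel code.

  #include <cstdint>
  #include <cstdio>

  // Assumed encoding for this sketch: block number in the upper 16 bits and
  // node ID in the lower 16 bits of a block reference.
  static uint32_t refToBlock(uint32_t ref) { return ref >> 16; }
  static uint32_t refToNode(uint32_t ref)  { return ref & 0xFFFF; }

  // Hypothetical predicate: a real implementation would consult the node type
  // (data node vs. API node) known from the cluster configuration.
  static bool isDataNode(uint32_t nodeId) { return nodeId >= 1 && nodeId <= 48; }

  const uint32_t SUMA_BLOCK_NO = 0x0107;  // example value only

  void classifySender(uint32_t senderRef)
  {
    // Check the node ID first: only a reference from a data node can actually
    // denote the SUMA block, even if an API client happens to present the
    // same block number.
    if (isDataNode(refToNode(senderRef)) && refToBlock(senderRef) == SUMA_BLOCK_NO)
      printf("request from SUMA on node %u\n", refToNode(senderRef));
    else
      printf("request from an API client or other block\n");
  }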
When the duplicate weedout algorithm was used for evaluating a semijoin, the result had missing rows. (Bug #88117, Bug #26984919)
References: See also: Bug #87992, Bug #26926666.
A table used in a loose scan could be used as a child in a pushed join query, leading to possibly incorrect results. (Bug #87992, Bug #26926666)
When representing a materialized semijoin in the query plan, the MySQL Optimizer inserted extra QEP_TAB and JOIN_TAB objects to represent access to the materialized subquery result. The join pushdown analyzer did not properly set up its internal data structures for these, leaving them uninitialized instead. This meant that later usage of any item objects referencing the materialized semijoin accessed an uninitialized tableno column when accessing a 64-bit tableno bitmask, possibly referring to a point beyond its end, leading to an unplanned shutdown of the SQL node. (An illustrative sketch follows this entry.)
(Bug #87971, Bug #26919289)
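For illustration only, the following sketch uses hypothetical stand-ins (not the server's QEP_TAB or JOIN_TAB definitions) to show how an uninitialized table number combined with a 64-bit table bitmask can end up addressing a bit beyond the end of the map.

  #include <cstdint>
  #include <cstdio>

  struct PushedTableRef
  {
    uint32_t tableno;   // the bug: never initialized for the extra semijoin objects
  };

  int main()
  {
    uint64_t table_map = 0;      // one bit per table; at most 64 tables representable
    PushedTableRef ref;
    ref.tableno = 0xDEADBEEF;    // stand-in for the indeterminate value left behind

    // With a garbage table number of 64 or more, 1 << tableno would address a
    // bit past the end of the 64-bit map (and the shift itself is undefined
    // behavior), which is the failure mode described in the entry above.
    if (ref.tableno < 64)
      table_map |= (uint64_t(1) << ref.tableno);
    else
      printf("tableno %u is out of range for a 64-bit table map\n", ref.tableno);

    return 0;
  }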
When a data node was configured for locking threads to CPUs, it failed during startup with Failed to lock tid.
This was a side effect of a fix for a previous issue, which disabled CPU locking based on the version of the available glibc. The specific glibc issue being guarded against is encountered only in response to an internal NDB API call (Ndb_UnlockCPU()) not used by data nodes (and which can be accessed only through internal API calls). The current fix enables CPU locking for data nodes and disables it only for the relevant API calls when an affected glibc version is used. (A general illustration of CPU locking follows this entry.)
(Bug #87683, Bug #26758939)
References: This issue is a regression of: Bug #86892, Bug #26378589.
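As a general illustration of the mechanism involved (ordinary Linux thread-affinity code, not NDB's internal implementation; the error text below merely echoes the Failed to lock tid message), a thread can be locked to a CPU as follows.

  #include <sched.h>
  #include <cerrno>
  #include <cstdio>
  #include <cstring>

  // Pin the calling thread to a single CPU; returns false and reports the
  // reason if the affinity request is refused.
  bool lockCurrentThreadToCpu(int cpu)
  {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);

    // A pid of 0 applies the affinity mask to the calling thread.
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
    {
      fprintf(stderr, "Failed to lock thread to CPU %d: %s\n", cpu, strerror(errno));
      return false;
    }
    return true;
  }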
The NDBFS block's OM_SYNC flag is intended to make sure that all FSWRITEREQ signals used for a given file are synchronized, but was ignored by platforms that do not support O_SYNC, meaning that this feature did not behave properly on those platforms. Now the synchronization flag is used on those platforms that do not support O_SYNC as well. (A general illustration of synchronized file writes follows this entry.)
(Bug #76975, Bug #21049554)
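As a general illustration of the underlying technique (plain POSIX file I/O, not the NDBFS implementation), synchronized writes can be obtained by opening the file with O_SYNC where the platform offers it, or by issuing an explicit fsync() after each write where it does not.

  #include <fcntl.h>
  #include <unistd.h>
  #include <cstdio>

  // Write a buffer and make sure it reaches stable storage, either through the
  // O_SYNC open flag or through an explicit fsync() afterwards.
  bool syncedWrite(int fd, const void *buf, size_t len, bool opened_with_o_sync)
  {
    if (write(fd, buf, len) != (ssize_t) len)
      return false;
    if (!opened_with_o_sync && fsync(fd) != 0)
      return false;
    return true;
  }

  int main()
  {
  #ifdef O_SYNC
    int  fd     = open("example.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);
    bool o_sync = true;
  #else
    int  fd     = open("example.dat", O_WRONLY | O_CREAT, 0644);
    bool o_sync = false;
  #endif
    if (fd < 0)
      return 1;

    const char msg[] = "durable write\n";
    bool ok = syncedWrite(fd, msg, sizeof(msg) - 1, o_sync);
    close(fd);
    return ok ? 0 : 1;
  }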