MySQL NDB Cluster 7.6 Release Notes
MySQL NDB Cluster 7.6.7 is a new release of NDB 7.6, based on MySQL Server 5.7 and including features in version 7.6 of the NDB storage engine, as well as fixing recently discovered bugs in previous NDB Cluster releases.
Obtaining NDB Cluster 7.6. NDB Cluster 7.6 source code and binaries can be obtained from https://dev.mysql.com/downloads/cluster/.
For an overview of changes made in NDB Cluster 7.6, see What is New in NDB Cluster 7.6.
This release also incorporates all bug fixes and changes made in previous NDB Cluster releases, as well as all bug fixes and feature changes which were added in mainline MySQL 5.7 through MySQL 5.7.23 (see Changes in MySQL 5.7.23 (2018-07-27, General Availability)).
As part of ongoing work to improve handling of local checkpoints and minimize the occurrence of issues relating to Error 410 (REDO log overloaded) during LCPs, NDB now implements adaptive LCP control, which moderates LCP scan priority and LCP writes according to redo log usage.
The following changes have been made with regard to NDB configuration parameters:

The default value of RecoveryWork is increased from 50 to 60 (60% of storage reserved for LCP files).

The new InsertRecoveryWork parameter controls the percentage of RecoveryWork that is reserved for insert operations. The default value is 40 (40% of RecoveryWork); the minimum and maximum are 0 and 70, respectively. Increasing this value allows for more writes during an LCP, while limiting the total size of the LCP. Decreasing InsertRecoveryWork limits the number of writes used during an LCP, but results in more space being used.
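As a sketch only, these parameters could be set in the [ndbd default] section of the cluster configuration file (config.ini); the values shown are the defaults described above and are given purely for illustration:

```
[ndbd default]
# RecoveryWork: percentage of storage reserved for LCP files.
# The default is raised from 50 to 60 in this release.
RecoveryWork=60

# InsertRecoveryWork: percentage of RecoveryWork reserved for
# insert operations. Default 40; permitted range 0-70.
# Higher values allow more writes during an LCP while limiting
# total LCP size; lower values limit LCP writes but use more space.
InsertRecoveryWork=40
```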
Implementing LCP control provides several benefits to NDB deployments. Clusters should now survive heavy loads using default configurations much better than previously, and it should now be possible to run them reliably on systems where the available disk space is approximately 2.1 times the amount of memory allocated to the cluster (that is, the amount of DataMemory) or more. It is important to bear in mind that this figure does not account for disk space used by tables on disk.
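To make the 2.1 multiplier concrete, the following configuration fragment sketches a hypothetical sizing calculation; the DataMemory value is illustrative only and not taken from the release notes:

```
[ndbd default]
# Example only: with 10 GB of DataMemory per data node, the
# guideline above suggests provisioning at least
#   10 GB x 2.1 = 21 GB
# of disk space per data node for checkpoint and redo log files.
# Space consumed by disk data tables must be added on top of this.
DataMemory=10G
```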
During load testing into a single data node with decreasing redo log sizes, it was possible to successfully load a very large quantity of data into NDB with 16GB reserved for the redo log while using no more than 50% of the redo log at any point in time.
See What is New in NDB Cluster 7.6, as well as the descriptions of the parameters mentioned previously, for more information. (Bug #90709, Bug #27942974, Bug #27942583, WL #9638)
References: See also: Bug #27926532, Bug #27169282.
ndbinfo Information Database:
Following a restart, it was possible for (sometimes incomplete) fallback data to be used in populating the ndbinfo.processes table, which could lead to rows in this table with empty process_name values. Such fallback data is no longer used for this purpose.
(Bug #27985339)
NDB Client Programs:
The executable file host_info is no longer used by ndb_setup.py. This file, along with its parent directory share/mcc/host_info, has been removed from the NDB Cluster distribution. In addition, installer code relating to an unused dojo.zip file was removed.
(Bug #90743, Bug #27966467, Bug #27967561)
References: See also: Bug #27621546.
MySQL NDB ClusterJ: ClusterJ could not be built from source using JDK 9. (Bug #27977985)
An NDB restore operation failed under the following conditions:

A data node was restarted

The local checkpoint for the fragment being restored used two .ctl files

The first of these .ctl files was the file in use

The LCP in question consisted of more than 2002 parts

This happened because an array used in decompression of the .ctl file contained only 2002 elements, which led to memory being overwritten, since this data can contain up to 2048 parts. This issue is fixed by increasing the size of the array to accommodate 2048 elements.
(Bug #28303209)
Local checkpoints did not always handle DROP TABLE operations correctly.
(Bug #27926532)
References: This issue is a regression of: Bug #26908347, Bug #26968613.
During the execution of CREATE TABLE ... IF NOT EXISTS, the internal open_table() function calls ha_ndbcluster::get_default_num_partitions() implicitly whenever open_table() finds that the requested table already exists. In certain cases, get_default_num_partitions() was called without the associated thd_ndb object having been initialized, leading to failure of the statement with MySQL error 157 Could not connect to storage engine. Now get_default_num_partitions() always checks for the existence of this thd_ndb object, and initializes it if necessary.