MySQL NDB Cluster 8.0 Release Notes
Important Change; NDB Replication:
The replica_allow_batching system variable affects how efficiently the replica applies epoch transactions. When this is set to OFF (formerly the default), every discrete replication event in the binary log is applied and executed separately, which generally leads to poor performance.
Beginning with this release, the default value for replica_allow_batching is changed from OFF to ON.
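Should the previous behavior be needed, the variable can still be set explicitly on the replica; a minimal sketch using standard SQL statements is shown here:
-- Check the current setting on the replica
SHOW VARIABLES LIKE 'replica_allow_batching';
-- Restore the previous behavior if required
SET GLOBAL replica_allow_batching = OFF;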
NDB Cluster APIs:
Removed a number of potential memory leaks by using std::unique_ptr for managing any Event returned by Dictionary::getEvent().
As part of this fix, we add a releaseEvent() method to Dictionary to clean up events created with getEvent() after they are no longer needed.
(Bug #33855045)
NDB Cluster APIs:
The Node.js package included with NDB Cluster has been updated to version 16.5.0.
(Bug #33770627)
Empty lines in CSV files are now accepted as valid input by ndb_import. (Previously, empty lines in such files were always rejected.) Now, if an empty value can be used as the value for a single imported column, ndb_import uses it in the same manner as LOAD DATA.
(Bug #34119833)
NDB stores blob column values differently from other types; by default, only the first 256 bytes of the value are stored in the table (“inline”), with any remainder kept in a separate blob parts table. This is true for columns of MySQL type BLOB, MEDIUMBLOB, LONGBLOB, TEXT, MEDIUMTEXT, and LONGTEXT. (TINYBLOB and TINYTEXT are exceptions, since they are always inline-only.) NDB handles JSON column values in a similar fashion, the only difference being that, for a JSON column, the first 4000 bytes of the value are stored inline.
Previously, it was possible to control the inline size for blob columns of NDB tables only by using the NDB API (the Column::setInlineSize() method). This can now be done in the mysql client (or another application supplying an SQL interface) using a column comment consisting of an NDB_COLUMN string containing a BLOB_INLINE_SIZE specification, as part of a CREATE TABLE statement like this one:
CREATE TABLE t (
a BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
b BLOB COMMENT 'NDB_COLUMN=BLOB_INLINE_SIZE=3000'
) ENGINE NDBCLUSTER;
In table t created by the statement just shown, column b is a BLOB column whose first 3000 bytes are stored in t itself, rather than just the first 256 bytes. This means that, if no value stored in b exceeds 3000 bytes in length, no extra work is required to read or write any excess data from the NDB blob parts table when storing or retrieving the column value. This can improve performance significantly when performing many operations on blob columns.
You can see the effects of this option by querying the ndbinfo.blobs table, or by examining the output of ndb_desc.
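For example, a query such as the following shows the inline size in effect for the blob columns of table t; the inline_size and part_size column names are assumed here from the ndbinfo.blobs table definition:
SELECT table_name, column_name, inline_size, part_size
FROM ndbinfo.blobs
WHERE table_name = 't';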
The maximum supported value for BLOB_INLINE_SIZE is 29980. Setting it to any value less than 1 causes the default inline size to be used for the column.
You can also alter a column as part of a copying ALTER TABLE; ALGORITHM=INPLACE is not supported for such operations.
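A minimal sketch of such a copying ALTER TABLE, reusing table t from the earlier example with an arbitrary new inline size, might look like this:
ALTER TABLE t
  ALGORITHM=COPY,
  MODIFY COLUMN b BLOB COMMENT 'NDB_COLUMN=BLOB_INLINE_SIZE=5000';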
BLOB_INLINE_SIZE can be used alone, or together with MAX_BLOB_PART_SIZE in the same NDB_COLUMN string. Unlike the case with MAX_BLOB_PART_SIZE, setting BLOB_INLINE_SIZE is supported for JSON columns of NDB tables.
For more information, see NDB_COLUMN Options, as well as String Type Storage Requirements. (Bug #33755627, WL #15044)
A new --missing-ai-column option is added to ndb_import. This enables ndb_import to accept a CSV file from which the data for an AUTO_INCREMENT column is missing and to supply these values itself, much as LOAD DATA does. This can be done with one or more tables for which the CSV representation contains no values for such a column.
This option works only when the CSV file contains no nonempty values for the AUTO_INCREMENT column to be imported.
(Bug #102730, Bug #32553029)
This release adds Performance Schema instrumentation for transaction batch memory used by NDBCLUSTER, making it possible to monitor memory used by transactions. For more information, see Transaction Memory Usage.
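As a rough illustration, memory instrumented in this way should be visible through the standard Performance Schema memory summary tables; the event name pattern used below is an assumption and may not match the instrument names actually registered:
SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED
FROM performance_schema.memory_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'memory/ndbcluster/%';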
(WL #15073)
Important Change:
When using the ThreadConfig multithreaded data node parameter to specify the threads to be created on the data nodes, the receive thread (recv) was in some cases placed in the same worker thread as block threads such as DBLQH(0) and DBTC(0). This represented a regression from NDB 8.0.22 and earlier, where the receive thread was colocated only with THRMAN and TRPMAN, as expected in such cases.
Now, when setting the value of ThreadConfig, you must include main, rep, recv, and ldm explicitly; to avoid using one or more of the main, rep, or ldm thread types, you must set count=0 explicitly for each applicable thread type.
In addition, a minimum value of 1 is now enforced for the recv count; setting the replication thread (rep) count to 1 also requires setting count=1 for the main thread.
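As an illustration only, a ThreadConfig value naming all four required thread types explicitly might look like the following in config.ini; the counts shown are arbitrary and should be chosen to suit the host:
[ndbd default]
ThreadConfig=main={count=1},rep={count=1},recv={count=1},ldm={count=4}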
These changes can have serious implications for upgrades from
previous NDB Cluster releases. For more information, see
Upgrading and Downgrading NDB Cluster, as well as
the description of the
ThreadConfig
parameter, in the MySQL Manual.
(Bug #33869715)
References: See also: Bug #34038016, Bug #34025532.
macOS:
ndb_import could not be compiled on macOS/ARM because the ndbgeneral library was not explicitly included in LINK_LIBRARIES.
(Bug #33931512)
NDB Disk Data:
The LGMAN kernel block did not initialize its local encrypted filesystem state, and did not check EncryptedFileSystem for undo log files, so that their encryption status was never actually set.
This meant that, for release builds, it was possible for the undo log files to be encrypted on some systems, even though they should not have been; in debug builds, undo log files were always encrypted. This could lead to problems when using Disk Data tables and upgrading to or from NDB 8.0.29. (A workaround is to perform initial restarts of the data nodes when doing so.)
This issue could also cause unexpected CPU load for I/O threads when there were a great many Disk Data updates to write to the undo log, or at data node startup while reading the undo log.
The EncryptedFileSystem parameter, introduced in NDB 8.0.29, is considered experimental and is not supported in production.
(Bug #34185524)
NDB Cluster APIs:
The internal function NdbThread_SetThreadPrio() sets the thread priority (thread_prio) for a given thread type when applying the setting of the ThreadConfig configuration parameter. It was possible for this function in some cases to return an error when it had actually succeeded, which could have an unfavorable impact on the performance of some NDB API applications.
(Bug #34038630)
NDB Cluster APIs:
Some NdbInterpretedCode methods did not function correctly when a nonzero value was employed for the label argument.
(Bug #33888962)
MySQL NDB ClusterJ: ClusterJ support for systems with ARM-based Apple silicon is now enabled by default. (Bug #34148474)
Compilation of NDB Cluster on Debian 11 and Ubuntu 22.04 halted during the link time optimization phase due to source code warnings being treated as errors. (Bug #34252425)
NDB does not support in-place changes of default values for columns; such changes can be made only by using a copying ALTER TABLE.
Changing an existing default value in such cases was already detected, but the addition or removal of a default value was not.
We fix this issue by detecting when a default value is added or removed during ALTER TABLE, and refusing to perform the operation in place.
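For illustration, assuming a hypothetical NDB table t1 with an integer column c, adding a default value is accepted only as a copying operation, along these lines:
ALTER TABLE t1
  ALGORITHM=COPY,
  MODIFY COLUMN c INT DEFAULT 10;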
(Bug #34224193)
After creating a user on SQL node A and granting it the NDB_STORED_USER privilege, dropping this user from SQL node B led to inconsistent results. In some cases, the drop was not distributed, so that after the drop the user still existed on SQL node A.
The cause of this issue is that NDB maintains a cache of all local users with NDB_STORED_USER, but when a user was created on SQL node B, this cache was not updated. Later, when executing DROP USER, this led SQL node B to determine that the drop did not have to be distributed. We fix this by ensuring that this cache is updated whenever a new distributed user is created.
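A sketch of the scenario just described, using a hypothetical user name, follows; after the fix, the DROP USER issued on the second SQL node is distributed to all SQL nodes:
-- On SQL node A
CREATE USER 'ndb_app'@'localhost' IDENTIFIED BY 'password';
GRANT NDB_STORED_USER ON *.* TO 'ndb_app'@'localhost';
-- On SQL node B
DROP USER 'ndb_app'@'localhost';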
(Bug #34183149)
When the internal ndbd_exit() function was invoked on a data node, information and error messages sent to the event logger just prior to the ndbd_exit() call were not printed in the log as expected.
(Bug #34148712)
NDB Cluster did not compile correctly on Ubuntu 22.04 due to changes in OpenSSL 3.0. (Bug #34109171)
NDB Cluster would not compile correctly using GCC 8.4 due to a change in Bison fallthrough handling. (Bug #34098818)
Compiling the ndbcluster plugin or the libndbclient library required a number of files kept under directories specific to data nodes (src/kernel) and management servers (src/mgmsrv). These have now been moved to more suitable locations. Files moved that may be of interest are listed here:
ndbd_exit_codes.cpp is moved to storage/ndb/src/mgmapi
ConfigInfo.cpp is moved to storage/ndb/src/common/mgmcommon
mt_thr_config.cpp is moved to storage/ndb/src/common
NdbinfoTables.cpp is moved to storage/ndb/src/common/debugger
(Bug #34045289)
When an error occurred during the begin schema transaction phase, an attempt to update the index statistics automatically was made without releasing the transaction handle, leading to a leak. (Bug #34007422)
References: See also: Bug #34992370.
Path lengths were not always calculated correctly by the data nodes. (Bug #33993607)
When ndb_restore performed an NDB API operation with any concurrent NDB API events taking place, contention could occur in the event of limited resources in DBUTIL. This led to temporary errors in NDB. In such cases, ndb_restore now attempts to retry the NDB API operation which failed.
References: See also: Bug #33982499.
Removed a duplicate check of a table pointer found in the internal method Dbtc::execSCAN_TABREQ().
(Bug #33945967)
The internal function NdbReceiver::unpackRecAttr(), which unpacks attribute values from the buffer of a GSN_TRANSID_AI signal, did not check that attribute sizes fit within the buffer. This could corrupt the buffer, which in turn could lead to reading beyond the end of the buffer and copying beyond the bounds of destination buffers.
(Bug #33941167)
Improved formatting of log messages such that the format string verification employed by some compilers is no longer bypassed. (Bug #33930738)
Some NDB
internal signals were not always
checked properly.
(Bug #33896428)
Fixed a number of issues in the source that raised
-Wunused-parameter
warnings when compiling
NDB Cluster with GCC 11.2.
(Bug #33881953)
When an SQL node was not yet connected to NDBCLUSTER, an excessive number of warnings were written to the MySQL error log when the SQL node could not discover an NDB table.
(Bug #33875273)
The NDB API statistics variables Ndb_api_wait_nanos_count, Ndb_api_wait_nanos_count_replica, and Ndb_api_wait_nanos_count_session are used for determining CPU times and wait times for applications. These counters are intended to show the time spent waiting for responses from data nodes, but they were not entirely accurate because time spent waiting for key requests was not included.
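These counters are exposed as status variables, so their current values can be checked with a statement along the following lines:
SHOW GLOBAL STATUS LIKE 'Ndb_api_wait_nanos_count%';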
For more information, see NDB API Statistics Counters and Variables. (Bug #33840016)
References: See also: Bug #33850590.
It was possible in some cases for a duplicate engine-se_private_id entry to be installed in the MySQL data dictionary for an NDB table, even when the previous table definition should have been dropped.
When data nodes drop out of the cluster and need to rejoin, each SQL node starts to synchronize the schema definitions in its own data dictionary. The se_private_id for an NDB table installed in the data dictionary is the same as its NDB table ID. It is common for tables to be updated with different IDs, such as when executing an ALTER TABLE, DROP TABLE, or CREATE TABLE statement. The previous table definition, obtained by referencing the table in schema.table format, is usually sufficient for a drop, and thus for the new table to be installed with a new ID, since it is assumed that no other installed table definition uses that ID. An exception could occur during synchronization: if a data node shutdown allowed the previous definition of a table having an ID other than that of the one to be installed to remain, the old definition was not dropped.
To correct this issue, we now check whether the ID of the table to be installed in the data dictionary differs from that of the previous one, in which case we also check whether an old table definition already exists with that ID, and, if it does, we drop the old table before continuing. (Bug #33824058)
After receiving a COPY_FRAGREQ
signal,
DBLQH
sometimes places the
signal in a queue by copying the signal object into a stored
object. Problems could arise when this signal object was used to
send another signal before the incoming
COPY_FRAGREQ
was stored; this led to saving a
corrupt signal that, when sent, prevented a system restart from
completing. We fix this by using a static copy of the signal for
storage and retrieval, rather than the original signal object.
(Bug #33581144)
When the mysqld binary supplied with NDB
Cluster was run without NDB
support enabled,
the ndbinfo
and
ndb_transid_mysql_connection_map
plugins were
still enabled and, for example, still shown with status ACTIVE in the output of SHOW PLUGINS.
(Bug #33473346)
Attempting to seize a redo log page could in theory fail due to a wrong bound error. (Bug #32959887)
When a data node was started using the
--foreground
option, and with a
node ID not configured to connect from a valid host, the data
node underwent a forced shutdown instead of reporting an error.
(Bug #106962, Bug #34052740)
NDB
tables were skipped in the MySQL Server
upgrade phase and were instead migrated by the
ndbcluster
plugin at a later stage. As a
result, triggers associated with NDB
tables
were not created during upgrades from 5.7-based versions.
This occurred because it is not possible to create such triggers
when the NDB
tables are migrated by the
ndbcluster
plugin, since metadata about the
triggers is lost in the upgrade finalization phase of the MySQL
Server upgrade in which all .TRG
files are
deleted.
To fix this issue, we make the following changes:
Migration of NDB tables with triggers is no longer deferred during the Server upgrade phase.
NDB tables with triggers are no longer removed from the data dictionary during setup, even when initial system starts are detected.
(Bug #106883, Bug #34058984)
When initializing a file, NDBFS enabled autosync but never called do_sync_after_write() (then called sync_on_write()), so that the file was never synchronized to disk until it was saved. This meant that, for a system whose network disk was stalled for some time, the file could use up system memory with buffered file data.
We fix this by calling do_sync_after_write() each time NDBFS writes to a file.
As part of this work, we increase the autosync size from 1 MB to 16 MB when initializing files.
NDB supports O_SYNC on platforms that provide it, but does not implement OM_SYNC for opening files.
(Bug #106697, Bug #33946801, Bug #34131456)