Conversation

@vlad-lesin
Contributor

In one practical cloud MariaDB setup, a server node accesses its datadir
over the network, but also has fast local SSD storage for temporary data.
The content of such temporary storage is lost when the server container is
destroyed.

The commit uses this ephemeral fast local storage (SSD) as an extension of
the portion of the InnoDB buffer pool (DRAM) that caches persistent data
pages. This cache is separate from the persistent storage of the data files
and ib_logfile0 and is ignored during backup.

The following system variables were introduced:

innodb_extended_buffer_pool_size - the size of the external buffer pool
file; if it is 0, the external buffer pool is not used;

innodb_extended_buffer_pool_path - the path to the external buffer pool
file.

If innodb_extended_buffer_pool_size is not 0, the external buffer pool
file will be (re)created on startup.
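
For illustration, enabling the feature in a server configuration might look
like this (only the variable names come from this patch; the sizes and the
path are made-up example values):

[mariadbd]
# made-up example values; only the variable names come from this patch
innodb_buffer_pool_size=8G
innodb_extended_buffer_pool_size=64G
innodb_extended_buffer_pool_path=/mnt/local-ssd/ib_ext_buf_pool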

Only clean pages will be flushed to the external buffer pool file. There is
no need to flush dirty pages there, as such pages become clean after they
are flushed to their data files, and will then be evicted when they reach
the tail of the LRU list.

The general idea of this commit is to flush clean pages to the external
buffer pool file when they are evicted.

A page can be evicted either by a transaction thread or by the background
page cleaner thread. In some cases a transaction thread waits for the page
cleaner thread to finish its job. We cannot flush to the external buffer
pool file while transaction threads are waiting for eviction, as that would
hurt performance. That is why the only case in which we flush is when the
page cleaner thread evicts pages in the background and there are no
waiters. For this purpose the buf_pool_t::done_flush_list_waiters_count
variable was introduced; evicted clean pages are flushed only if the
variable is zero.
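
A minimal sketch of that gating condition
(buf_pool.done_flush_list_waiters_count comes from this patch;
ext_buf_pool_write() is a hypothetical helper standing in for the actual
write path):

static void maybe_write_to_ext_pool(buf_page_t *bpage) noexcept
{
  mysql_mutex_assert_owner(&buf_pool.mutex);
  /* Only the background page cleaner writes evicted clean pages, and only
  when no transaction thread is waiting for the eviction to finish. */
  if (!buf_pool.done_flush_list_waiters_count)
    ext_buf_pool_write(bpage);                  /* hypothetical helper */
}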

Clean pages are evicted in buf_flush_LRU_list_batch() to keep a certain
number of pages in the buffer pool's free list. That is why only every
second evicted page is flushed to the external buffer pool file; otherwise
there might not be enough pages in the free list for transaction threads to
allocate buffer pool pages without waiting for the page cleaner. This might
not be a good solution, but it is enough for prototyping.
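
A sketch of that alternation inside buf_flush_LRU_list_batch() (the loop is
simplified; page_is_clean_and_evictable(), ext_buf_pool_write() and
n_to_evict are hypothetical names standing in for the real checks and
counters):

bool write_to_ext= false;
for (buf_page_t *bpage= UT_LIST_GET_LAST(buf_pool.LRU);
     bpage && n_to_evict; )
{
  buf_page_t *prev= UT_LIST_GET_PREV(LRU, bpage);
  if (page_is_clean_and_evictable(bpage))       /* hypothetical check */
  {
    /* Flush only every second evicted page, so that the free list still
    grows fast enough for transaction threads. */
    if ((write_to_ext= !write_to_ext) &&
        !buf_pool.done_flush_list_waiters_count)
      ext_buf_pool_write(bpage);                /* hypothetical helper */
    buf_LRU_free_page(bpage, true);
    --n_to_evict;
  }
  bpage= prev;
}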

An external buffer pool page is introduced to record in the buffer pool
page hash that a certain page can be read from the external buffer pool
file. The first several members of such a page must be the same as the
members of an internal page. The frame of an external page must be equal to
a certain value so that an external page can be distinguished from an
internal one. External buffer pages are preallocated on startup in an array
of external pages. We could get rid of the frame in the external page and
instead check whether the page's address belongs to that array to
distinguish external from internal pages.
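
A sketch of how the two kinds of descriptors could be told apart (the
marker value, the ext_pages array and the field layout are illustrative,
not the actual definitions):

extern byte *const ext_page_frame_marker;  /* hypothetical reserved value */

struct buf_ext_page_t
{
  /* The first members mirror buf_page_t, so that page hash lookups can
  treat internal and external descriptors uniformly. */
  page_id_t id;
  byte *frame;                 /* set to ext_page_frame_marker */
  /* ... */
};

inline bool is_external(const buf_page_t *bpage) noexcept
{
  /* Current approach: a reserved frame value. */
  return bpage->frame == ext_page_frame_marker;
  /* Alternative mentioned above: an address range check against the
  preallocated array, e.g.
     bpage >= first_ext_page_address && bpage < last_ext_page_address */
}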

There are also free and LRU lists of external pages. When an internal page
is chosen to be flushed to the external buffer pool file, a new external
page is allocated either from the head of the external free list or from
the tail of the external LRU list. Both lists are protected by
buf_pool.mutex. This makes sense, because a page is removed from the
internal LRU list during eviction under buf_pool.mutex.
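
A sketch of that allocation (the list names ext_free and ext_LRU and the
hash removal helper are illustrative):

static buf_ext_page_t *ext_page_alloc() noexcept
{
  mysql_mutex_assert_owner(&buf_pool.mutex);
  if (buf_ext_page_t *ep= UT_LIST_GET_FIRST(buf_pool.ext_free))
  {
    UT_LIST_REMOVE(buf_pool.ext_free, ep);
    return ep;
  }
  /* No free descriptor left: reuse the least recently used external page
  and drop its stale page hash entry. */
  if (buf_ext_page_t *ep= UT_LIST_GET_LAST(buf_pool.ext_LRU))
  {
    UT_LIST_REMOVE(buf_pool.ext_LRU, ep);
    ext_page_hash_remove(ep);                   /* hypothetical helper */
    return ep;
  }
  return nullptr;
}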

Then the internal page is locked and the allocated external page is
attached to the I/O request to the external buffer pool file; when the
write request completes, the internal page is replaced with the external
one in the page hash, the external page is pushed to the head of the
external LRU list, and the internal page is unlocked. After the external
page has been removed from the external free list, it is not placed in the
external LRU list until the write completes, so the page cannot be used by
other threads until the write is completed.
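
A sketch of the write completion step (the page hash replace and unlock
helpers are illustrative; the lists and mutex are as described above):

static void ext_write_completed(buf_page_t *bpage, buf_ext_page_t *ep)
  noexcept
{
  mysql_mutex_lock(&buf_pool.mutex);
  /* Publish the external descriptor: from now on the page hash says the
  page can be read back from the external buffer pool file. */
  page_hash_replace(bpage, ep);                 /* hypothetical helper */
  UT_LIST_ADD_FIRST(buf_pool.ext_LRU, ep);      /* now visible for reuse */
  mysql_mutex_unlock(&buf_pool.mutex);
  unlock_internal_page(bpage);                  /* hypothetical helper */
}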

The page hash chain get-element function has an additional template
parameter, which tells the function whether external pages must be ignored.
We do not ignore external pages in the page hash in two cases: when a page
is initialized for a read, and when one is reinitialized for creating a new
page.
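
A sketch of that template parameter (the function and member names are
illustrative; only the idea of the extra boolean parameter comes from this
patch):

template<bool ignore_external= true>
buf_page_t *page_hash_get(const page_id_t id, hash_chain &chain) noexcept
{
  for (buf_page_t *bpage= chain.first; bpage; bpage= next_in_chain(bpage))
    if (bpage->id() == id)
      /* Most callers only care about pages resident in memory; the read
      and create paths instantiate this with ignore_external=false. */
      return ignore_external && is_external(bpage) ? nullptr : bpage;
  return nullptr;
}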

When an internal page is initialized for a read and an external page with
the same page id is found in the page hash, the internal page is locked,
the external page is replaced with the newly initialized internal page in
the page hash chain, and the external page is removed from the external LRU
list and attached to the I/O request to the external buffer pool file. When
the I/O request completes, the external page is returned to the external
free list and the internal page is unlocked. So during the read the
external page is absent from both the external LRU and free lists and
cannot be reused.
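
A sketch of the read side (the helper names are illustrative):

static void ext_read_start(buf_page_t *bpage, buf_ext_page_t *ep) noexcept
{
  mysql_mutex_assert_owner(&buf_pool.mutex);
  /* The newly initialized internal page takes over the hash position; the
  external descriptor is detached from the external LRU list so that it
  cannot be reused while the read is in progress. */
  page_hash_replace(ep, bpage);                 /* hypothetical helper */
  UT_LIST_REMOVE(buf_pool.ext_LRU, ep);
  ext_buf_pool_read(bpage, ep);                 /* async; on completion ep
                                                returns to the external free
                                                list and bpage is unlocked */
}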

When an internal page is initialized for creating a new page and an
external page with the same page id is found in the page hash, we simply
remove the external page from the page hash chain and the external LRU list
and push it to the head of the external free list, so the external page can
be reused for future flushing.

There is also some magic with watch sentinels. If a watch is going to be
set for a page id for which an external page exists in the page hash, we
replace the external page with the sentinel in the page hash and attach the
external page to the sentinel's frame. When such a sentinel with an
attached external page has to be removed, we replace it with the attached
external page in the page hash instead of just removing the sentinel. This
idea is not fully implemented, as the function that allocates external
pages does not take into account that the page hash can contain sentinels
with attached external pages. And, anyway, this code must be removed if we
cherry-pick the commit to an 11.* branch, as the change buffer has already
been removed in those versions.

Pages are flushed to and read from the external buffer pool file in the
same manner as they are flushed to their tablespaces, i.e. compressed and
encrypted pages stay compressed and encrypted in the external buffer pool
file.
  • The Jira issue number for this PR is: MDEV-______

Description

TODO: fill description here

Release Notes

TODO: What should the release notes say about this change?
Include any changed system variables, status variables or behaviour. Optionally list any https://mariadb.com/kb/ pages that need changing.

How can this PR be tested?

TODO: modify the automated test suite to verify that the PR causes MariaDB to behave as intended.
Consult the documentation on "Writing good test cases".

If the changes are not amenable to automated testing, please explain why not and carefully describe how to test manually.

Basing the PR against the correct MariaDB version

  • This is a new feature or a refactoring, and the PR is based against the main branch.
  • This is a bug fix, and the PR is based against the earliest maintained branch in which the bug can be reproduced.

PR quality check

  • I checked the CODING_STANDARDS.md file and my PR conforms to this where appropriate.
  • For any trivial modifications to the PR, I am ok with the reviewer making the changes themselves.

@vlad-lesin vlad-lesin requested a review from dr-m January 5, 2026 13:32
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@vlad-lesin vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch 2 times, most recently from ee7d993 to bc5b76d Compare January 11, 2026 20:19

@dr-m dr-m left a comment


Before I review this deeper, could you please fix the build and the test innodb.ext_buf_pool on Microsoft Windows?

}
else
{
free_page:

warning C4102: 'free_page': unreferenced label [C:\buildbot\workers\prod\amd64-windows-packages\build\storage\innobase\innobase.vcxproj]

This label needs to be enclosed in #ifndef DBUG_OFF.
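
A minimal sketch of the suggested fix, assuming the label is only targeted
by goto statements in debug-only code (so it should exist only when
DBUG_OFF is not defined; the surrounding code is elided):

}
else
{
#ifndef DBUG_OFF
free_page:
#endif
  /* ... existing code that the debug-only goto free_page targets ... */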


Originally, some implementations of the C preprocessor did not allow any white space before the # character. It is customary to start the preprocessor #if and #endif at the first column.

@vlad-lesin vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch 2 times, most recently from 96a3b95 to c13adf1 Compare January 14, 2026 09:20
@vlad-lesin vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch from c13adf1 to 99921cf Compare January 14, 2026 13:41
Fix some tests. Make the ext_buf_pool test more stable by avoiding race
conditions on the read/write counters.
@vlad-lesin vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch from 99921cf to 64eb035 Compare January 15, 2026 10:04
Fix Windows and liburing issues.
Comment on lines -35 to +37
`UNCOMPRESS_CURRENT` bigint(21) unsigned NOT NULL
`UNCOMPRESS_CURRENT` bigint(21) unsigned NOT NULL,
`NUMBER_PAGES_WRITTEN_TO_EXTERNAL_BUFFER_POOL` bigint(21) unsigned NOT NULL,
`NUMBER_PAGES_READ_FROM_EXTERNAL_BUFFER_POOL` bigint(21) unsigned NOT NULL

The test rocksdb.innodb_i_s_tables_disabled needs to be adjusted as well.

include/my_sys.h Outdated
#define MY_IGNORE_ENOENT 32U /* my_delete() ignores ENOENT (no such file) */
#define MY_ENCRYPT 64U /* Encrypt IO_CACHE temporary files */
#define MY_TEMPORARY 64U /* create_temp_file(): delete file at once */
#define MY_OPEN_FOR_ASYNC_IO 1024U /* my_open() open file for async io */

This new flag is not being set anywhere. It is being tested in mysys/my_winfile.c but never set.

Comment on lines +1 to +3
--source include/have_innodb.inc
--source include/have_debug.inc
--source include/have_debug_sync.inc

Could we have a non-debug version of this test as well?

@@ -0,0 +1 @@
--innodb-buffer-pool-size=21M --innodb-extended-buffer-pool-size=1M

Typically, the local SSD is larger than the RAM, and therefore it would be more interesting to test a scenario where the buffer pool is smaller than the extended buffer pool. I understand that our current CI setup is not suitable for any performance testing. But, currently we only cover this functionality in debug-instrumented executables.

Would it be possible to write some sort of a test, or extend an existing one (such as encryption.innochecksum) with a variant that would use the minimal innodb_buffer_pool_size and a larger innodb_extended_buffer_pool_size? Currently, that test is running with innodb_buffer_pool_size=64M.

Comment on lines +90 to +93
/* Disable external buffer pool flushing for the duration of double write
buffer creating, as double write pages will be removed from LRU */
++buf_pool.done_flush_list_waiters_count;
SCOPE_EXIT([]() { --buf_pool.done_flush_list_waiters_count; });

Instead of doing this, could we create the doublewrite buffer in a single atomic mini-transaction, like I did in 1d1699e so that the #4405 innodb_log_archive=ON recovery can work from the very beginning? I could create a separate pull request for that.


I filed #4554 for the refactoring, which I hope will remove the need for this work-around.

@vlad-lesin vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch from 30a83bf to db5856a Compare January 19, 2026 09:08
Use persistent named files for the external buffer pool instead of
temporary ones.
@vlad-lesin vlad-lesin force-pushed the 11.8-MDEV-31956-ext_buf_pool branch from db5856a to c78f5ac Compare January 19, 2026 09:11