S3 Test Run Report

Date: Jul 12, 2025 23:32
Duration: 2h 37m
Framework: TestFlows 2.0.250110.1002922

Artifacts

Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#0/143c05fcd433555c6563408d8c622fb757f91dfe/regression/aarch64/with_analyzer/zookeeper/without_thread_fuzzer/s3/gcs/

Attributes

project: Altinity/ClickHouse
project.id: 159717931
package: https://s3.amazonaws.com/altinity-build-artifacts/25.3/143c05fcd433555c6563408d8c622fb757f91dfe/package_aarch64/clickhouse-common-static_25.3.3.20186.altinityantalya_arm64.deb
version: 25.3.3.20186.altinityantalya
user.name: Enmk
repository: https://github.com/Altinity/clickhouse-regression
commit.hash: 88c93f843cd48cd9defc6cec6b98d6b98f94adde
job.name: S3 (gcs)
job.retry: 1
job.url: https://github.com/Altinity/ClickHouse/actions/runs/16242745619
arch: aarch64
local: True
clickhouse_version: None
clickhouse_path: https://s3.amazonaws.com/altinity-build-artifacts/25.3/143c05fcd433555c6563408d8c622fb757f91dfe/package_aarch64/clickhouse-common-static_25.3.3.20186.altinityantalya_arm64.deb
as_binary: False
base_os: None
keeper_path: None
zookeeper_version: None
use_keeper: False
stress: False
collect_service_logs: True
thread_fuzzer: False
with_analyzer: True
reuse_env: False
storages: ['gcs']
minio_uri: Secret(name='minio_uri')
minio_root_user: Secret(name='minio_root_user')
minio_root_password: Secret(name='minio_root_password')
aws_s3_bucket: Secret(name='aws_s3_bucket')
aws_s3_region: Secret(name='aws_s3_region')
aws_s3_key_id: Secret(name='aws_s3_key_id')
aws_s3_access_key: Secret(name='aws_s3_access_key')
gcs_uri: Secret(name='gcs_uri')
gcs_key_id: Secret(name='gcs_key_id')
gcs_key_secret: Secret(name='gcs_key_secret')
azure_account_name: Secret(name='azure_account_name')
azure_storage_key: Secret(name='azure_storage_key')
azure_container: Secret(name='azure_container')

Summary

89.8% OK
7.4% Known

Statistics

Test type    Units    Skip      OK    Fail   Error   XFail  XError
Modules          1               1
Features        15       2      13
Scenarios      198      12     144                      38       4
Checks          54              54
Examples        54       2      52
Steps        30336      30   30274      13      19

Known Fails

Test Name Result Message
/s3/gcs/part 1/invalid table function/invalid path XError 30s 50ms
https://github.com/ClickHouse/ClickHouse/issues/59084
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.038s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 799, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 771, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 714, in gcs_regression
    Feature(test=load("s3.tests.table_function_invalid", "gcs"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 411, in gcs
    outline(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 395, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 114, in invalid_path
    insert_to_s3_function_invalid(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 34, in insert_to_s3_function_invalid
    node.query(query, message=message, exitcode=exitcode, timeout=timeout)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1140, in query
    r = self.cluster.bash(self.name)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 30.000s for '(bash# )\|(\n)'
/s3/gcs/part 1/disk/cache XFail 2ms
Under development for 22.8 and newer.
None
/s3/gcs/part 1/disk/cache default XFail 1ms
Under development for 22.8 and newer.
None
/s3/gcs/part 1/disk/cache path XFail 1ms
Under development for 22.8 and newer.
None
/s3/gcs/part 1/disk/low cardinality offset XFail 52s 383ms
https://github.com/ClickHouse/ClickHouse/pull/44875
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 799, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 771, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 715, in gcs_regression
    Feature(test=load("s3.tests.disk", "gcs"))(uri=uri, bucket_prefix=bucket_prefix)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2560, in gcs
    disk_tests(uri=uri, bucket_prefix=bucket_prefix)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2538, in disk_tests
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2500, in low_cardinality_offset
    assert output == "23999\n", error()
           ^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == "23999\n", error()

Assertion values
  assert output == "23999\n", error()
         ^ is '23999'
  assert output == "23999\n", error()
                ^ is = False

  assert output == "23999\n", error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py', line 2500 in 'low_cardinality_offset'

2492\|                      "1",
2493\|                  ),
2494\|                  (
2495\|                      "merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem",
2496\|                      "1",
2497\|                  ),
2498\|              ],
2499\|          ).output
2500\|>         assert output == "23999\n", error()
2501\|  
2502\|  
2503\|  @TestFeature
/s3/gcs/part 1/disk/no restart XFail 1ms
https://github.com/ClickHouse/ClickHouse/issues/58924
None
/s3/gcs/part 1/invalid disk/cache path conflict XFail 2ms
Under development for 22.8 and newer.
None
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=wide XFail 4m 25s
Needs investigation, rows not appearing
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 799, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 771, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 721, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 149, in check_table_combination
    retry(assert_row_count, timeout=60, delay=5)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py", line 1742, in assert_row_count
    assert rows == actual_count, error()
           ^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert rows == actual_count, error()

Assertion values
  assert rows == actual_count, error()
         ^ is 1500
  assert rows == actual_count, error()
                 ^ is 0
  assert rows == actual_count, error()
              ^ is = False
  assert rows == actual_count, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py', line 1742 in 'assert_row_count'

1734\|  
1735\|  @TestStep(Then)
1736\|  def assert_row_count(self, node, table_name: str, rows: int = 1000000):
1737\|      """Assert that the number of rows in a table is as expected."""
1738\|      if node is None:
1739\|          node = current().context.node
1740\|  
1741\|      actual_count = get_row_count(node=node, table_name=table_name)
1742\|>     assert rows == actual_count, error()
1743\|  
1744\|  
1745\|  @TestStep(Then)
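
The failure above is detected through a polling assertion: retry(assert_row_count, timeout=60, delay=5) re-runs the row-count check every few seconds until it passes or the time budget is spent, and only then lets the AssertionError surface. Below is a rough standalone sketch of that retry pattern, for illustration only; the names are simplified stand-ins, not the suite's actual helpers from s3/tests/common.py.

import time


def retry(func, timeout=60, delay=5):
    """Wrap `func` so it is re-run until it succeeds or `timeout` seconds pass."""
    def wrapper(*args, **kwargs):
        deadline = time.monotonic() + timeout
        while True:
            try:
                return func(*args, **kwargs)
            except AssertionError:
                if time.monotonic() + delay > deadline:
                    raise  # out of time: surface the last assertion failure
                time.sleep(delay)
    return wrapper


def assert_row_count(get_count, rows):
    """Toy stand-in for the suite's assert_row_count step."""
    actual = get_count()
    assert rows == actual, f"expected {rows} rows, got {actual}"


if __name__ == "__main__":
    counts = iter([0, 0, 1500])  # rows only show up on the third poll
    retry(assert_row_count, timeout=60, delay=1)(lambda: next(counts), rows=1500)
    print("row count reached 1500")

In the failing run the replicated table kept reporting 0 rows instead of the expected 1500, so the budget ran out and the last assertion failure was recorded.
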
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 2s
Times out, needs investigation
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.093s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 799, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 771, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 721, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 117, in check_table_combination
    table = create_test_table(
            ^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 81, in create_test_table
    yield create_table(
          ^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/tables.py", line 482, in create_table
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1116, in query
    r = self.cluster.bash(None)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 300.000s for '(bash# )\|(\n)'
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 2s
Times out, needs investigation
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.088s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 799, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 771, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 721, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 117, in check_table_combination
    table = create_test_table(
            ^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 81, in create_test_table
    yield create_table(
          ^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/tables.py", line 482, in create_table
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1116, in query
    r = self.cluster.bash(None)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 300.000s for '(bash# )\|(\n)'
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=True,n_cols=10,n_tables=3,part_type=unspecified XError 5m 57s
Times out, needs investigation
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.095s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 799, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 771, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 721, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 117, in check_table_combination
    table = create_test_table(
            ^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 81, in create_test_table
    yield create_table(
          ^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/tables.py", line 482, in create_table
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1140, in query
    r = self.cluster.bash(self.name)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 300.000s for '(bash# )\|(\n)'
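
All of the XError entries above end in the same ExpectTimeoutError: the harness sends each query to the node over an interactive bash session and then waits for output matching '(bash# )\|(\n)', i.e. a new line of output or the bash prompt. When nothing matches before the deadline (30 seconds for the invalid-path check, 300 seconds for the combinatoric-table creations), the wait is abandoned and the scenario is crossed out as a known error. The sketch below is a minimal, self-contained illustration of that expect-with-timeout mechanism using a simulated output queue; the class and method names are hypothetical, not the actual testflows.uexpect API.

import queue
import re
import threading
import time


class ExpectTimeoutError(Exception):
    """Raised when the expected pattern does not show up before the deadline."""


class ExpectSession:
    def __init__(self):
        # In the real harness a reader thread feeds terminal output into a queue;
        # here the queue is filled manually to keep the example self-contained.
        self.chunks = queue.Queue()
        self.buffer = ""

    def feed_later(self, text, delay):
        # Simulate a process that produces output after `delay` seconds.
        threading.Timer(delay, self.chunks.put, args=(text,)).start()

    def expect(self, pattern, timeout):
        """Accumulate output until `pattern` matches or `timeout` seconds elapse."""
        deadline = time.monotonic() + timeout
        compiled = re.compile(pattern)
        while True:
            timeleft = deadline - time.monotonic()
            if timeleft <= 0:
                # Same failure mode as in the report: the prompt never appeared.
                raise ExpectTimeoutError(f"Timeout {timeout:.3f}s for {pattern!r}")
            try:
                # Short polls (like the 0.1s reads in the tracebacks) keep the loop responsive.
                chunk = self.chunks.get(timeout=min(timeleft, 0.1))
            except queue.Empty:
                continue
            self.buffer += chunk
            match = compiled.search(self.buffer)
            if match:
                return match


if __name__ == "__main__":
    fast = ExpectSession()
    fast.feed_later("bash# ", delay=0.2)
    print(fast.expect(r"(bash# )|(\n)", timeout=5).group(0))

    silent = ExpectSession()  # never produces output, so the wait times out
    try:
        silent.expect(r"(bash# )|(\n)", timeout=0.5)
    except ExpectTimeoutError as e:
        print(e)

Running it prints the matched prompt for the session that produces output and a timeout message for the silent one, mirroring the short polling reads and the overall deadline visible in the tracebacks above.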

Results

Test Name Result Duration
/s3 OK 2h 37m
/s3/gcs OK 2h 37m
/s3/gcs/part 1 OK 1h 11m
/s3/gcs/part 1/sanity OK 1m 7s
/s3/gcs/part 1/sanity/sanity OK 16s 940ms
/s3/gcs/part 1/table function OK 8m 10s
/s3/gcs/part 1/table function/auto OK 1m 34s
/s3/gcs/part 1/table function/compression OK 1m 34s
/s3/gcs/part 1/table function/credentials OK 1s 77ms
/s3/gcs/part 1/table function/credentials s3Cluster OK 9s 440ms
/s3/gcs/part 1/table function/data format OK 1m 54s
/s3/gcs/part 1/table function/measure file size Skip 1ms
/s3/gcs/part 1/table function/measure file size s3Cluster Skip 915us
/s3/gcs/part 1/table function/multipart OK 7s 882ms
/s3/gcs/part 1/table function/multiple columns OK 1s 343ms
/s3/gcs/part 1/table function/partition OK 1s 297ms
/s3/gcs/part 1/table function/partition s3Cluster OK 8s 622ms
/s3/gcs/part 1/table function/remote host filter OK 51s 84ms
/s3/gcs/part 1/table function/syntax OK 1s 155ms
/s3/gcs/part 1/table function/syntax s3Cluster OK 10s 174ms
/s3/gcs/part 1/table function/wildcard OK 1m 33s
/s3/gcs/part 1/invalid table function OK 35s 88ms
/s3/gcs/part 1/invalid table function/empty path OK 269ms
/s3/gcs/part 1/invalid table function/empty structure OK 270ms
/s3/gcs/part 1/invalid table function/invalid bucket OK 278ms
/s3/gcs/part 1/invalid table function/invalid compression OK 282ms
/s3/gcs/part 1/invalid table function/invalid credentials OK 511ms
/s3/gcs/part 1/invalid table function/invalid format OK 542ms
/s3/gcs/part 1/invalid table function/invalid path XError 30s 50ms
/s3/gcs/part 1/invalid table function/invalid region Skip 739us
/s3/gcs/part 1/invalid table function/invalid structure OK 1s 754ms
/s3/gcs/part 1/invalid table function/invalid wildcard OK 1s 106ms
/s3/gcs/part 1/disk OK 41m 12s
/s3/gcs/part 1/disk/access OK 53s 324ms
/s3/gcs/part 1/disk/access skip check OK 51s 597ms
/s3/gcs/part 1/disk/add storage OK 1m 46s
/s3/gcs/part 1/disk/alter move OK 1m 5s
/s3/gcs/part 1/disk/alter on cluster modify ttl OK 1m 56s
/s3/gcs/part 1/disk/cache XFail 2ms
/s3/gcs/part 1/disk/cache default XFail 1ms
/s3/gcs/part 1/disk/cache path XFail 1ms
/s3/gcs/part 1/disk/compact parts OK 52s 300ms
/s3/gcs/part 1/disk/config over restart OK 1m 20s
/s3/gcs/part 1/disk/default move factor OK 1m 28s
/s3/gcs/part 1/disk/delete OK 3m 27s
/s3/gcs/part 1/disk/download appropriate disk OK 1m 28s
/s3/gcs/part 1/disk/drop sync OK 1m 1s
/s3/gcs/part 1/disk/environment credentials Skip 1ms
/s3/gcs/part 1/disk/exports OK 59s 710ms
/s3/gcs/part 1/disk/generic url Skip 1ms
/s3/gcs/part 1/disk/imports OK 55s 864ms
/s3/gcs/part 1/disk/log OK 3m 17s
/s3/gcs/part 1/disk/low cardinality offset XFail 52s 383ms
/s3/gcs/part 1/disk/max single part upload size syntax OK 54s 247ms
/s3/gcs/part 1/disk/mergetree OK 4m 6s
/s3/gcs/part 1/disk/mergetree collapsing OK 1m 2s
/s3/gcs/part 1/disk/mergetree versionedcollapsing OK 1m 2s
/s3/gcs/part 1/disk/metadata OK 1m 0s
/s3/gcs/part 1/disk/min bytes for seek syntax OK 53s 988ms
/s3/gcs/part 1/disk/multiple storage OK 1m 3s
/s3/gcs/part 1/disk/multiple storage query OK 1m 4s
/s3/gcs/part 1/disk/no restart XFail 1ms
/s3/gcs/part 1/disk/perform ttl move on insert OK 1m 45s
/s3/gcs/part 1/disk/perform ttl move on insert default OK 1m 2s
/s3/gcs/part 1/disk/performance ttl move OK 1m 16s
/s3/gcs/part 1/disk/remote host filter OK 1m 44s
/s3/gcs/part 1/disk/specific url Skip 3ms
/s3/gcs/part 1/disk/syntax OK 1m 3s
/s3/gcs/part 1/disk/wide parts OK 54s 802ms
/s3/gcs/part 1/invalid disk OK 3m 49s
/s3/gcs/part 1/invalid disk/access default OK 9s 742ms
/s3/gcs/part 1/invalid disk/access failed OK 10s 761ms
/s3/gcs/part 1/invalid disk/access failed skip check OK 51s 123ms
/s3/gcs/part 1/invalid disk/cache path conflict XFail 2ms
/s3/gcs/part 1/invalid disk/empty endpoint OK 10s 801ms
/s3/gcs/part 1/invalid disk/invalid endpoint OK 2m 5s
/s3/gcs/part 1/invalid disk/invalid type OK 21s 504ms
/s3/gcs/part 1/alter OK 16m 45s
/s3/gcs/part 1/alter/normal OK 3m 59s
/s3/gcs/part 1/alter/normal/attach from OK 14s 324ms
/s3/gcs/part 1/alter/normal/columns OK 11s 742ms
/s3/gcs/part 1/alter/normal/detach OK 26s 476ms
/s3/gcs/part 1/alter/normal/drop OK 52s 353ms
/s3/gcs/part 1/alter/normal/fetch OK 28s 678ms
/s3/gcs/part 1/alter/normal/freeze OK 23s 102ms
/s3/gcs/part 1/alter/normal/index OK 11s 17ms
/s3/gcs/part 1/alter/normal/move to table OK 18s 6ms
/s3/gcs/part 1/alter/normal/order by OK 7s 478ms
/s3/gcs/part 1/alter/normal/projection OK 12s 586ms
/s3/gcs/part 1/alter/normal/replace OK 17s 776ms
/s3/gcs/part 1/alter/normal/sample by OK 7s 451ms
/s3/gcs/part 1/alter/normal/update delete OK 8s 890ms
/s3/gcs/part 1/alter/encrypted OK 5m 30s
/s3/gcs/part 1/alter/encrypted/attach from OK 16s 929ms
/s3/gcs/part 1/alter/encrypted/columns OK 18s 749ms
/s3/gcs/part 1/alter/encrypted/detach OK 35s 771ms
/s3/gcs/part 1/alter/encrypted/drop OK 1m 12s
/s3/gcs/part 1/alter/encrypted/fetch OK 39s 268ms
/s3/gcs/part 1/alter/encrypted/freeze OK 30s 711ms
/s3/gcs/part 1/alter/encrypted/index OK 16s 53ms
/s3/gcs/part 1/alter/encrypted/move to table OK 29s 364ms
/s3/gcs/part 1/alter/encrypted/order by OK 8s 280ms
/s3/gcs/part 1/alter/encrypted/projection OK 17s 690ms
/s3/gcs/part 1/alter/encrypted/replace OK 20s 968ms
/s3/gcs/part 1/alter/encrypted/sample by OK 8s 306ms
/s3/gcs/part 1/alter/encrypted/update delete OK 16s 21ms
/s3/gcs/part 1/alter/zero copy OK 2m 14s
/s3/gcs/part 1/alter/zero copy/attach from OK 13s 311ms
/s3/gcs/part 1/alter/zero copy/columns OK 19s 197ms
/s3/gcs/part 1/alter/zero copy/detach Skip 1ms
/s3/gcs/part 1/alter/zero copy/drop Skip 1ms
/s3/gcs/part 1/alter/zero copy/fetch Skip 1ms
/s3/gcs/part 1/alter/zero copy/freeze OK 23s 220ms
/s3/gcs/part 1/alter/zero copy/index OK 10s 846ms
/s3/gcs/part 1/alter/zero copy/move to table OK 18s 254ms
/s3/gcs/part 1/alter/zero copy/order by OK 7s 339ms
/s3/gcs/part 1/alter/zero copy/projection OK 10s 509ms
/s3/gcs/part 1/alter/zero copy/replace OK 15s 169ms
/s3/gcs/part 1/alter/zero copy/sample by OK 7s 486ms
/s3/gcs/part 1/alter/zero copy/update delete OK 9s 33ms
/s3/gcs/part 1/alter/zero copy encrypted OK 3m 13s
/s3/gcs/part 1/alter/zero copy encrypted/attach from OK 16s 546ms
/s3/gcs/part 1/alter/zero copy encrypted/columns OK 20s 322ms
/s3/gcs/part 1/alter/zero copy encrypted/detach Skip 1ms
/s3/gcs/part 1/alter/zero copy encrypted/drop Skip 1ms
/s3/gcs/part 1/alter/zero copy encrypted/fetch Skip 995us
/s3/gcs/part 1/alter/zero copy encrypted/freeze OK 31s 208ms
/s3/gcs/part 1/alter/zero copy encrypted/index OK 15s 923ms
/s3/gcs/part 1/alter/zero copy encrypted/move to table OK 30s 738ms
/s3/gcs/part 1/alter/zero copy encrypted/order by OK 8s 553ms
/s3/gcs/part 1/alter/zero copy encrypted/projection OK 17s 595ms
/s3/gcs/part 1/alter/zero copy encrypted/replace OK 28s 336ms
/s3/gcs/part 1/alter/zero copy encrypted/sample by OK 8s 225ms
/s3/gcs/part 1/alter/zero copy encrypted/update delete OK 15s 790ms
/s3/gcs/part 2 OK 1h 23m
/s3/gcs/part 2/combinatoric table OK 40m 9s
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact OK 43s 476ms
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=False,n_cols=500,n_tables=1,part_type=wide OK 5m 8s
/s3/gcs/part 2/combinatoric table/engine=ReplacingMergeTree,replicated=True,n_cols=500,n_tables=3,part_type=unspecified OK 39s 752ms
/s3/gcs/part 2/combinatoric table/engine=ReplacingMergeTree,replicated=False,n_cols=2000,n_tables=1,part_type=compact OK 7s 789ms
/s3/gcs/part 2/combinatoric table/engine=CollapsingMergeTree,replicated=True,n_cols=10,n_tables=1,part_type=wide OK 25s 352ms
/s3/gcs/part 2/combinatoric table/engine=CollapsingMergeTree,replicated=False,n_cols=2000,n_tables=3,part_type=unspecified OK 21s 198ms
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=wide XFail 4m 25s
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=False,n_cols=10,n_tables=3,part_type=compact OK 11s 222ms
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 2s
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=False,n_cols=500,n_tables=1,part_type=unspecified OK 7s 22ms
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 2s
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=False,n_cols=500,n_tables=1,part_type=compact OK 7s 205ms
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=True,n_cols=10,n_tables=3,part_type=unspecified XError 5m 57s
/s3/gcs/part 2/combinatoric table/engine=ReplacingMergeTree,replicated=True,n_cols=10,n_tables=3,part_type=wide OK 1m 19s
/s3/gcs/part 2/combinatoric table/engine=CollapsingMergeTree,replicated=True,n_cols=500,n_tables=3,part_type=compact OK 42s 34ms
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=True,n_cols=500,n_tables=1,part_type=unspecified OK 14s 990ms
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=True,n_cols=10,n_tables=3,part_type=wide OK 1m 14s
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=10,n_tables=3,part_type=wide OK 1m 15s
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=10,n_tables=1,part_type=unspecified OK 12s 170ms
/s3/gcs/part 2/zero copy replication Skip 1ms
/s3/gcs/part 2/backup OK 3m 32s
/s3/gcs/part 2/backup/local and s3 disk OK 1m 10s
/s3/gcs/part 2/backup/local and s3 volumes OK 1m 6s
/s3/gcs/part 2/backup/s3 disk OK 1m 15s
/s3/gcs/part 2/orphans Skip 1ms
/s3/gcs/part 2/settings OK 30m 46s
/s3/gcs/part 2/settings/setting combinations OK 29m 51s
/s3/gcs/part 2/table function performance OK 8m 48s
/s3/gcs/part 2/table function performance/setup OK 845ms
/s3/gcs/part 2/table function performance/wildcard OK 8m 47s

Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922