S3 Test Run Report

Date       Jul 28, 2025 9:48
Duration   2h 11m
Framework  TestFlows 2.0.250110.1002922

Artifacts

Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#0/d32d0074004db61e346611c777e26532a456fe2f/regression/aarch64/with_analyzer/zookeeper/without_thread_fuzzer/s3/gcs/

Attributes

project                Altinity/ClickHouse
project.id             159717931
package                https://s3.amazonaws.com/altinity-build-artifacts/25.3/d32d0074004db61e346611c777e26532a456fe2f/package_aarch64/clickhouse-common-static_25.3.6.10034.altinitystable_arm64.deb
version                25.3.6.10034.altinitystable
user.name              zvonand
repository             https://github.com/Altinity/clickhouse-regression
commit.hash            5723e20cbc49b347114c7b90c7316a44dafa5328
job.name               S3 (gcs)
job.retry              1
job.url                https://github.com/Altinity/ClickHouse/actions/runs/16564472800
arch                   aarch64
local                  True
clickhouse_version     None
clickhouse_path        https://s3.amazonaws.com/altinity-build-artifacts/25.3/d32d0074004db61e346611c777e26532a456fe2f/package_aarch64/clickhouse-common-static_25.3.6.10034.altinitystable_arm64.deb
as_binary              False
base_os                None
keeper_path            None
zookeeper_version      None
use_keeper             False
stress                 False
collect_service_logs   True
thread_fuzzer          False
with_analyzer          True
reuse_env              False
storages               ['gcs']
minio_uri              Secret(name='minio_uri')
minio_root_user        Secret(name='minio_root_user')
minio_root_password    Secret(name='minio_root_password')
aws_s3_bucket          Secret(name='aws_s3_bucket')
aws_s3_region          Secret(name='aws_s3_region')
aws_s3_key_id          Secret(name='aws_s3_key_id')
aws_s3_access_key      Secret(name='aws_s3_access_key')
gcs_uri                Secret(name='gcs_uri')
gcs_key_id             Secret(name='gcs_key_id')
gcs_key_secret         Secret(name='gcs_key_secret')
azure_account_name     Secret(name='azure_account_name')
azure_storage_key      Secret(name='azure_storage_key')
azure_container        Secret(name='azure_container')

Summary

90%   OK
7.2%  Known

Statistics

            Units  Skip     OK  Fail  Error  XFail  XError  Retried
Modules         1             1
Features       15     2     13
Scenarios     198    12    145                  38       3
Checks         54           54
Examples       54     2     52
Steps       30473    30  30421    13      8                       1

Known Fails

Test Name  Result  Message
/s3/gcs/part 1/invalid table function/invalid path XError 30s 51ms
https://github.com/ClickHouse/ClickHouse/issues/59084
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.040s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 813, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 785, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 726, in gcs_regression
    Feature(test=load("s3.tests.table_function_invalid", "gcs"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 411, in gcs
    outline(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 395, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 115, in invalid_path
    insert_to_s3_function_invalid(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/table_function_invalid.py", line 34, in insert_to_s3_function_invalid
    node.query(query, message=message, exitcode=exitcode, timeout=timeout)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1147, in query
    r = self.cluster.bash(self.name)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 30.000s for '(bash# )\|(\n)'
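
The trace above ends in node.query(query, message=message, exitcode=exitcode, timeout=timeout), i.e. a query that is expected to fail with a given error message and exit code before the timeout. A minimal sketch of that kind of negative check, assuming a locally available clickhouse-client and illustrative argument names (this is not the repository's helpers/cluster.py implementation):

    # Hypothetical sketch of an "expected failure" query check.
    import subprocess

    def query_expect_error(query: str, message: str, exitcode: int, timeout: float = 30.0):
        """Run a query via clickhouse-client and assert it fails as expected."""
        result = subprocess.run(
            ["clickhouse-client", "--query", query],
            capture_output=True,
            text=True,
            timeout=timeout,  # raises subprocess.TimeoutExpired if no answer arrives,
                              # analogous to the ExpectTimeoutError reported above
        )
        assert result.returncode == exitcode, f"unexpected exit code: {result.returncode}"
        assert message in result.stderr, f"expected error text not found: {message!r}"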
/s3/gcs/part 1/disk/cache XFail 2ms
Under development for 22.8 and newer.
None
/s3/gcs/part 1/disk/cache default XFail 891us
Under development for 22.8 and newer.
None
/s3/gcs/part 1/disk/cache path XFail 864us
Under development for 22.8 and newer.
None
/s3/gcs/part 1/disk/low cardinality offset XFail 46s 208ms
https://github.com/ClickHouse/ClickHouse/pull/44875
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 813, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 785, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 727, in gcs_regression
    Feature(test=load("s3.tests.disk", "gcs"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2560, in gcs
    disk_tests(uri=uri, bucket_prefix=bucket_prefix)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2538, in disk_tests
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2500, in low_cardinality_offset
    assert output == "23999\n", error()
           ^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == "23999\n", error()

Assertion values
  assert output == "23999\n", error()
         ^ is '23999'
  assert output == "23999\n", error()
                ^ is = False

  assert output == "23999\n", error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py', line 2500 in 'low_cardinality_offset'

2492\|                      "1",
2493\|                  ),
2494\|                  (
2495\|                      "merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem",
2496\|                      "1",
2497\|                  ),
2498\|              ],
2499\|          ).output
2500\|>         assert output == "23999\n", error()
2501\|  
2502\|  
2503\|  @TestFeature
/s3/gcs/part 1/disk/no restart XFail 1ms
https://github.com/ClickHouse/ClickHouse/issues/58924
None
/s3/gcs/part 1/invalid disk/cache path conflict XFail 2ms
Under development for 22.8 and newer.
None
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=wide XFail 4m 19s
Needs investigation, rows not appearing
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 813, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 785, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 735, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 149, in check_table_combination
    retry(assert_row_count, timeout=60, delay=5)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py", line 1754, in assert_row_count
    assert rows == actual_count, error()
           ^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert rows == actual_count, error()

Assertion values
  assert rows == actual_count, error()
         ^ is 1500
  assert rows == actual_count, error()
                 ^ is 0
  assert rows == actual_count, error()
              ^ is = False
  assert rows == actual_count, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py', line 1754 in 'assert_row_count'

1746\|  
1747\|  @TestStep(Then)
1748\|  def assert_row_count(self, node, table_name: str, rows: int = 1000000):
1749\|      """Assert that the number of rows in a table is as expected."""
1750\|      if node is None:
1751\|          node = current().context.node
1752\|  
1753\|      actual_count = get_row_count(node=node, table_name=table_name)
1754\|>     assert rows == actual_count, error()
1755\|  
1756\|  
1757\|  @TestStep(Then)
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 1s
Times out, needs investigation
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.094s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 813, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 785, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 735, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 117, in check_table_combination
    table = create_test_table(
            ^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 81, in create_test_table
    yield create_table(
          ^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/tables.py", line 482, in create_table
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1123, in query
    r = self.cluster.bash(None)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 300.000s for '(bash# )\|(\n)'
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 1s
Times out, needs investigation
ExpectTimeoutError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 223, in read
    d = self.queue.get(timeout=timeleft)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 187, in expect
    data = self.read(timeout=min(timeleft, 0.1), raise_exception=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/venv/lib/python3.12/site-packages/testflows/uexpect/uexpect.py", line 235, in read
    raise TimeoutError(timeout)
testflows.uexpect.uexpect.TimeoutError: Timeout 0.084s

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 813, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 171, in capture_cluster_args
    return func(self, cluster_args=cluster_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/argparser.py", line 348, in capture_s3_args
    return func(self, s3_args=s3_args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 785, in regression
    Feature(test=gcs_regression)(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 735, in gcs_regression
    Feature(test=load("s3.tests.combinatoric_table", "feature"))(uri=uri)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 182, in feature
    Scenario(title, test=check_table_combination)(**table_config)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 117, in check_table_combination
    table = create_test_table(
            ^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/combinatoric_table.py", line 81, in create_test_table
    yield create_table(
          ^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/tables.py", line 482, in create_table
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../helpers/cluster.py", line 1123, in query
    r = self.cluster.bash(None)(command, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
testflows.uexpect.uexpect.ExpectTimeoutError: Timeout 300.000s for '(bash# )\|(\n)'
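
The combinatoric table failures above go through retry(assert_row_count, timeout=60, delay=5), i.e. the expected row count is polled for up to a minute before the assertion is treated as failed. A minimal sketch of that retry-and-assert pattern, with get_row_count as a hypothetical stand-in for the framework's helper (this is not the helpers/common.py implementation):

    # Hypothetical sketch of polling a table's row count until it matches or a timeout expires.
    import time

    def wait_for_row_count(get_row_count, table_name: str, rows: int,
                           timeout: float = 60.0, delay: float = 5.0):
        """Poll get_row_count(table_name) until it equals rows or timeout elapses."""
        deadline = time.monotonic() + timeout
        while True:
            actual_count = get_row_count(table_name)
            if actual_count == rows:
                return
            if time.monotonic() >= deadline:
                raise AssertionError(
                    f"expected {rows} rows in {table_name}, got {actual_count}"
                )
            time.sleep(delay)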

Results

Test Name Result Duration
/s3 OK 2h 11m
/s3/gcs OK 2h 11m
/s3/gcs/part 1 OK 1h 3m
/s3/gcs/part 1/sanity OK 52s 223ms
/s3/gcs/part 1/sanity/sanity OK 8s 322ms
/s3/gcs/part 1/table function OK 7m 23s
/s3/gcs/part 1/table function/auto OK 1m 42s
/s3/gcs/part 1/table function/compression OK 1m 43s
/s3/gcs/part 1/table function/credentials OK 913ms
/s3/gcs/part 1/table function/credentials s3Cluster OK 10s 512ms
/s3/gcs/part 1/table function/data format OK 1m 1s
/s3/gcs/part 1/table function/measure file size Skip 1ms
/s3/gcs/part 1/table function/measure file size s3Cluster Skip 813us
/s3/gcs/part 1/table function/multipart OK 4s 92ms
/s3/gcs/part 1/table function/multiple columns OK 1s 369ms
/s3/gcs/part 1/table function/partition OK 1s 352ms
/s3/gcs/part 1/table function/partition s3Cluster OK 12s 242ms
/s3/gcs/part 1/table function/remote host filter OK 44s 279ms
/s3/gcs/part 1/table function/syntax OK 1s 56ms
/s3/gcs/part 1/table function/syntax s3Cluster OK 10s 421ms
/s3/gcs/part 1/table function/wildcard OK 1m 29s
/s3/gcs/part 1/invalid table function OK 33s 823ms
/s3/gcs/part 1/invalid table function/empty path OK 273ms
/s3/gcs/part 1/invalid table function/empty structure OK 272ms
/s3/gcs/part 1/invalid table function/invalid bucket OK 312ms
/s3/gcs/part 1/invalid table function/invalid compression OK 321ms
/s3/gcs/part 1/invalid table function/invalid credentials OK 538ms
/s3/gcs/part 1/invalid table function/invalid format OK 534ms
/s3/gcs/part 1/invalid table function/invalid path XError 30s 51ms
/s3/gcs/part 1/invalid table function/invalid region Skip 704us
/s3/gcs/part 1/invalid table function/invalid structure OK 463ms
/s3/gcs/part 1/invalid table function/invalid wildcard OK 1s 34ms
/s3/gcs/part 1/disk OK 34m 40s
/s3/gcs/part 1/disk/access OK 44s 322ms
/s3/gcs/part 1/disk/access skip check OK 45s 508ms
/s3/gcs/part 1/disk/add storage OK 1m 34s
/s3/gcs/part 1/disk/alter move OK 54s 153ms
/s3/gcs/part 1/disk/alter on cluster modify ttl OK 1m 38s
/s3/gcs/part 1/disk/cache XFail 2ms
/s3/gcs/part 1/disk/cache default XFail 891us
/s3/gcs/part 1/disk/cache path XFail 864us
/s3/gcs/part 1/disk/compact parts OK 44s 80ms
/s3/gcs/part 1/disk/config over restart OK 1m 8s
/s3/gcs/part 1/disk/default move factor OK 58s 287ms
/s3/gcs/part 1/disk/delete OK 3m 6s
/s3/gcs/part 1/disk/download appropriate disk OK 1m 13s
/s3/gcs/part 1/disk/drop sync OK 48s 126ms
/s3/gcs/part 1/disk/environment credentials Skip 1ms
/s3/gcs/part 1/disk/exports OK 48s 105ms
/s3/gcs/part 1/disk/generic url Skip 1ms
/s3/gcs/part 1/disk/imports OK 48s 245ms
/s3/gcs/part 1/disk/log OK 2m 37s
/s3/gcs/part 1/disk/low cardinality offset XFail 46s 208ms
/s3/gcs/part 1/disk/max single part upload size syntax OK 46s 143ms
/s3/gcs/part 1/disk/mergetree OK 3m 22s
/s3/gcs/part 1/disk/mergetree collapsing OK 52s 576ms
/s3/gcs/part 1/disk/mergetree versionedcollapsing OK 54s 221ms
/s3/gcs/part 1/disk/metadata OK 51s 935ms
/s3/gcs/part 1/disk/min bytes for seek syntax OK 47s 920ms
/s3/gcs/part 1/disk/multiple storage OK 53s 213ms
/s3/gcs/part 1/disk/multiple storage query OK 52s 390ms
/s3/gcs/part 1/disk/no restart XFail 1ms
/s3/gcs/part 1/disk/perform ttl move on insert OK 1m 33s
/s3/gcs/part 1/disk/perform ttl move on insert default OK 54s 475ms
/s3/gcs/part 1/disk/performance ttl move OK 1m 8s
/s3/gcs/part 1/disk/remote host filter OK 1m 34s
/s3/gcs/part 1/disk/specific url Skip 1ms
/s3/gcs/part 1/disk/syntax OK 49s 987ms
/s3/gcs/part 1/disk/wide parts OK 42s 841ms
/s3/gcs/part 1/invalid disk OK 3m 43s
/s3/gcs/part 1/invalid disk/access default OK 10s 764ms
/s3/gcs/part 1/invalid disk/access failed OK 10s 760ms
/s3/gcs/part 1/invalid disk/access failed skip check OK 44s 298ms
/s3/gcs/part 1/invalid disk/cache path conflict XFail 2ms
/s3/gcs/part 1/invalid disk/empty endpoint OK 10s 758ms
/s3/gcs/part 1/invalid disk/invalid endpoint OK 2m 5s
/s3/gcs/part 1/invalid disk/invalid type OK 21s 607ms
/s3/gcs/part 1/alter OK 16m 9s
/s3/gcs/part 1/alter/normal OK 3m 56s
/s3/gcs/part 1/alter/normal/attach from OK 14s 120ms
/s3/gcs/part 1/alter/normal/columns OK 12s 1ms
/s3/gcs/part 1/alter/normal/detach OK 23s 822ms
/s3/gcs/part 1/alter/normal/drop OK 52s 143ms
/s3/gcs/part 1/alter/normal/fetch OK 31s 445ms
/s3/gcs/part 1/alter/normal/freeze OK 21s 38ms
/s3/gcs/part 1/alter/normal/index OK 10s 669ms
/s3/gcs/part 1/alter/normal/move to table OK 17s 626ms
/s3/gcs/part 1/alter/normal/order by OK 7s 541ms
/s3/gcs/part 1/alter/normal/projection OK 11s 720ms
/s3/gcs/part 1/alter/normal/replace OK 17s 534ms
/s3/gcs/part 1/alter/normal/sample by OK 7s 527ms
/s3/gcs/part 1/alter/normal/update delete OK 8s 874ms
/s3/gcs/part 1/alter/encrypted OK 5m 31s
/s3/gcs/part 1/alter/encrypted/attach from OK 17s 925ms
/s3/gcs/part 1/alter/encrypted/columns OK 18s 727ms
/s3/gcs/part 1/alter/encrypted/detach OK 37s 305ms
/s3/gcs/part 1/alter/encrypted/drop OK 1m 11s
/s3/gcs/part 1/alter/encrypted/fetch OK 38s 643ms
/s3/gcs/part 1/alter/encrypted/freeze OK 31s 361ms
/s3/gcs/part 1/alter/encrypted/index OK 15s 820ms
/s3/gcs/part 1/alter/encrypted/move to table OK 29s 474ms
/s3/gcs/part 1/alter/encrypted/order by OK 8s 633ms
/s3/gcs/part 1/alter/encrypted/projection OK 18s 162ms
/s3/gcs/part 1/alter/encrypted/replace OK 20s 348ms
/s3/gcs/part 1/alter/encrypted/sample by OK 8s 376ms
/s3/gcs/part 1/alter/encrypted/update delete OK 15s 545ms
/s3/gcs/part 1/alter/zero copy OK 2m 11s
/s3/gcs/part 1/alter/zero copy/attach from OK 12s 535ms
/s3/gcs/part 1/alter/zero copy/columns OK 11s 261ms
/s3/gcs/part 1/alter/zero copy/detach Skip 1ms
/s3/gcs/part 1/alter/zero copy/drop Skip 1ms
/s3/gcs/part 1/alter/zero copy/fetch Skip 1ms
/s3/gcs/part 1/alter/zero copy/freeze OK 23s 225ms
/s3/gcs/part 1/alter/zero copy/index OK 10s 817ms
/s3/gcs/part 1/alter/zero copy/move to table OK 17s 988ms
/s3/gcs/part 1/alter/zero copy/order by OK 7s 566ms
/s3/gcs/part 1/alter/zero copy/projection OK 12s 810ms
/s3/gcs/part 1/alter/zero copy/replace OK 18s 398ms
/s3/gcs/part 1/alter/zero copy/sample by OK 7s 589ms
/s3/gcs/part 1/alter/zero copy/update delete OK 8s 846ms
/s3/gcs/part 1/alter/zero copy encrypted OK 3m 2s
/s3/gcs/part 1/alter/zero copy encrypted/attach from OK 16s 730ms
/s3/gcs/part 1/alter/zero copy encrypted/columns OK 19s 289ms
/s3/gcs/part 1/alter/zero copy encrypted/detach Skip 1ms
/s3/gcs/part 1/alter/zero copy encrypted/drop Skip 1ms
/s3/gcs/part 1/alter/zero copy encrypted/fetch Skip 1ms
/s3/gcs/part 1/alter/zero copy encrypted/freeze OK 30s 344ms
/s3/gcs/part 1/alter/zero copy encrypted/index OK 15s 814ms
/s3/gcs/part 1/alter/zero copy encrypted/move to table OK 29s 885ms
/s3/gcs/part 1/alter/zero copy encrypted/order by OK 8s 359ms
/s3/gcs/part 1/alter/zero copy encrypted/projection OK 18s 84ms
/s3/gcs/part 1/alter/zero copy encrypted/replace OK 20s 270ms
/s3/gcs/part 1/alter/zero copy encrypted/sample by OK 8s 516ms
/s3/gcs/part 1/alter/zero copy encrypted/update delete OK 15s 670ms
/s3/gcs/part 2 OK 1h 5m
/s3/gcs/part 2/combinatoric table OK 37m 45s
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact OK 35s 816ms
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=False,n_cols=500,n_tables=1,part_type=wide OK 4m 35s
/s3/gcs/part 2/combinatoric table/engine=ReplacingMergeTree,replicated=True,n_cols=500,n_tables=3,part_type=unspecified OK 34s 363ms
/s3/gcs/part 2/combinatoric table/engine=ReplacingMergeTree,replicated=False,n_cols=2000,n_tables=1,part_type=compact OK 4s 545ms
/s3/gcs/part 2/combinatoric table/engine=CollapsingMergeTree,replicated=True,n_cols=10,n_tables=1,part_type=wide OK 25s 490ms
/s3/gcs/part 2/combinatoric table/engine=CollapsingMergeTree,replicated=False,n_cols=2000,n_tables=3,part_type=unspecified OK 13s 623ms
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=wide XFail 4m 19s
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=False,n_cols=10,n_tables=3,part_type=compact OK 11s 851ms
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 1s
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=False,n_cols=500,n_tables=1,part_type=unspecified OK 5s 276ms
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=2000,n_tables=3,part_type=compact XError 8m 1s
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=False,n_cols=500,n_tables=1,part_type=compact OK 4s 352ms
/s3/gcs/part 2/combinatoric table/engine=MergeTree,replicated=True,n_cols=10,n_tables=3,part_type=unspecified OK 5m 9s
/s3/gcs/part 2/combinatoric table/engine=ReplacingMergeTree,replicated=True,n_cols=10,n_tables=3,part_type=wide OK 1m 14s
/s3/gcs/part 2/combinatoric table/engine=CollapsingMergeTree,replicated=True,n_cols=500,n_tables=3,part_type=compact OK 34s 42ms
/s3/gcs/part 2/combinatoric table/engine=VersionedCollapsingMergeTree,replicated=True,n_cols=500,n_tables=1,part_type=unspecified OK 10s 509ms
/s3/gcs/part 2/combinatoric table/engine=AggregatingMergeTree,replicated=True,n_cols=10,n_tables=3,part_type=wide OK 1m 14s
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=10,n_tables=3,part_type=wide OK 1m 13s
/s3/gcs/part 2/combinatoric table/engine=SummingMergeTree,replicated=True,n_cols=10,n_tables=1,part_type=unspecified OK 12s 193ms
/s3/gcs/part 2/zero copy replication Skip 1ms
/s3/gcs/part 2/backup OK 3m 6s
/s3/gcs/part 2/backup/local and s3 disk OK 1m 0s
/s3/gcs/part 2/backup/local and s3 volumes OK 58s 149ms
/s3/gcs/part 2/backup/s3 disk OK 1m 6s
/s3/gcs/part 2/orphans Skip 1ms
/s3/gcs/part 2/settings OK 17m 9s
/s3/gcs/part 2/settings/setting combinations OK 16m 26s
/s3/gcs/part 2/table function performance OK 7m 43s
/s3/gcs/part 2/table function performance/setup OK 663ms
/s3/gcs/part 2/table function performance/wildcard OK 7m 42s

Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922