Parquet Test Run Report

Date: Feb 18, 2026 16:45
Duration: 1h 11m
Framework: TestFlows 2.0.250110.1002922

Artifacts

Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#REFs/v25.8.16.10001.altinitystable/50d19e9216a5e7d6b48ee263986e7ccae8cb2f18/regression/

Attributes

project: Altinity/ClickHouse
project.id: 159717931
user.name: strtgbb
version: 25.8.16.10001.altinitystable
package: https://altinity-build-artifacts.s3.amazonaws.com/REFs/v25.8.16.10001.altinitystable/50d19e9216a5e7d6b48ee263986e7ccae8cb2f18/build_arm_binary/clickhouse
repository: https://github.com/Altinity/clickhouse-regression
commit.hash: 979bb27171f92724bcd8f086989ba623f2e03fdc
job.name: suite
job.retry: 1
job.url: https://github.com/Altinity/ClickHouse/actions/runs/22144652887
arch: aarch64
local: True
clickhouse_version: None
clickhouse_path: https://altinity-build-artifacts.s3.amazonaws.com/REFs/v25.8.16.10001.altinitystable/50d19e9216a5e7d6b48ee263986e7ccae8cb2f18/build_arm_binary/clickhouse
as_binary: False
base_os: None
keeper_path: None
zookeeper_version: None
use_keeper: False
stress: False
collect_service_logs: True
thread_fuzzer: False
with_analyzer: True
reuse_env: False
cicd: True
storages: None
minio_uri: Secret(name='minio_uri')
minio_root_user: Secret(name='minio_root_user')
minio_root_password: Secret(name='minio_root_password')
aws_s3_bucket: None
aws_s3_region: Secret(name='aws_s3_region')
aws_s3_key_id: Secret(name='aws_s3_key_id')
aws_s3_access_key: Secret(name='aws_s3_access_key')
gcs_uri: None
gcs_key_id: None
gcs_key_secret: None
azure_account_name: None
azure_storage_key: None
azure_container: None
stress_bloom: False

Summary

100% OK
<1% Known

Statistics

             Units    Skip      OK    Fail   XFail
Modules          1       -       1       -       -
Suites           2       -       2       -       -
Features        39       2      36       -       1
Scenarios      223      18     200       -       5
Checks       59674       -   59674       -       -
Examples        12       -      12       -       -
Steps       379425      44  377224      20    2137

Known Fails

Test Name Result Message
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine XFail 7s 336ms
This fails because of a difference in snapshot values. The stored snapshot captured the datetime value `0` converted to 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the behavior cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 834, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 904, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_nullable_datetime_
    snapshot_value="""

        {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2106-02-07 06:28:16","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-08-10 21:14:26","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-01-25 20:44:12","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2009-05-30 23:43:18","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2015-12-28 10:07:33","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2017-06-27 03:09:51","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """,
    actual_value="""

        {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"1970-01-01 01:00:00","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-08-10 21:14:26","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-01-25 20:44:12","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2009-05-30 23:43:18","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2015-12-28 10:07:33","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2017-06-27 03:09:51","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        -{"nullable_datetime_":"2106-02-07 06:28:16","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        +{"nullable_datetime_":"1970-01-01 01:00:00","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 904 in 'execute_query'

896|                  with values() as that:
897|                      snapshot_result = snapshot(
898|                          "\n" + r.output.strip() + "\n",
899|                          id=snapshot_id,
900|                          name=snapshot_name,
901|                          encoder=str,
902|                          mode=snapshot.CHECK,
903|                      )
904|>                     assert that(snapshot_result), error()
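
For a quick manual spot-check of the behavior described above, a standalone query like the one below can be run against the same build. This is a hedged sketch, not the suite's actual step; it only shows how the server renders a Unix timestamp of 0 as a Nullable(DateTime):

    -- Hedged manual check (not the test's exact objects): a timestamp of 0 should
    -- render as the epoch in the server time zone (1970-01-01 01:00:00 on a UTC+1
    -- server), not as 2106-02-07 06:28:16, which sits one second past the 32-bit
    -- DateTime maximum shown in the adjacent snapshot row.
    SELECT toNullable(toDateTime(0)) AS nullable_datetime_,
           toTypeName(toNullable(toDateTime(0)))
    FORMAT JSONEachRow;
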
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine XFail 7s 236ms
This fails because of a difference in snapshot values. The stored snapshot captured the datetime value `0` converted to 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the behavior cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 834, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 904, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_nullable_datetime_
    snapshot_value="""

        {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2106-02-07 06:28:16","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-08-10 21:14:26","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-01-25 20:44:12","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2009-05-30 23:43:18","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2015-12-28 10:07:33","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2017-06-27 03:09:51","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """,
    actual_value="""

        {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"1970-01-01 01:00:00","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-08-10 21:14:26","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-01-25 20:44:12","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2009-05-30 23:43:18","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2015-12-28 10:07:33","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2017-06-27 03:09:51","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        -{"nullable_datetime_":"2106-02-07 06:28:16","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        +{"nullable_datetime_":"1970-01-01 01:00:00","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 904 in 'execute_query'

896|                  with values() as that:
897|                      snapshot_result = snapshot(
898|                          "\n" + r.output.strip() + "\n",
899|                          id=snapshot_id,
900|                          name=snapshot_name,
901|                          encoder=str,
902|                          mode=snapshot.CHECK,
903|                      )
904|>                     assert that(snapshot_result), error()
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine XFail 7s 239ms
This fails because of a difference in snapshot values. The stored snapshot captured the datetime value `0` converted to 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the behavior cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 834, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 904, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_nullable_datetime_
    snapshot_value="""

        {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2106-02-07 06:28:16","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-08-10 21:14:26","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-01-25 20:44:12","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2009-05-30 23:43:18","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2015-12-28 10:07:33","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2017-06-27 03:09:51","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """,
    actual_value="""

        {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"1970-01-01 01:00:00","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-08-10 21:14:26","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2006-01-25 20:44:12","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2009-05-30 23:43:18","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2015-12-28 10:07:33","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        {"nullable_datetime_":"2017-06-27 03:09:51","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"nullable_datetime_":"2106-02-07 06:28:15","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        -{"nullable_datetime_":"2106-02-07 06:28:16","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
        +{"nullable_datetime_":"1970-01-01 01:00:00","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":null,"toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":"2014-04-10 09:39:46","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
         {"nullable_datetime_":"2004-04-05 13:18:38","toTypeName(nullable_datetime_)":"Nullable(DateTime)"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 904 in 'execute_query'

896|                  with values() as that:
897|                      snapshot_result = snapshot(
898|                          "\n" + r.output.strip() + "\n",
899|                          id=snapshot_id,
900|                          name=snapshot_name,
901|                          encoder=str,
902|                          mode=snapshot.CHECK,
903|                      )
904|>                     assert that(snapshot_result), error()
/parquet/chunked array XFail 14s 448ms
Not supported
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1163, in query
    assert False, error(r.output)
           ^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert False, error(r.output)

Description
  Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 25.8.16.10001.altinitystable (altinity build))
(query: INSERT INTO table_a81ffb2a_0cf0_11f1_850f_9200073c9f1a FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet
)

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1163 in 'query'

1155|                  assert message in r.output, error(r.output)
1156|  
1157|          if not ignore_exception:
1158|              if message is None or "Exception:" not in message:
1159|                  with Then("check if output has exception") if steps else NullStep():
1160|                      if "Exception:" in r.output:
1161|                          if raise_on_exception:
1162|                              raise QueryRuntimeException(r.output)
1163|>                         assert False, error(r.output)
1164|  
1165|          return r
1166|
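
The failing statement appears verbatim in the description above. A minimal standalone version of the same code path is sketched below; the nested column type is an assumption, since the schema of chunked_array_test_file.parquet is not included in this report:

    -- Hypothetical reproduction sketch (the column type is assumed). Reading a
    -- Parquet file whose nested column comes back from Arrow as a chunked array
    -- currently fails in ParquetBlockInputFormat with NotImplemented /
    -- CANNOT_READ_ALL_DATA, as shown in the traceback above.
    CREATE TABLE chunked_array_repro
    (
        nested Map(String, Array(String))
    )
    ENGINE = MergeTree
    ORDER BY tuple();

    INSERT INTO chunked_array_repro
    FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet'
    FORMAT Parquet;
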
/parquet/datatypes/float16 XFail 295ms
ClickHouse does not import FLOAT16 properly
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16
    assert output == expected, error()
           ^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == expected, error()

Assertion values
  assert output == expected, error()
         ^ is '[-2,-1,0,1,2,3,4,5,6,7,8]'
  assert output == expected, error()
                   ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]'
  assert output == expected, error()
                ^ is = False
    @@ -1 +1 @@
    -[-2,-1,0,1,2,3,4,5,6,7,8]
    +[-2,-1,0,1,2,3,4,5,6,7,8,9]
  assert output == expected, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16'

105|                  ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet)
106|                  """
107|              )
108|  
109|          with Then("I read the contents of the created table"):
110|              output = node.query(
111|                  f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV"
112|              ).output
113|>             assert output == expected, error()
114|  
115|      finally:
116|          with Finally("I drop the table"):
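
Based on the code context above, a self-contained version of this check looks roughly like the following; the Parquet file name is a placeholder for the suite's Float16 sample:

    -- Sketch of the float16 check (the file name is an assumption): import the
    -- Float16 column into a table, then compare the rounded values.
    CREATE TABLE float16_check
    ENGINE = MergeTree
    ORDER BY tuple() AS
    SELECT floatfield FROM file('float16_example.parquet', Parquet);

    SELECT groupArray(round(*)) FROM float16_check FORMAT TSV;
    -- expected: [-2,-1,0,1,2,3,4,5,6,7,8,9]
    -- observed: [-2,-1,0,1,2,3,4,5,6,7,8]  (the final element is missing)
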
/parquet/datatypes/large string map XFail 5s 711ms
Will fail until https://github.com/apache/arrow/pull/35825 is merged.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map
    import_export(snapshot_name="large_string_map_structure", import_file=import_file)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 37, in import_export
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1163, in query
    assert False, error(r.output)
           ^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert False, error(r.output)

Description
  Received exception from server (version 25.8.16):
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA)
(query: CREATE TABLE table_d64fdaf6_0cf0_11f1_83f8_9200073c9f1a
            ENGINE = MergeTree
            ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated
            )

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1163 in 'query'

1155|                  assert message in r.output, error(r.output)
1156|  
1157|          if not ignore_exception:
1158|              if message is None or "Exception:" not in message:
1159|                  with Then("check if output has exception") if steps else NullStep():
1160|                      if "Exception:" in r.output:
1161|                          if raise_on_exception:
1162|                              raise QueryRuntimeException(r.output)
1163|>                         assert False, error(r.output)
1164|  
1165|          return r
1166|

Results

Test Name Result Duration
/parquet OK 1h 11m
/parquet/file OK 11m 41s
/parquet/file/engine OK 11m 41s
/parquet/file/engine/insert into engine OK 5m 23s
/parquet/file/function OK 5m 19s
/parquet/file/engine/select from engine OK 2m 32s
/parquet/file/function/insert into function manual cast types OK 4m 55s
/parquet/file/function/insert into function auto cast types OK 5m 19s
/parquet/file/engine/engine to file to engine OK 8m 41s
/parquet/file/function/select from function manual cast types OK 3m 5s
/parquet/file/function/select from function auto cast types OK 2m 35s
/parquet/file/engine/insert into engine from file OK 4m 56s
/parquet/file/function/date as uint16 OK 3s 0ms
/parquet/file/engine/engine select output to file OK 11m 41s
/parquet/file/function/date as uint16 multiple dates OK 1s 710ms
/parquet/file/function/date as uint16 nullable OK 1s 184ms
/parquet/file/function/date as uint16 round trip OK 2s 103ms
/parquet/file/function/date as uint16 edge cases OK 901ms
/parquet/file/function/date as uint16 with other columns OK 602ms
/parquet/query OK 21m 14s
/parquet/query/compression type OK 21m 14s
/parquet/query/compression type/=NONE OK 21m 11s
/parquet/query/compression type/=NONE /insert into memory table from file OK 1m 5s
/parquet/query/compression type/=GZIP OK 21m 14s
/parquet/query/compression type/=GZIP /insert into memory table from file OK 1m 5s
/parquet/query/compression type/=LZ4 OK 21m 13s
/parquet/query/compression type/=LZ4 /insert into memory table from file OK 1m 5s
/parquet/query/compression type/=LZ4 /insert into mergetree table from file OK 1m 5s
/parquet/query/compression type/=NONE /insert into mergetree table from file OK 1m 4s
/parquet/query/compression type/=GZIP /insert into mergetree table from file OK 1m 4s
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file OK 1m 6s
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file OK 1m 6s
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file OK 1m 5s
/parquet/query/compression type/=GZIP /insert into distributed table from file OK 1m 10s
/parquet/query/compression type/=LZ4 /insert into distributed table from file OK 1m 10s
/parquet/query/compression type/=NONE /insert into distributed table from file OK 1m 10s
/parquet/query/compression type/=LZ4 /select from memory table into file OK 3m 8s
/parquet/query/compression type/=NONE /select from memory table into file OK 3m 5s
/parquet/query/compression type/=GZIP /select from memory table into file OK 3m 6s
/parquet/query/compression type/=NONE /select from mergetree table into file OK 3m 4s
/parquet/query/compression type/=GZIP /select from mergetree table into file OK 3m 4s
/parquet/query/compression type/=LZ4 /select from mergetree table into file OK 3m 3s
/parquet/query/compression type/=NONE /select from replicated mergetree table into file OK 3m 8s
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file OK 3m 9s
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file OK 3m 9s
/parquet/query/compression type/=NONE /select from distributed table into file OK 3m 26s
/parquet/query/compression type/=GZIP /select from distributed table into file OK 3m 26s
/parquet/query/compression type/=LZ4 /select from distributed table into file OK 3m 26s
/parquet/query/compression type/=NONE /select from mat view into file OK 3m 2s
/parquet/query/compression type/=LZ4 /select from mat view into file OK 3m 4s
/parquet/query/compression type/=GZIP /select from mat view into file OK 3m 6s
/parquet/query/compression type/=NONE /insert into table with projection from file OK 54s 808ms
/parquet/query/compression type/=LZ4 /insert into table with projection from file OK 53s 510ms
/parquet/query/compression type/=GZIP /insert into table with projection from file OK 53s 56ms
/parquet/list in multiple chunks OK 18s 414ms
/parquet/url OK 11m 34s
/parquet/url/engine OK 11m 1s
/parquet/url/function OK 4m 46s
/parquet/url/engine/insert into engine OK 4m 28s
/parquet/url/function/insert into function OK 3m 19s
/parquet/url/engine/select from engine OK 1m 15s
/parquet/url/engine/engine to file to engine OK 7m 52s
/parquet/url/function/select from function manual cast types OK 4m 45s
/parquet/url/function/select from function auto cast types OK 3m 18s
/parquet/url/engine/insert into engine from file OK 7m 7s
/parquet/url/engine/engine select output to file OK 11m 0s
/parquet/mysql OK 10s 393ms
/parquet/mysql/compression type OK 10s 392ms
/parquet/mysql/compression type/=NONE OK 10s 268ms
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine OK 6s 82ms
/parquet/mysql/compression type/=GZIP OK 10s 377ms
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine OK 6s 65ms
/parquet/mysql/compression type/=LZ4 OK 10s 375ms
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine OK 6s 50ms
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function OK 4s 322ms
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function OK 4s 310ms
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function OK 4s 184ms
/parquet/postgresql OK 12s 70ms
/parquet/postgresql/compression type OK 12s 68ms
/parquet/postgresql/compression type/=NONE OK 12s 64ms
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine XFail 7s 336ms
/parquet/postgresql/compression type/=GZIP OK 12s 60ms
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine XFail 7s 236ms
/parquet/postgresql/compression type/=LZ4 OK 11s 949ms
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine XFail 7s 239ms
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function OK 4s 820ms
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function OK 4s 707ms
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function OK 4s 725ms
/parquet/remote OK 4m 33s
/parquet/remote/compression type OK 4m 33s
/parquet/remote/compression type/=NONE OK 4m 31s
/parquet/remote/compression type/=NONE /outline OK 4m 31s
/parquet/remote/compression type/=NONE /outline/insert into function OK 1m 27s
/parquet/remote/compression type/=GZIP OK 4m 31s
/parquet/remote/compression type/=GZIP /outline OK 4m 31s
/parquet/remote/compression type/=GZIP /outline/insert into function OK 1m 26s
/parquet/remote/compression type/=LZ4 OK 4m 33s
/parquet/remote/compression type/=LZ4 /outline OK 4m 33s
/parquet/remote/compression type/=LZ4 /outline/insert into function OK 1m 27s
/parquet/remote/compression type/=GZIP /outline/select from function OK 3m 4s
/parquet/remote/compression type/=LZ4 /outline/select from function OK 3m 6s
/parquet/remote/compression type/=NONE /outline/select from function OK 3m 4s
/parquet/chunked array XFail 14s 448ms
/parquet/broken OK 16ms
/parquet/broken/file Skip 1ms
/parquet/broken/read broken bigint Skip 886us
/parquet/broken/read broken date Skip 849us
/parquet/broken/read broken int Skip 1ms
/parquet/broken/read broken smallint Skip 787us
/parquet/broken/read broken timestamp ms Skip 865us
/parquet/broken/read broken timestamp us Skip 854us
/parquet/broken/read broken tinyint Skip 1ms
/parquet/broken/read broken ubigint Skip 877us
/parquet/broken/read broken uint Skip 830us
/parquet/broken/read broken usmallint Skip 828us
/parquet/broken/read broken utinyint Skip 1ms
/parquet/broken/string Skip 847us
/parquet/encoding OK 8s 258ms
/parquet/encoding/deltabytearray1 OK 1s 573ms
/parquet/encoding/deltabytearray2 OK 1s 150ms
/parquet/encoding/deltalengthbytearray OK 1s 119ms
/parquet/encoding/dictionary OK 1s 57ms
/parquet/encoding/plain OK 1s 56ms
/parquet/encoding/plainrlesnappy OK 1s 150ms
/parquet/encoding/rleboolean OK 1s 147ms
/parquet/compression OK 24s 73ms
/parquet/compression/arrow snappy OK 1s 145ms
/parquet/compression/brotli OK 1s 163ms
/parquet/compression/gzippages OK 2s 306ms
/parquet/compression/largegzip OK 1s 174ms
/parquet/compression/lz4 hadoop OK 1s 158ms
/parquet/compression/lz4 hadoop large OK 1s 109ms
/parquet/compression/lz4 non hadoop OK 1s 139ms
/parquet/compression/lz4 raw OK 1s 138ms
/parquet/compression/lz4 raw large OK 1s 155ms
/parquet/compression/lz4pages OK 2s 289ms
/parquet/compression/nonepages OK 2s 317ms
/parquet/compression/snappypages OK 2s 313ms
/parquet/compression/snappyplain OK 1s 59ms
/parquet/compression/snappyrle OK 1s 158ms
/parquet/compression/zstd OK 1s 135ms
/parquet/compression/zstdpages OK 2s 301ms
/parquet/datatypes OK 1m 17s
/parquet/datatypes/arrowtimestamp OK 1s 29ms
/parquet/datatypes/arrowtimestampms OK 1s 41ms
/parquet/datatypes/binary OK 1s 141ms
/parquet/datatypes/binary string OK 1s 126ms
/parquet/datatypes/blob OK 1s 128ms
/parquet/datatypes/boolean OK 1s 142ms
/parquet/datatypes/byte array OK 1s 102ms
/parquet/datatypes/columnname OK 1s 137ms
/parquet/datatypes/columnwithnull OK 1s 174ms
/parquet/datatypes/columnwithnull2 OK 1s 101ms
/parquet/datatypes/date OK 1s 154ms
/parquet/datatypes/decimal with filter OK 1s 151ms
/parquet/datatypes/decimalvariousfilters OK 1s 132ms
/parquet/datatypes/decimalwithfilter2 OK 1s 124ms
/parquet/datatypes/enum OK 1s 176ms
/parquet/datatypes/enum2 OK 1s 148ms
/parquet/datatypes/fixed length decimal OK 1s 123ms
/parquet/datatypes/fixed length decimal legacy OK 1s 96ms
/parquet/datatypes/fixedstring OK 1s 141ms
/parquet/datatypes/float16 XFail 295ms
/parquet/datatypes/h2oai OK 1s 182ms
/parquet/datatypes/hive OK 2s 235ms
/parquet/datatypes/int32 OK 1s 131ms
/parquet/datatypes/int32 decimal OK 1s 102ms
/parquet/datatypes/int64 OK 1s 135ms
/parquet/datatypes/int64 decimal OK 1s 137ms
/parquet/datatypes/json OK 1s 81ms
/parquet/datatypes/large string map XFail 5s 711ms
/parquet/datatypes/largedouble OK 1s 262ms
/parquet/datatypes/manydatatypes OK 1s 106ms
/parquet/datatypes/manydatatypes2 OK 1s 186ms
/parquet/datatypes/maps OK 1s 147ms
/parquet/datatypes/nameswithemoji OK 1s 157ms
/parquet/datatypes/nandouble OK 1s 127ms
/parquet/datatypes/negativeint64 OK 1s 30ms
/parquet/datatypes/nullbyte OK 1s 165ms
/parquet/datatypes/nullbytemultiple OK 1s 134ms
/parquet/datatypes/nullsinid OK 1s 116ms
/parquet/datatypes/pandasdecimal OK 1s 171ms
/parquet/datatypes/pandasdecimaldate OK 1s 140ms
/parquet/datatypes/parquetgo OK 1s 91ms
/parquet/datatypes/selectdatewithfilter OK 6s 29ms
/parquet/datatypes/singlenull OK 1s 126ms
/parquet/datatypes/sparkv21 OK 1s 202ms
/parquet/datatypes/sparkv22 OK 1s 129ms
/parquet/datatypes/statdecimal OK 1s 132ms
/parquet/datatypes/string OK 1s 138ms
/parquet/datatypes/string int list inconsistent offset multiple batches OK 5s 344ms
/parquet/datatypes/stringtypes OK 1s 110ms
/parquet/datatypes/struct OK 1s 128ms
/parquet/datatypes/supporteduuid OK 1s 122ms
/parquet/datatypes/timestamp1 OK 1s 20ms
/parquet/datatypes/timestamp2 OK 1s 67ms
/parquet/datatypes/timezone OK 1s 17ms
/parquet/datatypes/unsigned OK 2s 289ms
/parquet/datatypes/unsupportednull OK 193ms
/parquet/complex OK 23s 458ms
/parquet/complex/arraystring OK 1s 159ms
/parquet/complex/big tuple with nulls OK 1s 188ms
/parquet/complex/bytearraydictionary OK 1s 150ms
/parquet/complex/complex null OK 1s 145ms
/parquet/complex/lagemap OK 1s 140ms
/parquet/complex/largenestedarray OK 1s 158ms
/parquet/complex/largestruct OK 1s 147ms
/parquet/complex/largestruct2 OK 1s 501ms
/parquet/complex/largestruct3 OK 1s 69ms
/parquet/complex/list OK 1s 136ms
/parquet/complex/nested array OK 1s 168ms
/parquet/complex/nested map OK 1s 136ms
/parquet/complex/nestedallcomplex OK 1s 270ms
/parquet/complex/nestedarray2 OK 1s 126ms
/parquet/complex/nestedstruct OK 1s 112ms
/parquet/complex/nestedstruct2 OK 1s 142ms
/parquet/complex/nestedstruct3 OK 1s 121ms
/parquet/complex/nestedstruct4 OK 1s 314ms
/parquet/complex/tupleofnulls OK 1s 191ms
/parquet/complex/tuplewithdatetime OK 1s 67ms
/parquet/cache OK 2s 305ms
/parquet/cache/cache1 OK 1s 139ms
/parquet/cache/cache2 OK 1s 164ms
/parquet/glob OK 39s 272ms
/parquet/glob/fastparquet globs OK 739ms
/parquet/glob/glob1 OK 1s 260ms
/parquet/glob/glob2 OK 1s 551ms
/parquet/glob/glob with multiple elements OK 306ms
/parquet/glob/million extensions OK 35s 411ms
/parquet/rowgroups OK 2s 322ms
/parquet/rowgroups/manyrowgroups OK 1s 171ms
/parquet/rowgroups/manyrowgroups2 OK 1s 149ms
/parquet/encrypted Skip 1ms
/parquet/fastparquet OK 7ms
/parquet/fastparquet/airlines Skip 898us
/parquet/fastparquet/baz Skip 809us
/parquet/fastparquet/empty date Skip 1ms
/parquet/fastparquet/evo Skip 920us
/parquet/fastparquet/fastparquet Skip 777us
/parquet/read and write OK 14m 22s
/parquet/read and write/read and write parquet file OK 14m 22s
/parquet/column related errors OK 657ms
/parquet/column related errors/check error with 500 columns OK 656ms
/parquet/multi chunk upload Skip 1ms

Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922