Parquet Test Run Report

Date: Apr 01, 2025 23:27
Duration: 1h 32m
Framework: TestFlows 2.0.250110.1002922

Artifacts

Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#708/12fb72fa077bd3f529f48fabb290d280d46348de/regression/x86_64/with_analyzer/zookeeper/without_thread_fuzzer/parquetminio/

Attributes

project: Altinity/ClickHouse
project.id: 159717931
package: https://s3.amazonaws.com/altinity-build-artifacts/PRs/708/12fb72fa077bd3f529f48fabb290d280d46348de/package_release/clickhouse-common-static_24.12.2.20221.altinityantalya_amd64.deb
version: 24.12.2.20221.altinityantalya
user.name: ianton-ru
repository: https://github.com/Altinity/clickhouse-regression
commit.hash: bd31e738c0cedaca253d15a05ed245c41b6e0b6a
job.name: ParquetS3
job.retry: 1
job.url: https://github.com/Altinity/ClickHouse/actions/runs/14205957338
arch: x86_64
local: True
clickhouse_version: None
clickhouse_path: https://s3.amazonaws.com/altinity-build-artifacts/PRs/708/12fb72fa077bd3f529f48fabb290d280d46348de/package_release/clickhouse-common-static_24.12.2.20221.altinityantalya_amd64.deb
as_binary: False
base_os: None
keeper_path: None
zookeeper_version: None
use_keeper: False
stress: False
collect_service_logs: True
thread_fuzzer: False
with_analyzer: True
reuse_env: False
storages: ['minio']
minio_uri: Secret(name='minio_uri')
minio_root_user: Secret(name='minio_root_user')
minio_root_password: Secret(name='minio_root_password')
aws_s3_bucket: Secret(name='aws_s3_bucket')
aws_s3_region: Secret(name='aws_s3_region')
aws_s3_key_id: Secret(name='aws_s3_key_id')
aws_s3_access_key: Secret(name='aws_s3_access_key')
gcs_uri: None
gcs_key_id: None
gcs_key_secret: None
azure_account_name: None
azure_storage_key: None
azure_container: None
native_parquet_reader: False
stress_bloom: False

Summary

99.9% OK
<1% Known

Statistics

             Units    Skip        OK    Fail   XFail
Modules          1                   1
Suites           8                   8
Features        45       2          42               1
Scenarios      241      33         203               5
Checks       67591               67591
Examples        12                  12
Steps       410980      34      408802      14    2130

Known Fails

Test Name Result Message
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine XFail 23s 292ms
This fails because of a difference in snapshot values: the snapshot captured the datetime value `0` converted to 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the mismatch cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime
    snapshot_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    actual_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
         {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query'

889\|                  with values() as that:
890\|                      snapshot_result = snapshot(
891\|                          "\n" + r.output.strip() + "\n",
892\|                          id=snapshot_id,
893\|                          name=snapshot_name,
894\|                          encoder=str,
895\|                          mode=snapshot.CHECK,
896\|                      )
897\|>                     assert that(snapshot_result), error()
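
The same mismatch is reported for the NONE, GZIP, and LZ4 compression variants of this test. Since it could not be reproduced by repeating the steps manually, the following is a stripped-down manual re-check sketch of the DateTime `0` Parquet round trip only (the actual test goes through a PostgreSQL engine on both sides). It is not the test code: the file name is a placeholder, and the expected rendering of epoch 0 assumes the UTC+1 server timezone implied by the snapshot.

    -- how the server renders the DateTime value 0
    SELECT toDateTime(0), toTypeName(toDateTime(0));

    -- round-trip a zero DateTime through a Parquet file (dt_recheck.parquet is a placeholder)
    INSERT INTO FUNCTION file('dt_recheck.parquet', Parquet) SELECT toDateTime(0) AS datetime;
    SELECT datetime, toTypeName(datetime) FROM file('dt_recheck.parquet', Parquet);
    -- expected: 1970-01-01 01:00:00, not 2106-02-07 06:28:16
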
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine XFail 25s 385ms
This fails because of a difference in snapshot values: the snapshot captured the datetime value `0` converted to 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the mismatch cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime
    snapshot_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    actual_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
         {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query'

889\|                  with values() as that:
890\|                      snapshot_result = snapshot(
891\|                          "\n" + r.output.strip() + "\n",
892\|                          id=snapshot_id,
893\|                          name=snapshot_name,
894\|                          encoder=str,
895\|                          mode=snapshot.CHECK,
896\|                      )
897\|>                     assert that(snapshot_result), error()
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine XFail 26s 630ms
This fails because of a difference in snapshot values: the snapshot captured the datetime value `0` converted to 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the mismatch cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime
    snapshot_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    actual_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
         {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query'

889\|                  with values() as that:
890\|                      snapshot_result = snapshot(
891\|                          "\n" + r.output.strip() + "\n",
892\|                          id=snapshot_id,
893\|                          name=snapshot_name,
894\|                          encoder=str,
895\|                          mode=snapshot.CHECK,
896\|                      )
897\|>                     assert that(snapshot_result), error()
/parquet/chunked array XFail 41s 706ms
Not supported
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query
    assert False, error(r.output)
           ^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert False, error(r.output)

Description
  Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 24.12.2.20221.altinityantalya (altinity build))
(query: INSERT INTO table_b68b07f0_0f57_11f0_9ec0_9600042fe93b FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet
)

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query'

1180\|                  assert message in r.output, error(r.output)
1181\|  
1182\|          if not ignore_exception:
1183\|              if message is None or "Exception:" not in message:
1184\|                  with Then("check if output has exception") if steps else NullStep():
1185\|                      if "Exception:" in r.output:
1186\|                          if raise_on_exception:
1187\|                              raise QueryRuntimeException(r.output)
1188\|>                         assert False, error(r.output)
1189\|  
1190\|          return r
1191\|
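
A stripped-down way to hit the same reader path without recreating the test table is to read the file directly through the file() table function. This is only a sketch: it assumes the same test file is still present under the server's user_files directory, and the exact trigger depends on how the nested column ends up chunked by the Arrow-based reader.

    -- expected to fail with CANNOT_READ_ALL_DATA:
    --   "Nested data conversions not implemented for chunked array outputs"
    SELECT * FROM file('chunked_array_test_file.parquet', Parquet) LIMIT 10;
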
/parquet/datatypes/float16 XFail 1s 850ms
ClickHouse does not import FLOAT16 properly
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16
    assert output == expected, error()
           ^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == expected, error()

Assertion values
  assert output == expected, error()
         ^ is '[-0,0,32,2052,32838,0,0,0,0,0,0]'
  assert output == expected, error()
                   ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]'
  assert output == expected, error()
                ^ is = False
    @@ -1 +1 @@
    -[-0,0,32,2052,32838,0,0,0,0,0,0]
    +[-2,-1,0,1,2,3,4,5,6,7,8,9]
  assert output == expected, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16'

105\|                  ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet)
106\|                  """
107\|              )
108\|  
109\|          with Then("I read the contents of the created table"):
110\|              output = node.query(
111\|                  f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV"
112\|              ).output
113\|>             assert output == expected, error()
114\|  
115\|      finally:
116\|          with Finally("I drop the table"):
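
Reconstructed from the traceback context above, the check boils down to importing the FLOAT16 column into a MergeTree table and rounding it. A sketch follows; float16_test.parquet stands in for the test's import file and the table name is arbitrary.

    CREATE TABLE float16_repro
    ENGINE = MergeTree
    ORDER BY tuple() AS SELECT floatfield FROM file('float16_test.parquet', Parquet);

    SELECT groupArray(round(*)) FROM float16_repro FORMAT TSV;
    -- expected: [-2,-1,0,1,2,3,4,5,6,7,8,9]
    -- currently returns [-0,0,32,2052,32838,0,0,0,0,0,0] (see the assertion values above)
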
/parquet/datatypes/large string map XFail 13s 263ms
Will fail until https://github.com/apache/arrow/pull/35825 gets merged.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map
    import_export(snapshot_name="large_string_map_structure", import_file=import_file)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query
    assert False, error(r.output)
           ^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert False, error(r.output)

Description
  Received exception from server (version 24.12.2):
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA)
(query: CREATE TABLE table_5aef470f_0f58_11f0_8b81_9600042fe93b
            ENGINE = MergeTree
            ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated
            )

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query'

1180\|                  assert message in r.output, error(r.output)
1181\|  
1182\|          if not ignore_exception:
1183\|              if message is None or "Exception:" not in message:
1184\|                  with Then("check if output has exception") if steps else NullStep():
1185\|                      if "Exception:" in r.output:
1186\|                          if raise_on_exception:
1187\|                              raise QueryRuntimeException(r.output)
1188\|>                         assert False, error(r.output)
1189\|  
1190\|          return r
1191\|
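
The failing query is quoted in full above; once a build picks up the linked Arrow fix, re-checking is a matter of reading the same test file again. A sketch, assuming the file is still shipped with the test suite under user_files/arrow:

    -- currently fails with CANNOT_READ_ALL_DATA
    -- ("Nested data conversions not implemented for chunked array outputs");
    -- expected to succeed once apache/arrow#35825 is in the bundled Arrow
    SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100;
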

Results

Test Name Result Duration
/parquet OK 1h 32m
/parquet/file OK 54m 21s
/parquet/file/engine OK 54m 21s
/parquet/file/engine/insert into engine OK 31m 45s
/parquet/file/function OK 24m 43s
/parquet/file/engine/select from engine OK 11m 26s
/parquet/file/function/insert into function manual cast types OK 23m 38s
/parquet/file/function/insert into function auto cast types OK 24m 43s
/parquet/file/engine/engine to file to engine OK 46m 49s
/parquet/file/function/select from function manual cast types OK 12m 45s
/parquet/file/engine/insert into engine from file OK 30m 14s
/parquet/file/function/select from function auto cast types OK 11m 27s
/parquet/file/engine/engine select output to file OK 54m 21s
/parquet/query OK 1h 9m
/parquet/list in multiple chunks OK 11m 6s
/parquet/query/compression type OK 1h 9m
/parquet/url OK 55m 55s
/parquet/query/compression type/=NONE OK 1h 9m
/parquet/query/compression type/=GZIP OK 1h 9m
/parquet/query/compression type/=LZ4 OK 1h 9m
/parquet/query/compression type/=NONE /insert into memory table from file OK 11m 5s
/parquet/query/compression type/=GZIP /insert into memory table from file OK 10m 54s
/parquet/query/compression type/=LZ4 /insert into memory table from file OK 11m 5s
/parquet/url/engine OK 54m 47s
/parquet/url/function OK 26m 6s
/parquet/url/engine/insert into engine OK 32m 44s
/parquet/url/function/insert into function OK 23m 39s
/parquet/url/function/select from function manual cast types OK 26m 6s
/parquet/url/function/select from function auto cast types OK 23m 17s
/parquet/url/engine/select from engine OK 11m 29s
/parquet/url/engine/engine to file to engine OK 47m 3s
/parquet/url/engine/insert into engine from file OK 45m 15s
/parquet/url/engine/engine select output to file OK 54m 46s
/parquet/query/compression type/=GZIP /insert into mergetree table from file OK 10m 51s
/parquet/query/compression type/=NONE /insert into mergetree table from file OK 10m 50s
/parquet/query/compression type/=LZ4 /insert into mergetree table from file OK 10m 56s
/parquet/mysql OK 1m 12s
/parquet/mysql/compression type OK 1m 11s
/parquet/mysql/compression type/=NONE OK 1m 10s
/parquet/mysql/compression type/=LZ4 OK 1m 11s
/parquet/mysql/compression type/=GZIP OK 1m 10s
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine OK 41s 823ms
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine OK 40s 784ms
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine OK 39s 339ms
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function OK 31s 464ms
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function OK 30s 42ms
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function OK 29s 823ms
/parquet/postgresql OK 58s 705ms
/parquet/postgresql/compression type OK 58s 617ms
/parquet/postgresql/compression type/=NONE OK 52s 425ms
/parquet/postgresql/compression type/=GZIP OK 54s 542ms
/parquet/postgresql/compression type/=LZ4 OK 58s 532ms
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine XFail 23s 292ms
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine XFail 25s 385ms
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine XFail 26s 630ms
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function OK 28s 879ms
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function OK 28s 907ms
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function OK 31s 696ms
/parquet/remote OK 29m 0s
/parquet/remote/compression type OK 29m 0s
/parquet/remote/compression type/=NONE OK 28m 56s
/parquet/remote/compression type/=GZIP OK 28m 56s
/parquet/remote/compression type/=LZ4 OK 29m 0s
/parquet/remote/compression type/=LZ4 /outline OK 28m 59s
/parquet/remote/compression type/=GZIP /outline OK 28m 55s
/parquet/remote/compression type/=NONE /outline OK 28m 55s
/parquet/remote/compression type/=LZ4 /outline/insert into function OK 10m 47s
/parquet/remote/compression type/=NONE /outline/insert into function OK 10m 46s
/parquet/remote/compression type/=GZIP /outline/insert into function OK 10m 46s
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file OK 7m 48s
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file OK 7m 46s
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file OK 7m 42s
/parquet/remote/compression type/=NONE /outline/select from function OK 18m 9s
/parquet/remote/compression type/=GZIP /outline/select from function OK 18m 9s
/parquet/remote/compression type/=LZ4 /outline/select from function OK 18m 11s
/parquet/query/compression type/=GZIP /insert into distributed table from file OK 5m 58s
/parquet/query/compression type/=NONE /insert into distributed table from file OK 5m 55s
/parquet/query/compression type/=LZ4 /insert into distributed table from file OK 5m 55s
/parquet/query/compression type/=GZIP /select from memory table into file OK 12m 0s
/parquet/query/compression type/=NONE /select from memory table into file OK 11m 57s
/parquet/query/compression type/=LZ4 /select from memory table into file OK 11m 58s
/parquet/chunked array XFail 41s 706ms
/parquet/broken OK 827ms
/parquet/broken/file Skip 49ms
/parquet/broken/read broken bigint Skip 33ms
/parquet/broken/read broken date Skip 31ms
/parquet/broken/read broken int Skip 44ms
/parquet/broken/read broken smallint Skip 67ms
/parquet/broken/read broken timestamp ms Skip 43ms
/parquet/broken/read broken timestamp us Skip 18ms
/parquet/broken/read broken tinyint Skip 62ms
/parquet/broken/read broken ubigint Skip 39ms
/parquet/broken/read broken uint Skip 35ms
/parquet/broken/read broken usmallint Skip 22ms
/parquet/broken/read broken utinyint Skip 18ms
/parquet/broken/string Skip 25ms
/parquet/encoding OK 29s 508ms
/parquet/encoding/deltabytearray1 OK 5s 152ms
/parquet/encoding/deltabytearray2 OK 3s 982ms
/parquet/encoding/deltalengthbytearray OK 4s 60ms
/parquet/encoding/dictionary OK 3s 810ms
/parquet/encoding/plain OK 4s 275ms
/parquet/encoding/plainrlesnappy OK 4s 870ms
/parquet/encoding/rleboolean OK 3s 217ms
/parquet/compression OK 1m 22s
/parquet/compression/arrow snappy OK 3s 580ms
/parquet/compression/brotli OK 3s 573ms
/parquet/compression/gzippages OK 6s 979ms
/parquet/compression/largegzip OK 3s 601ms
/parquet/compression/lz4 hadoop OK 3s 893ms
/parquet/compression/lz4 hadoop large OK 4s 49ms
/parquet/compression/lz4 non hadoop OK 3s 827ms
/parquet/compression/lz4 raw OK 4s 5ms
/parquet/compression/lz4 raw large OK 3s 305ms
/parquet/compression/lz4pages OK 8s 50ms
/parquet/compression/nonepages OK 8s 370ms
/parquet/compression/snappypages OK 8s 220ms
/parquet/compression/snappyplain OK 3s 542ms
/parquet/compression/snappyrle OK 4s 458ms
/parquet/compression/zstd OK 4s 214ms
/parquet/compression/zstdpages OK 8s 56ms
/parquet/datatypes OK 4m 57s
/parquet/datatypes/arrowtimestamp OK 4s 255ms
/parquet/datatypes/arrowtimestampms OK 4s 471ms
/parquet/datatypes/binary OK 5s 301ms
/parquet/datatypes/binary string OK 5s 337ms
/parquet/datatypes/blob OK 4s 895ms
/parquet/datatypes/boolean OK 5s 439ms
/parquet/datatypes/byte array OK 5s 274ms
/parquet/datatypes/columnname OK 4s 951ms
/parquet/datatypes/columnwithnull OK 5s 744ms
/parquet/datatypes/columnwithnull2 OK 4s 533ms
/parquet/datatypes/date OK 4s 328ms
/parquet/datatypes/decimal with filter OK 5s 323ms
/parquet/datatypes/decimalvariousfilters OK 5s 19ms
/parquet/datatypes/decimalwithfilter2 OK 4s 779ms
/parquet/datatypes/enum OK 4s 567ms
/parquet/datatypes/enum2 OK 4s 175ms
/parquet/datatypes/fixed length decimal OK 3s 756ms
/parquet/datatypes/fixed length decimal legacy OK 3s 642ms
/parquet/datatypes/fixedstring OK 3s 556ms
/parquet/datatypes/float16 XFail 1s 850ms
/parquet/datatypes/h2oai OK 3s 603ms
/parquet/datatypes/hive OK 7s 784ms
/parquet/datatypes/int32 OK 4s 65ms
/parquet/datatypes/int32 decimal OK 3s 831ms
/parquet/datatypes/int64 OK 4s 106ms
/parquet/datatypes/int64 decimal OK 3s 89ms
/parquet/datatypes/json OK 2s 709ms
/parquet/datatypes/large string map XFail 13s 263ms
/parquet/datatypes/largedouble OK 3s 617ms
/parquet/datatypes/manydatatypes OK 2s 102ms
/parquet/datatypes/manydatatypes2 OK 2s 265ms
/parquet/datatypes/maps OK 2s 322ms
/parquet/datatypes/nameswithemoji OK 2s 487ms
/parquet/datatypes/nandouble OK 3s 543ms
/parquet/datatypes/negativeint64 OK 2s 268ms
/parquet/datatypes/nullbyte OK 2s 888ms
/parquet/datatypes/nullbytemultiple OK 2s 241ms
/parquet/datatypes/nullsinid OK 2s 571ms
/parquet/datatypes/pandasdecimal OK 2s 744ms
/parquet/query/compression type/=GZIP /select from mergetree table into file OK 6m 10s
/parquet/datatypes/pandasdecimaldate OK 2s 906ms
/parquet/query/compression type/=NONE /select from mergetree table into file OK 6m 7s
/parquet/datatypes/parquetgo OK 1s 908ms
/parquet/query/compression type/=LZ4 /select from mergetree table into file OK 6m 8s
/parquet/datatypes/selectdatewithfilter OK 1m 12s
/parquet/datatypes/singlenull OK 2s 919ms
/parquet/datatypes/sparkv21 OK 2s 737ms
/parquet/datatypes/sparkv22 OK 2s 286ms
/parquet/datatypes/statdecimal OK 2s 847ms
/parquet/datatypes/string OK 2s 233ms
/parquet/datatypes/string int list inconsistent offset multiple batches OK 19s 996ms
/parquet/datatypes/stringtypes OK 3s 289ms
/parquet/datatypes/struct OK 3s 23ms
/parquet/datatypes/supporteduuid OK 2s 570ms
/parquet/datatypes/timestamp1 OK 2s 512ms
/parquet/datatypes/timestamp2 OK 2s 705ms
/parquet/datatypes/timezone OK 2s 515ms
/parquet/datatypes/unsigned OK 5s 376ms
/parquet/datatypes/unsupportednull OK 950ms
/parquet/complex OK 56s 242ms
/parquet/complex/arraystring OK 3s 74ms
/parquet/complex/big tuple with nulls OK 3s 210ms
/parquet/complex/bytearraydictionary OK 3s 181ms
/parquet/complex/complex null OK 2s 959ms
/parquet/complex/lagemap OK 2s 650ms
/parquet/complex/largenestedarray OK 2s 833ms
/parquet/complex/largestruct OK 2s 561ms
/parquet/complex/largestruct2 OK 3s 122ms
/parquet/complex/largestruct3 OK 2s 379ms
/parquet/complex/list OK 2s 813ms
/parquet/complex/nested array OK 2s 659ms
/parquet/complex/nested map OK 2s 519ms
/parquet/complex/nestedallcomplex OK 2s 983ms
/parquet/complex/nestedarray2 OK 2s 708ms
/parquet/complex/nestedstruct OK 2s 949ms
/parquet/complex/nestedstruct2 OK 3s 161ms
/parquet/complex/nestedstruct3 OK 2s 517ms
/parquet/complex/nestedstruct4 OK 3s 186ms
/parquet/complex/tupleofnulls OK 2s 505ms
/parquet/complex/tuplewithdatetime OK 2s 124ms
/parquet/cache OK 5s 190ms
/parquet/cache/cache1 OK 2s 427ms
/parquet/cache/cache2 OK 2s 755ms
/parquet/glob OK 1m 26s
/parquet/glob/fastparquet globs OK 24s 255ms
/parquet/glob/glob1 OK 3s 430ms
/parquet/glob/glob2 OK 4s 338ms
/parquet/glob/glob with multiple elements OK 830ms
/parquet/glob/million extensions OK 54s 44ms
/parquet/rowgroups OK 5s 599ms
/parquet/rowgroups/manyrowgroups OK 2s 843ms
/parquet/rowgroups/manyrowgroups2 OK 2s 730ms
/parquet/encrypted Skip 6ms
/parquet/fastparquet OK 132ms
/parquet/fastparquet/airlines Skip 7ms
/parquet/fastparquet/baz Skip 6ms
/parquet/fastparquet/empty date Skip 8ms
/parquet/fastparquet/evo Skip 13ms
/parquet/fastparquet/fastparquet Skip 11ms
/parquet/read and write OK 23m 48s
/parquet/read and write/read and write parquet file OK 23m 48s
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file OK 4m 48s
/parquet/query/compression type/=NONE /select from replicated mergetree table into file OK 4m 47s
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file OK 4m 47s
/parquet/column related errors OK 2s 733ms
/parquet/column related errors/check error with 500 columns OK 2s 718ms
/parquet/multi chunk upload Skip 9ms
/parquet/query/compression type/=NONE /select from distributed table into file OK 5m 32s
/parquet/query/compression type/=GZIP /select from distributed table into file OK 5m 29s
/parquet/query/compression type/=LZ4 /select from distributed table into file OK 5m 27s
/parquet/query/compression type/=LZ4 /select from mat view into file OK 4m 19s
/parquet/query/compression type/=GZIP /select from mat view into file OK 4m 16s
/parquet/query/compression type/=NONE /select from mat view into file OK 4m 17s
/parquet/query/compression type/=GZIP /insert into table with projection from file OK 1m 31s
/parquet/query/compression type/=NONE /insert into table with projection from file OK 1m 31s
/parquet/query/compression type/=LZ4 /insert into table with projection from file OK 1m 31s
/parquet/minio OK 9m 45s
/parquet/minio/s3 OK 9m 45s
/parquet/minio/s3/compression type OK 9m 45s
/parquet/minio/s3/compression type/=NONE OK 9m 45s
/parquet/minio/s3/compression type/=NONE /outline OK 9m 45s
/parquet/minio/s3/compression type/=NONE /outline/engine OK 47ms
/parquet/minio/s3/compression type/=NONE /outline/engine/insert into engine Skip 3ms
/parquet/minio/s3/compression type/=GZIP OK 9m 44s
/parquet/minio/s3/compression type/=GZIP /outline OK 9m 44s
/parquet/minio/s3/compression type/=GZIP /outline/engine OK 95ms
/parquet/minio/s3/compression type/=NONE /outline/engine/select from engine Skip 2ms
/parquet/minio/s3/compression type/=NONE /outline/engine/engine to file to engine Skip 1ms
/parquet/minio/s3/compression type/=GZIP /outline/engine/insert into engine Skip 2ms
/parquet/minio/s3/compression type/=NONE /outline/engine/insert into engine from file Skip 2ms
/parquet/minio/s3/compression type/=NONE /outline/engine/engine select output to file Skip 2ms
/parquet/minio/s3/compression type/=GZIP /outline/engine/select from engine Skip 2ms
/parquet/minio/s3/compression type/=GZIP /outline/engine/engine to file to engine Skip 2ms
/parquet/minio/s3/compression type/=LZ4 OK 9m 44s
/parquet/minio/s3/compression type/=LZ4 /outline OK 9m 44s
/parquet/minio/s3/compression type/=LZ4 /outline/engine OK 273ms
/parquet/minio/s3/compression type/=LZ4 /outline/engine/insert into engine Skip 2ms
/parquet/minio/s3/compression type/=NONE /outline/function OK 9m 45s
/parquet/minio/s3/compression type/=GZIP /outline/engine/insert into engine from file Skip 2ms
/parquet/minio/s3/compression type/=GZIP /outline/engine/engine select output to file Skip 3ms
/parquet/minio/s3/compression type/=LZ4 /outline/engine/select from engine Skip 2ms
/parquet/minio/s3/compression type/=LZ4 /outline/engine/engine to file to engine Skip 1ms
/parquet/minio/s3/compression type/=NONE /outline/function/insert into function OK 9m 45s
/parquet/minio/s3/compression type/=LZ4 /outline/engine/insert into engine from file Skip 2ms
/parquet/minio/s3/compression type/=LZ4 /outline/engine/engine select output to file Skip 52ms
/parquet/minio/s3/compression type/=NONE /outline/function/select from function manual cast types OK 9m 4s
/parquet/minio/s3/compression type/=GZIP /outline/function OK 9m 44s
/parquet/minio/s3/compression type/=NONE /outline/function/select from function auto cast types OK 8m 23s
/parquet/minio/s3/compression type/=GZIP /outline/function/insert into function OK 9m 44s
/parquet/minio/s3/compression type/=GZIP /outline/function/select from function manual cast types OK 9m 5s
/parquet/minio/s3/compression type/=GZIP /outline/function/select from function auto cast types OK 8m 25s
/parquet/minio/s3/compression type/=LZ4 /outline/function OK 9m 44s
/parquet/minio/s3/compression type/=LZ4 /outline/function/insert into function OK 9m 44s
/parquet/minio/s3/compression type/=LZ4 /outline/function/select from function manual cast types OK 9m 5s
/parquet/minio/s3/compression type/=LZ4 /outline/function/select from function auto cast types OK 8m 22s

Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922