Copyright 2025, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Date | Jul 28, 2025 10:06 |
Duration | 38m 12s |
Framework | TestFlows 2.0.250110.1002922 |
Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#0/d32d0074004db61e346611c777e26532a456fe2f/regression/x86_64/with_analyzer/zookeeper/without_thread_fuzzer/parquet/
project | Altinity/ClickHouse |
project.id | 159717931 |
package | https://s3.amazonaws.com/altinity-build-artifacts/25.3/d32d0074004db61e346611c777e26532a456fe2f/package_release/clickhouse-common-static_25.3.6.10034.altinitystable_amd64.deb |
version | 25.3.6.10034.altinitystable |
user.name | zvonand |
repository | https://github.com/Altinity/clickhouse-regression |
commit.hash | 5723e20cbc49b347114c7b90c7316a44dafa5328 |
job.name | Parquet |
job.retry | 1 |
job.url | https://github.com/Altinity/ClickHouse/actions/runs/16564498156 |
arch | x86_64 |
local | True |
clickhouse_version | None |
clickhouse_path | https://s3.amazonaws.com/altinity-build-artifacts/25.3/d32d0074004db61e346611c777e26532a456fe2f/package_release/clickhouse-common-static_25.3.6.10034.altinitystable_amd64.deb |
as_binary | False |
base_os | None |
keeper_path | None |
zookeeper_version | None |
use_keeper | False |
stress | False |
collect_service_logs | True |
thread_fuzzer | False |
with_analyzer | True |
reuse_env | False |
storages | None |
minio_uri | Secret(name='minio_uri') |
minio_root_user | Secret(name='minio_root_user') |
minio_root_password | Secret(name='minio_root_password') |
aws_s3_bucket | None |
aws_s3_region | Secret(name='aws_s3_region') |
aws_s3_key_id | Secret(name='aws_s3_key_id') |
aws_s3_access_key | Secret(name='aws_s3_access_key') |
gcs_uri | None |
gcs_key_id | None |
gcs_key_secret | None |
azure_account_name | None |
azure_storage_key | None |
azure_container | None |
native_parquet_reader | False |
stress_bloom | False |
Units | Skip | OK | Fail | XFail
---|---|---|---|---
Modules | | | |
Suites | | | |
Features | | | |
Scenarios | | | |
Checks | | | |
Examples | | | |
Steps | | | |
Test Name | Result | Message |
---|---|---|
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail 15s 8ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 828, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 898, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 898 in 'execute_query' 890\| with values() as that: 891\| snapshot_result = snapshot( 892\| "\n" + r.output.strip() + "\n", 893\| id=snapshot_id, 894\| name=snapshot_name, 895\| encoder=str, 896\| mode=snapshot.CHECK, 897\| ) 898\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail 15s 302ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 828, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 898, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 898 in 'execute_query' 890\| with values() as that: 891\| snapshot_result = snapshot( 892\| "\n" + r.output.strip() + "\n", 893\| id=snapshot_id, 894\| name=snapshot_name, 895\| encoder=str, 896\| mode=snapshot.CHECK, 897\| ) 898\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail 14s 999ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 828, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 898, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 898 in 'execute_query' 890\| with values() as that: 891\| snapshot_result = snapshot( 892\| "\n" + r.output.strip() + "\n", 893\| id=snapshot_id, 894\| name=snapshot_name, 895\| encoder=str, 896\| mode=snapshot.CHECK, 897\| ) 898\|> assert that(snapshot_result), error() |
/parquet/chunked array | XFail 17s 967ms Not supported | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1195, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 25.3.6.10034.altinitystable (altinity build)) (query: INSERT INTO table_537d7561_6b9d_11f0_86a7_920006481f25 FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1195 in 'query' 1187\| assert message in r.output, error(r.output) 1188\| 1189\| if not ignore_exception: 1190\| if message is None or "Exception:" not in message: 1191\| with Then("check if output has exception") if steps else NullStep(): 1192\| if "Exception:" in r.output: 1193\| if raise_on_exception: 1194\| raise QueryRuntimeException(r.output) 1195\|> assert False, error(r.output) 1196\| 1197\| return r 1198\| |
/parquet/datatypes/float16 | XFail 512ms ClickHouse does not import FLOAT16 properly | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16 assert output == expected, error() ^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert output == expected, error() Assertion values assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8]' assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]' assert output == expected, error() ^ is = False @@ -1 +1 @@ -[-2,-1,0,1,2,3,4,5,6,7,8] +[-2,-1,0,1,2,3,4,5,6,7,8,9] assert output == expected, error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16' 105\| ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet) 106\| """ 107\| ) 108\| 109\| with Then("I read the contents of the created table"): 110\| output = node.query( 111\| f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV" 112\| ).output 113\|> assert output == expected, error() 114\| 115\| finally: 116\| with Finally("I drop the table"): |
/parquet/datatypes/large string map | XFail 7s 621ms Will fail until the, https://github.com/apache/arrow/pull/35825, gets merged. | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map import_export(snapshot_name="large_string_map_structure", import_file=import_file) File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1195, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Received exception from server (version 25.3.6): Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA) (query: CREATE TABLE table_94eb1c9a_6b9d_11f0_a3c0_920006481f25 ENGINE = MergeTree ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1195 in 'query' 1187\| assert message in r.output, error(r.output) 1188\| 1189\| if not ignore_exception: 1190\| if message is None or "Exception:" not in message: 1191\| with Then("check if output has exception") if steps else NullStep(): 1192\| if "Exception:" in r.output: 1193\| if raise_on_exception: 1194\| raise QueryRuntimeException(r.output) 1195\|> assert False, error(r.output) 1196\| 1197\| return r 1198\| |
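The three PostgreSQL XFails above all trip on the same stale snapshot line: the stored snapshot expects the row rendered as `2106-02-07 06:28:16`, while the current run renders it as the epoch `1970-01-01 01:00:00`. A minimal sketch of the values involved and of the shape of the round trip the snapshot verifies, assuming a UTC+1 server timezone (consistent with the `1970-01-01 01:00:00` rendering) and using placeholder table/file names rather than the generated ones from this run:

```sql
-- DateTime is stored as a UInt32 Unix timestamp and rendered in the server timezone,
-- so 0 displays as the epoch (01:00:00 on a UTC+1 server), while values at the top of
-- the UInt32 range display in the 2106-02-07 range that the stale snapshot recorded.
SELECT toDateTime(0), toDateTime(4294967295);

-- Illustrative shape of the PostgreSQL engine -> Parquet file -> PostgreSQL engine
-- round trip; table and file names are placeholders, not the test's generated names.
INSERT INTO FUNCTION file('datetime_roundtrip.parquet', Parquet)
SELECT datetime FROM postgresql_source_table;

SELECT datetime, toTypeName(datetime)
FROM file('datetime_roundtrip.parquet', Parquet)
FORMAT JSONEachRow;
```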
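The chunked array and large string map XFails both hit the same Arrow limitation ("Nested data conversions not implemented for chunked array outputs"). The large string map case can be replayed directly from the query captured in the trace; only the target table name below is illustrative:

```sql
-- Reproduces /parquet/datatypes/large string map; expected to fail with
-- Code: 33 (CANNOT_READ_ALL_DATA) until https://github.com/apache/arrow/pull/35825 is merged.
CREATE TABLE large_string_map_repro
ENGINE = MergeTree ORDER BY tuple()
AS SELECT *
FROM file('arrow/large_string_map.brotli.parquet', Parquet)
LIMIT 100;

-- The chunked array case takes the INFILE path instead, with the test file placed
-- under user_files (target table name is illustrative):
-- INSERT INTO chunked_array_target
-- FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet'
-- FORMAT Parquet;
```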
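The float16 XFail is a plain value comparison: after rounding, the table imported from the Parquet file is missing the final value (9). A sketch of the check, with placeholder file and table names standing in for the test's import file:

```sql
-- Sketch of /parquet/datatypes/float16 (file and table names are placeholders).
CREATE TABLE float16_repro
ENGINE = MergeTree ORDER BY tuple()
AS SELECT floatfield FROM file('float16.parquet', Parquet);

SELECT groupArray(round(*)) FROM float16_repro FORMAT TSV;
-- Expected: [-2,-1,0,1,2,3,4,5,6,7,8,9]
-- Actual:   [-2,-1,0,1,2,3,4,5,6,7,8]
```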
Test Name | Result | Duration |
---|---|---|
/parquet | OK | 38m 12s |
/parquet/file | OK | 22m 32s |
/parquet/query | OK | 30m 16s |
/parquet/list in multiple chunks | OK | 21s 961ms |
/parquet/url | OK | 23m 33s |
/parquet/query/compression type | OK | 30m 16s |
/parquet/file/engine | OK | 22m 32s |
/parquet/file/function | OK | 10m 55s |
/parquet/query/compression type/=NONE | OK | 30m 16s |
/parquet/query/compression type/=GZIP | OK | 30m 16s |
/parquet/query/compression type/=LZ4 | OK | 30m 14s |
/parquet/file/engine/insert into engine | OK | 13m 51s |
/parquet/file/function/insert into function manual cast types | OK | 10m 37s |
/parquet/file/engine/select from engine | OK | 6m 16s |
/parquet/query/compression type/=NONE /insert into memory table from file | OK | 6m 5s |
/parquet/file/function/insert into function auto cast types | OK | 10m 55s |
/parquet/file/function/select from function manual cast types | OK | 6m 45s |
/parquet/file/engine/engine to file to engine | OK | 19m 12s |
/parquet/query/compression type/=GZIP /insert into memory table from file | OK | 6m 7s |
/parquet/file/engine/insert into engine from file | OK | 13m 26s |
/parquet/file/function/select from function auto cast types | OK | 6m 12s |
/parquet/file/engine/engine select output to file | OK | 22m 32s |
/parquet/query/compression type/=LZ4 /insert into memory table from file | OK | 6m 5s |
/parquet/url/engine | OK | 22m 52s |
/parquet/url/function | OK | 11m 27s |
/parquet/url/engine/insert into engine | OK | 14m 9s |
/parquet/url/function/insert into function | OK | 10m 35s |
/parquet/url/engine/select from engine | OK | 6m 13s |
/parquet/url/engine/engine to file to engine | OK | 19m 23s |
/parquet/url/engine/insert into engine from file | OK | 18m 33s |
/parquet/url/function/select from function manual cast types | OK | 11m 27s |
/parquet/url/engine/engine select output to file | OK | 22m 52s |
/parquet/url/function/select from function auto cast types | OK | 10m 21s |
/parquet/mysql | OK | 37s 901ms |
/parquet/mysql/compression type | OK | 37s 840ms |
/parquet/mysql/compression type/=NONE | OK | 35s 900ms |
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine | OK | 21s 127ms |
/parquet/mysql/compression type/=GZIP | OK | 36s 801ms |
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine | OK | 21s 341ms |
/parquet/mysql/compression type/=LZ4 | OK | 37s 808ms |
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine | OK | 22s 275ms |
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function | OK | 14s 725ms |
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function | OK | 15s 418ms |
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function | OK | 15s 482ms |
/parquet/postgresql | OK | 31s 958ms |
/parquet/postgresql/compression type | OK | 31s 906ms |
/parquet/postgresql/compression type/=NONE | OK | 31s 306ms |
/parquet/postgresql/compression type/=GZIP | OK | 31s 802ms |
/parquet/postgresql/compression type/=LZ4 | OK | 31s 576ms |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail | 15s 8ms |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail | 15s 302ms |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail | 14s 999ms |
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function | OK | 16s 193ms |
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function | OK | 16s 668ms |
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function | OK | 16s 185ms |
/parquet/remote | OK | 14m 53s |
/parquet/remote/compression type | OK | 14m 53s |
/parquet/remote/compression type/=NONE | OK | 14m 52s |
/parquet/remote/compression type/=GZIP | OK | 14m 53s |
/parquet/remote/compression type/=LZ4 | OK | 14m 52s |
/parquet/remote/compression type/=NONE /outline | OK | 14m 52s |
/parquet/remote/compression type/=GZIP /outline | OK | 14m 53s |
/parquet/remote/compression type/=LZ4 /outline | OK | 14m 52s |
/parquet/remote/compression type/=NONE /outline/insert into function | OK | 5m 44s |
/parquet/remote/compression type/=GZIP /outline/insert into function | OK | 5m 46s |
/parquet/remote/compression type/=LZ4 /outline/insert into function | OK | 5m 47s |
/parquet/query/compression type/=NONE /insert into mergetree table from file | OK | 4m 8s |
/parquet/query/compression type/=LZ4 /insert into mergetree table from file | OK | 4m 9s |
/parquet/query/compression type/=GZIP /insert into mergetree table from file | OK | 4m 10s |
/parquet/remote/compression type/=NONE /outline/select from function | OK | 9m 7s |
/parquet/remote/compression type/=GZIP /outline/select from function | OK | 9m 6s |
/parquet/remote/compression type/=LZ4 /outline/select from function | OK | 9m 5s |
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file | OK | 3m 1s |
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file | OK | 3m 1s |
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file | OK | 3m 2s |
/parquet/query/compression type/=NONE /insert into distributed table from file | OK | 2m 24s |
/parquet/query/compression type/=LZ4 /insert into distributed table from file | OK | 2m 24s |
/parquet/query/compression type/=GZIP /insert into distributed table from file | OK | 2m 21s |
/parquet/query/compression type/=LZ4 /select from memory table into file | OK | 4m 5s |
/parquet/query/compression type/=NONE /select from memory table into file | OK | 4m 5s |
/parquet/query/compression type/=GZIP /select from memory table into file | OK | 4m 4s |
/parquet/chunked array | XFail | 17s 967ms |
/parquet/broken | OK | 381ms |
/parquet/broken/file | Skip | 12ms |
/parquet/broken/read broken bigint | Skip | 15ms |
/parquet/broken/read broken date | Skip | 10ms |
/parquet/broken/read broken int | Skip | 11ms |
/parquet/broken/read broken smallint | Skip | 58ms |
/parquet/broken/read broken timestamp ms | Skip | 10ms |
/parquet/broken/read broken timestamp us | Skip | 11ms |
/parquet/broken/read broken tinyint | Skip | 11ms |
/parquet/broken/read broken ubigint | Skip | 19ms |
/parquet/broken/read broken uint | Skip | 18ms |
/parquet/broken/read broken usmallint | Skip | 16ms |
/parquet/broken/read broken utinyint | Skip | 15ms |
/parquet/broken/string | Skip | 49ms |
/parquet/encoding | OK | 12s 932ms |
/parquet/encoding/deltabytearray1 | OK | 2s 217ms |
/parquet/encoding/deltabytearray2 | OK | 1s 682ms |
/parquet/encoding/deltalengthbytearray | OK | 1s 752ms |
/parquet/encoding/dictionary | OK | 1s 585ms |
/parquet/encoding/plain | OK | 1s 754ms |
/parquet/encoding/plainrlesnappy | OK | 2s 230ms |
/parquet/encoding/rleboolean | OK | 1s 649ms |
/parquet/compression | OK | 33s 634ms |
/parquet/compression/arrow snappy | OK | 1s 503ms |
/parquet/compression/brotli | OK | 1s 680ms |
/parquet/compression/gzippages | OK | 3s 219ms |
/parquet/compression/largegzip | OK | 1s 742ms |
/parquet/compression/lz4 hadoop | OK | 1s 781ms |
/parquet/compression/lz4 hadoop large | OK | 1s 525ms |
/parquet/compression/lz4 non hadoop | OK | 1s 561ms |
/parquet/compression/lz4 raw | OK | 1s 512ms |
/parquet/compression/lz4 raw large | OK | 1s 569ms |
/parquet/compression/lz4pages | OK | 3s 32ms |
/parquet/compression/nonepages | OK | 3s 251ms |
/parquet/compression/snappypages | OK | 3s 162ms |
/parquet/compression/snappyplain | OK | 1s 502ms |
/parquet/compression/snappyrle | OK | 1s 763ms |
/parquet/compression/zstd | OK | 1s 518ms |
/parquet/compression/zstdpages | OK | 3s 225ms |
/parquet/datatypes | OK | 2m 16s |
/parquet/datatypes/arrowtimestamp | OK | 1s 409ms |
/parquet/datatypes/arrowtimestampms | OK | 1s 509ms |
/parquet/datatypes/binary | OK | 1s 508ms |
/parquet/datatypes/binary string | OK | 1s 689ms |
/parquet/datatypes/blob | OK | 1s 592ms |
/parquet/datatypes/boolean | OK | 1s 647ms |
/parquet/datatypes/byte array | OK | 1s 454ms |
/parquet/datatypes/columnname | OK | 1s 616ms |
/parquet/datatypes/columnwithnull | OK | 1s 648ms |
/parquet/datatypes/columnwithnull2 | OK | 1s 636ms |
/parquet/datatypes/date | OK | 1s 397ms |
/parquet/datatypes/decimal with filter | OK | 1s 913ms |
/parquet/datatypes/decimalvariousfilters | OK | 1s 404ms |
/parquet/datatypes/decimalwithfilter2 | OK | 1s 754ms |
/parquet/datatypes/enum | OK | 1s 896ms |
/parquet/datatypes/enum2 | OK | 1s 840ms |
/parquet/datatypes/fixed length decimal | OK | 1s 473ms |
/parquet/datatypes/fixed length decimal legacy | OK | 1s 528ms |
/parquet/datatypes/fixedstring | OK | 1s 756ms |
/parquet/datatypes/float16 | XFail | 512ms |
/parquet/datatypes/h2oai | OK | 1s 865ms |
/parquet/datatypes/hive | OK | 3s 80ms |
/parquet/datatypes/int32 | OK | 1s 732ms |
/parquet/datatypes/int32 decimal | OK | 1s 567ms |
/parquet/datatypes/int64 | OK | 1s 799ms |
/parquet/datatypes/int64 decimal | OK | 1s 467ms |
/parquet/datatypes/json | OK | 1s 844ms |
/parquet/datatypes/large string map | XFail | 7s 621ms |
/parquet/datatypes/largedouble | OK | 1s 877ms |
/parquet/datatypes/manydatatypes | OK | 1s 488ms |
/parquet/datatypes/manydatatypes2 | OK | 2s 271ms |
/parquet/datatypes/maps | OK | 1s 566ms |
/parquet/datatypes/nameswithemoji | OK | 1s 689ms |
/parquet/datatypes/nandouble | OK | 1s 600ms |
/parquet/datatypes/negativeint64 | OK | 2s 578ms |
/parquet/datatypes/nullbyte | OK | 1s 585ms |
/parquet/datatypes/nullbytemultiple | OK | 1s 688ms |
/parquet/datatypes/nullsinid | OK | 1s 472ms |
/parquet/datatypes/pandasdecimal | OK | 1s 523ms |
/parquet/datatypes/pandasdecimaldate | OK | 1s 737ms |
/parquet/datatypes/parquetgo | OK | 1s 517ms |
/parquet/datatypes/selectdatewithfilter | OK | 37s 915ms |
/parquet/datatypes/singlenull | OK | 1s 377ms |
/parquet/datatypes/sparkv21 | OK | 1s 301ms |
/parquet/datatypes/sparkv22 | OK | 1s 763ms |
/parquet/datatypes/statdecimal | OK | 1s 73ms |
/parquet/datatypes/string | OK | 1s 911ms |
/parquet/datatypes/string int list inconsistent offset multiple batches | OK | 6s 642ms |
/parquet/datatypes/stringtypes | OK | 951ms |
/parquet/datatypes/struct | OK | 1s 47ms |
/parquet/datatypes/supporteduuid | OK | 1s 39ms |
/parquet/datatypes/timestamp1 | OK | 851ms |
/parquet/datatypes/timestamp2 | OK | 964ms |
/parquet/datatypes/timezone | OK | 1s 624ms |
/parquet/datatypes/unsigned | OK | 2s 678ms |
/parquet/query/compression type/=NONE /select from mergetree table into file | OK | 2m 38s |
/parquet/query/compression type/=LZ4 /select from mergetree table into file | OK | 2m 38s |
/parquet/query/compression type/=GZIP /select from mergetree table into file | OK | 2m 39s |
/parquet/datatypes/unsupportednull | OK | 153ms |
/parquet/complex | OK | 24s 154ms |
/parquet/complex/arraystring | OK | 844ms |
/parquet/complex/big tuple with nulls | OK | 842ms |
/parquet/complex/bytearraydictionary | OK | 986ms |
/parquet/complex/complex null | OK | 910ms |
/parquet/complex/lagemap | OK | 2s 70ms |
/parquet/complex/largenestedarray | OK | 2s 118ms |
/parquet/complex/largestruct | OK | 1s 81ms |
/parquet/complex/largestruct2 | OK | 1s 328ms |
/parquet/complex/largestruct3 | OK | 1s 173ms |
/parquet/complex/list | OK | 1s 50ms |
/parquet/complex/nested array | OK | 1s 103ms |
/parquet/complex/nested map | OK | 1s 132ms |
/parquet/complex/nestedallcomplex | OK | 1s 205ms |
/parquet/complex/nestedarray2 | OK | 1s 126ms |
/parquet/complex/nestedstruct | OK | 1s 318ms |
/parquet/complex/nestedstruct2 | OK | 1s 75ms |
/parquet/complex/nestedstruct3 | OK | 1s 115ms |
/parquet/complex/nestedstruct4 | OK | 1s 432ms |
/parquet/complex/tupleofnulls | OK | 1s 100ms |
/parquet/complex/tuplewithdatetime | OK | 1s 91ms |
/parquet/cache | OK | 2s 201ms |
/parquet/cache/cache1 | OK | 1s 102ms |
/parquet/cache/cache2 | OK | 1s 85ms |
/parquet/glob | OK | 26s 315ms |
/parquet/glob/fastparquet globs | OK | 2s 447ms |
/parquet/glob/glob1 | OK | 1s 680ms |
/parquet/glob/glob2 | OK | 1s 926ms |
/parquet/glob/glob with multiple elements | OK | 440ms |
/parquet/glob/million extensions | OK | 19s 804ms |
/parquet/rowgroups | OK | 2s 212ms |
/parquet/rowgroups/manyrowgroups | OK | 1s 79ms |
/parquet/rowgroups/manyrowgroups2 | OK | 1s 124ms |
/parquet/encrypted | Skip | 7ms |
/parquet/fastparquet | OK | 40ms |
/parquet/fastparquet/airlines | Skip | 3ms |
/parquet/fastparquet/baz | Skip | 4ms |
/parquet/fastparquet/empty date | Skip | 3ms |
/parquet/fastparquet/evo | Skip | 3ms |
/parquet/fastparquet/fastparquet | Skip | 4ms |
/parquet/read and write | OK | 12m 54s |
/parquet/read and write/read and write parquet file | OK | 12m 54s |
/parquet/query/compression type/=NONE /select from replicated mergetree table into file | OK | 2m 17s |
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file | OK | 2m 15s |
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file | OK | 2m 17s |
/parquet/column related errors | OK | 1s 550ms |
/parquet/column related errors/check error with 500 columns | OK | 1s 544ms |
/parquet/multi chunk upload | Skip | 11ms |
/parquet/query/compression type/=LZ4 /select from distributed table into file | OK | 2m 34s |
/parquet/query/compression type/=NONE /select from distributed table into file | OK | 2m 34s |
/parquet/query/compression type/=GZIP /select from distributed table into file | OK | 2m 33s |
/parquet/query/compression type/=LZ4 /select from mat view into file | OK | 2m 10s |
/parquet/query/compression type/=GZIP /select from mat view into file | OK | 2m 12s |
/parquet/query/compression type/=NONE /select from mat view into file | OK | 2m 12s |
/parquet/query/compression type/=LZ4 /insert into table with projection from file | OK | 48s 987ms |
/parquet/query/compression type/=NONE /insert into table with projection from file | OK | 47s 99ms |
/parquet/query/compression type/=GZIP /insert into table with projection from file | OK | 47s 99ms |
Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922