Copyright 2025, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Date | Apr 04, 2025 13:29 |
Duration | 1h 5m |
Framework | TestFlows 2.0.250110.1002922 |
Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#712/e61e67ac3bb5ac74ba8cf868764defd70e08f8d8/regression/x86_64/with_analyzer/zookeeper/without_thread_fuzzer/parquet/
project | Altinity/ClickHouse |
project.id | 159717931 |
package | https://s3.amazonaws.com/altinity-build-artifacts/PRs/712/e61e67ac3bb5ac74ba8cf868764defd70e08f8d8/package_release/clickhouse-common-static_24.12.2.20230.altinityantalya_amd64.deb |
version | 24.12.2.20230.altinityantalya |
user.name | ianton-ru |
repository | https://github.com/Altinity/clickhouse-regression |
commit.hash | bd31e738c0cedaca253d15a05ed245c41b6e0b6a |
job.name | Parquet |
job.retry | 1 |
job.url | https://github.com/Altinity/ClickHouse/actions/runs/14264141153 |
arch | x86_64 |
local | True |
clickhouse_version | None |
clickhouse_path | https://s3.amazonaws.com/altinity-build-artifacts/PRs/712/e61e67ac3bb5ac74ba8cf868764defd70e08f8d8/package_release/clickhouse-common-static_24.12.2.20230.altinityantalya_amd64.deb |
as_binary | False |
base_os | None |
keeper_path | None |
zookeeper_version | None |
use_keeper | False |
stress | False |
collect_service_logs | True |
thread_fuzzer | False |
with_analyzer | True |
reuse_env | False |
storages | None |
minio_uri | Secret(name='minio_uri') |
minio_root_user | Secret(name='minio_root_user') |
minio_root_password | Secret(name='minio_root_password') |
aws_s3_bucket | None |
aws_s3_region | Secret(name='aws_s3_region') |
aws_s3_key_id | Secret(name='aws_s3_key_id') |
aws_s3_access_key | Secret(name='aws_s3_access_key') |
gcs_uri | None |
gcs_key_id | None |
gcs_key_secret | None |
azure_account_name | None |
azure_storage_key | None |
azure_container | None |
native_parquet_reader | False |
stress_bloom | False |
Units | Skip | OK | Fail | XFail
---|---|---|---|---
Modules | | | |
Suites | | | |
Features | | | |
Scenarios | | | |
Checks | | | |
Examples | | | |
Steps | | | |
Test Name | Result | Message |
---|---|---|
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail 25s 146ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail 24s 175ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail 24s 556ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/chunked array | XFail 31s 686ms Not supported | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 24.12.2.20230.altinityantalya (altinity build)) (query: INSERT INTO table_cc72ded4_115d_11f0_aa0c_96000431c82d FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
/parquet/datatypes/float16 | XFail 918ms ClickHouse does not import FLOAT16 properly | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16 assert output == expected, error() ^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert output == expected, error() Assertion values assert output == expected, error() ^ is '[-0,0,32,2052,32838,0,0,0,0,0,0]' assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]' assert output == expected, error() ^ is = False @@ -1 +1 @@ -[-0,0,32,2052,32838,0,0,0,0,0,0] +[-2,-1,0,1,2,3,4,5,6,7,8,9] assert output == expected, error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16' 105\| ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet) 106\| """ 107\| ) 108\| 109\| with Then("I read the contents of the created table"): 110\| output = node.query( 111\| f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV" 112\| ).output 113\|> assert output == expected, error() 114\| 115\| finally: 116\| with Finally("I drop the table"): |
/parquet/datatypes/large string map | XFail 11s 29ms Will fail until the, https://github.com/apache/arrow/pull/35825, gets merged. | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map import_export(snapshot_name="large_string_map_structure", import_file=import_file) File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Received exception from server (version 24.12.2): Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA) (query: CREATE TABLE table_4350a1f1_115e_11f0_8eda_96000431c82d ENGINE = MergeTree ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
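For reference, the two NotImplemented failures above can be replayed directly from the queries captured in their tracebacks. Below is a minimal reproduction sketch, assuming the test Parquet files are already present under /var/lib/clickhouse/user_files/ and that target tables with matching schemas exist; the table names are placeholders, not the generated names used by the suite:

```sql
-- /parquet/chunked array: fails with "Nested data conversions not implemented
-- for chunked array outputs" while parsing the INFILE data.
INSERT INTO chunked_array_repro
FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet'
FORMAT Parquet;

-- /parquet/datatypes/large string map: hits the same NotImplemented error via
-- the file() table function; expected to fail until apache/arrow#35825 lands.
CREATE TABLE large_string_map_repro
ENGINE = MergeTree
ORDER BY tuple()
AS SELECT *
FROM file('arrow/large_string_map.brotli.parquet', Parquet)
LIMIT 100;
```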
Test Name | Result | Duration |
---|---|---|
/parquet | OK | 1h 5m |
/parquet/file | OK | 41m 30s |
/parquet/file/engine | OK | 41m 30s |
/parquet/file/function | OK | 19m 43s |
/parquet/file/engine/insert into engine | OK | 25m 19s |
/parquet/query | OK | 54m 59s |
/parquet/query/compression type | OK | 54m 59s |
/parquet/file/engine/select from engine | OK | 11m 15s |
/parquet/file/function/insert into function manual cast types | OK | 19m 2s |
/parquet/query/compression type/=NONE | OK | 54m 59s |
/parquet/query/compression type/=NONE /insert into memory table from file | OK | 10m 58s |
/parquet/file/engine/engine to file to engine | OK | 35m 33s |
/parquet/file/function/insert into function auto cast types | OK | 19m 43s |
/parquet/file/engine/insert into engine from file | OK | 24m 34s |
/parquet/query/compression type/=GZIP | OK | 54m 58s |
/parquet/file/engine/engine select output to file | OK | 41m 30s |
/parquet/query/compression type/=LZ4 | OK | 54m 58s |
/parquet/file/function/select from function manual cast types | OK | 12m 10s |
/parquet/file/function/select from function auto cast types | OK | 11m 17s |
/parquet/list in multiple chunks | OK | 56s 351ms |
/parquet/url | OK | 42m 43s |
/parquet/query/compression type/=GZIP /insert into memory table from file | OK | 10m 56s |
/parquet/query/compression type/=LZ4 /insert into memory table from file | OK | 10m 52s |
/parquet/url/engine | OK | 41m 56s |
/parquet/url/function | OK | 20m 52s |
/parquet/url/engine/insert into engine | OK | 25m 58s |
/parquet/url/engine/select from engine | OK | 11m 20s |
/parquet/url/engine/engine to file to engine | OK | 35m 50s |
/parquet/url/function/insert into function | OK | 19m 1s |
/parquet/url/function/select from function manual cast types | OK | 20m 52s |
/parquet/url/engine/insert into engine from file | OK | 34m 20s |
/parquet/url/engine/engine select output to file | OK | 41m 56s |
/parquet/url/function/select from function auto cast types | OK | 18m 45s |
/parquet/mysql | OK | 1m 1s |
/parquet/mysql/compression type | OK | 1m 1s |
/parquet/mysql/compression type/=NONE | OK | 1m 1s |
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine | OK | 32s 922ms |
/parquet/mysql/compression type/=GZIP | OK | 1m 0s |
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine | OK | 33s 714ms |
/parquet/mysql/compression type/=LZ4 | OK | 1m 0s |
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine | OK | 33s 738ms |
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function | OK | 28s 216ms |
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function | OK | 26s 773ms |
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function | OK | 26s 700ms |
/parquet/postgresql | OK | 56s 384ms |
/parquet/postgresql/compression type | OK | 56s 286ms |
/parquet/postgresql/compression type/=NONE | OK | 54s 441ms |
/parquet/postgresql/compression type/=GZIP | OK | 56s 46ms |
/parquet/postgresql/compression type/=LZ4 | OK | 55s 296ms |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail | 25s 146ms |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail | 24s 175ms |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail | 24s 556ms |
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function | OK | 29s 986ms |
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function | OK | 30s 390ms |
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function | OK | 30s 554ms |
/parquet/remote | OK | 27m 16s |
/parquet/remote/compression type | OK | 27m 16s |
/parquet/remote/compression type/=NONE | OK | 27m 16s |
/parquet/remote/compression type/=GZIP | OK | 27m 11s |
/parquet/remote/compression type/=LZ4 | OK | 27m 11s |
/parquet/remote/compression type/=NONE /outline | OK | 27m 16s |
/parquet/remote/compression type/=GZIP /outline | OK | 27m 11s |
/parquet/remote/compression type/=LZ4 /outline | OK | 27m 11s |
/parquet/remote/compression type/=NONE /outline/insert into function | OK | 10m 16s |
/parquet/remote/compression type/=GZIP /outline/insert into function | OK | 10m 15s |
/parquet/remote/compression type/=LZ4 /outline/insert into function | OK | 10m 14s |
/parquet/query/compression type/=LZ4 /insert into mergetree table from file | OK | 7m 38s |
/parquet/query/compression type/=GZIP /insert into mergetree table from file | OK | 7m 38s |
/parquet/query/compression type/=NONE /insert into mergetree table from file | OK | 7m 36s |
/parquet/remote/compression type/=LZ4 /outline/select from function | OK | 16m 56s |
/parquet/remote/compression type/=GZIP /outline/select from function | OK | 16m 55s |
/parquet/remote/compression type/=NONE /outline/select from function | OK | 16m 59s |
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file | OK | 5m 42s |
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file | OK | 5m 42s |
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file | OK | 5m 41s |
/parquet/query/compression type/=LZ4 /insert into distributed table from file | OK | 4m 31s |
/parquet/query/compression type/=GZIP /insert into distributed table from file | OK | 4m 31s |
/parquet/query/compression type/=NONE /insert into distributed table from file | OK | 4m 31s |
/parquet/query/compression type/=LZ4 /select from memory table into file | OK | 7m 48s |
/parquet/query/compression type/=GZIP /select from memory table into file | OK | 7m 49s |
/parquet/query/compression type/=NONE /select from memory table into file | OK | 7m 48s |
/parquet/chunked array | XFail | 31s 686ms |
/parquet/broken | OK | 766ms |
/parquet/broken/file | Skip | 20ms |
/parquet/broken/read broken bigint | Skip | 24ms |
/parquet/broken/read broken date | Skip | 27ms |
/parquet/broken/read broken int | Skip | 37ms |
/parquet/broken/read broken smallint | Skip | 13ms |
/parquet/broken/read broken timestamp ms | Skip | 22ms |
/parquet/broken/read broken timestamp us | Skip | 29ms |
/parquet/broken/read broken tinyint | Skip | 56ms |
/parquet/broken/read broken ubigint | Skip | 22ms |
/parquet/broken/read broken uint | Skip | 48ms |
/parquet/broken/read broken usmallint | Skip | 30ms |
/parquet/broken/read broken utinyint | Skip | 19ms |
/parquet/broken/string | Skip | 23ms |
/parquet/encoding | OK | 22s 491ms |
/parquet/encoding/deltabytearray1 | OK | 3s 607ms |
/parquet/encoding/deltabytearray2 | OK | 2s 824ms |
/parquet/encoding/deltalengthbytearray | OK | 2s 641ms |
/parquet/encoding/dictionary | OK | 2s 985ms |
/parquet/encoding/plain | OK | 3s 133ms |
/parquet/encoding/plainrlesnappy | OK | 4s 371ms |
/parquet/encoding/rleboolean | OK | 2s 782ms |
/parquet/compression | OK | 1m 1s |
/parquet/compression/arrow snappy | OK | 3s 217ms |
/parquet/compression/brotli | OK | 2s 653ms |
/parquet/compression/gzippages | OK | 5s 710ms |
/parquet/compression/largegzip | OK | 3s 214ms |
/parquet/compression/lz4 hadoop | OK | 2s 615ms |
/parquet/compression/lz4 hadoop large | OK | 2s 980ms |
/parquet/compression/lz4 non hadoop | OK | 2s 816ms |
/parquet/compression/lz4 raw | OK | 2s 616ms |
/parquet/compression/lz4 raw large | OK | 3s 67ms |
/parquet/compression/lz4pages | OK | 5s 795ms |
/parquet/compression/nonepages | OK | 5s 813ms |
/parquet/compression/snappypages | OK | 6s 23ms |
/parquet/compression/snappyplain | OK | 2s 695ms |
/parquet/compression/snappyrle | OK | 3s 178ms |
/parquet/compression/zstd | OK | 2s 807ms |
/parquet/compression/zstdpages | OK | 5s 764ms |
/parquet/datatypes | OK | 4m 36s |
/parquet/datatypes/arrowtimestamp | OK | 2s 794ms |
/parquet/datatypes/arrowtimestampms | OK | 2s 991ms |
/parquet/datatypes/binary | OK | 2s 863ms |
/parquet/datatypes/binary string | OK | 3s 28ms |
/parquet/datatypes/blob | OK | 2s 801ms |
/parquet/datatypes/boolean | OK | 3s 126ms |
/parquet/datatypes/byte array | OK | 3s 58ms |
/parquet/datatypes/columnname | OK | 2s 872ms |
/parquet/datatypes/columnwithnull | OK | 3s 417ms |
/parquet/datatypes/columnwithnull2 | OK | 2s 902ms |
/parquet/datatypes/date | OK | 3s 5ms |
/parquet/datatypes/decimal with filter | OK | 3s 351ms |
/parquet/datatypes/decimalvariousfilters | OK | 3s 199ms |
/parquet/datatypes/decimalwithfilter2 | OK | 2s 797ms |
/parquet/datatypes/enum | OK | 3s 708ms |
/parquet/datatypes/enum2 | OK | 3s 116ms |
/parquet/datatypes/fixed length decimal | OK | 2s 975ms |
/parquet/datatypes/fixed length decimal legacy | OK | 2s 920ms |
/parquet/datatypes/fixedstring | OK | 2s 921ms |
/parquet/datatypes/float16 | XFail | 918ms |
/parquet/datatypes/h2oai | OK | 3s 206ms |
/parquet/datatypes/hive | OK | 5s 791ms |
/parquet/datatypes/int32 | OK | 2s 844ms |
/parquet/datatypes/int32 decimal | OK | 2s 801ms |
/parquet/datatypes/int64 | OK | 2s 992ms |
/parquet/datatypes/int64 decimal | OK | 3s 303ms |
/parquet/datatypes/json | OK | 2s 813ms |
/parquet/datatypes/large string map | XFail | 11s 29ms |
/parquet/datatypes/largedouble | OK | 3s 498ms |
/parquet/datatypes/manydatatypes | OK | 3s 67ms |
/parquet/datatypes/manydatatypes2 | OK | 4s 60ms |
/parquet/datatypes/maps | OK | 2s 846ms |
/parquet/datatypes/nameswithemoji | OK | 3s 41ms |
/parquet/datatypes/nandouble | OK | 2s 913ms |
/parquet/datatypes/negativeint64 | OK | 4s 565ms |
/parquet/datatypes/nullbyte | OK | 2s 987ms |
/parquet/datatypes/nullbytemultiple | OK | 3s 102ms |
/parquet/datatypes/nullsinid | OK | 2s 556ms |
/parquet/datatypes/pandasdecimal | OK | 2s 789ms |
/parquet/datatypes/pandasdecimaldate | OK | 3s 237ms |
/parquet/datatypes/parquetgo | OK | 2s 883ms |
/parquet/datatypes/selectdatewithfilter | OK | 1m 33s |
/parquet/datatypes/singlenull | OK | 3s 256ms |
/parquet/datatypes/sparkv21 | OK | 2s 817ms |
/parquet/datatypes/sparkv22 | OK | 3s 251ms |
/parquet/datatypes/statdecimal | OK | 2s 520ms |
/parquet/datatypes/string | OK | 2s 560ms |
/parquet/datatypes/string int list inconsistent offset multiple batches | OK | 15s 68ms |
/parquet/datatypes/stringtypes | OK | 2s 501ms |
/parquet/datatypes/struct | OK | 2s 376ms |
/parquet/datatypes/supporteduuid | OK | 2s 175ms |
/parquet/datatypes/timestamp1 | OK | 2s 173ms |
/parquet/query/compression type/=LZ4 /select from mergetree table into file | OK | 4m 58s |
/parquet/datatypes/timestamp2 | OK | 2s 682ms |
/parquet/query/compression type/=NONE /select from mergetree table into file | OK | 4m 59s |
/parquet/query/compression type/=GZIP /select from mergetree table into file | OK | 4m 59s |
/parquet/datatypes/timezone | OK | 1s 485ms |
/parquet/datatypes/unsigned | OK | 3s 728ms |
/parquet/datatypes/unsupportednull | OK | 328ms |
/parquet/complex | OK | 46s 874ms |
/parquet/complex/arraystring | OK | 2s 885ms |
/parquet/complex/big tuple with nulls | OK | 2s 397ms |
/parquet/complex/bytearraydictionary | OK | 2s 342ms |
/parquet/complex/complex null | OK | 2s 75ms |
/parquet/complex/lagemap | OK | 2s 573ms |
/parquet/complex/largenestedarray | OK | 2s 509ms |
/parquet/complex/largestruct | OK | 2s 367ms |
/parquet/complex/largestruct2 | OK | 2s 460ms |
/parquet/complex/largestruct3 | OK | 2s 47ms |
/parquet/complex/list | OK | 2s 324ms |
/parquet/complex/nested array | OK | 2s 296ms |
/parquet/complex/nested map | OK | 2s 160ms |
/parquet/complex/nestedallcomplex | OK | 2s 751ms |
/parquet/complex/nestedarray2 | OK | 1s 941ms |
/parquet/complex/nestedstruct | OK | 2s 28ms |
/parquet/complex/nestedstruct2 | OK | 2s 521ms |
/parquet/complex/nestedstruct3 | OK | 2s 307ms |
/parquet/complex/nestedstruct4 | OK | 2s 188ms |
/parquet/complex/tupleofnulls | OK | 2s 399ms |
/parquet/complex/tuplewithdatetime | OK | 2s 213ms |
/parquet/cache | OK | 4s 736ms |
/parquet/cache/cache1 | OK | 2s 344ms |
/parquet/cache/cache2 | OK | 2s 351ms |
/parquet/glob | OK | 1m 21s |
/parquet/glob/fastparquet globs | OK | 25s 826ms |
/parquet/glob/glob1 | OK | 3s 417ms |
/parquet/glob/glob2 | OK | 4s 210ms |
/parquet/glob/glob with multiple elements | OK | 599ms |
/parquet/glob/million extensions | OK | 47s 225ms |
/parquet/rowgroups | OK | 4s 606ms |
/parquet/rowgroups/manyrowgroups | OK | 2s 448ms |
/parquet/rowgroups/manyrowgroups2 | OK | 2s 130ms |
/parquet/encrypted | Skip | 26ms |
/parquet/fastparquet | OK | 114ms |
/parquet/fastparquet/airlines | Skip | 6ms |
/parquet/fastparquet/baz | Skip | 11ms |
/parquet/fastparquet/empty date | Skip | 39ms |
/parquet/fastparquet/evo | Skip | 6ms |
/parquet/fastparquet/fastparquet | Skip | 8ms |
/parquet/read and write | OK | 21m 7s |
/parquet/read and write/read and write parquet file | OK | 21m 7s |
/parquet/column related errors | OK | 2s 524ms |
/parquet/column related errors/check error with 500 columns | OK | 2s 522ms |
/parquet/multi chunk upload | Skip | 2ms |
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file | OK | 3m 51s |
/parquet/query/compression type/=NONE /select from replicated mergetree table into file | OK | 3m 52s |
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file | OK | 3m 50s |
/parquet/query/compression type/=LZ4 /select from distributed table into file | OK | 4m 23s |
/parquet/query/compression type/=GZIP /select from distributed table into file | OK | 4m 22s |
/parquet/query/compression type/=NONE /select from distributed table into file | OK | 4m 23s |
/parquet/query/compression type/=LZ4 /select from mat view into file | OK | 3m 42s |
/parquet/query/compression type/=GZIP /select from mat view into file | OK | 3m 39s |
/parquet/query/compression type/=NONE /select from mat view into file | OK | 3m 45s |
/parquet/query/compression type/=GZIP /insert into table with projection from file | OK | 1m 28s |
/parquet/query/compression type/=LZ4 /insert into table with projection from file | OK | 1m 27s |
/parquet/query/compression type/=NONE /insert into table with projection from file | OK | 1m 21s |
Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922