Copyright 2025, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Date | Apr 03, 2025 22:10 |
Duration | 50m 20s |
Framework | TestFlows 2.0.250110.1002922 |
Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#688/2ec25273d5c7eeb0782547e5a2383d0dc8ff24df/regression/aarch64/with_analyzer/zookeeper/without_thread_fuzzer/parquet/
project | Altinity/ClickHouse |
project.id | 159717931 |
package | https://s3.amazonaws.com/altinity-build-artifacts/PRs/688/2ec25273d5c7eeb0782547e5a2383d0dc8ff24df/package_aarch64/clickhouse-common-static_24.12.2.20238.altinityantalya_arm64.deb |
version | 24.12.2.20238.altinityantalya |
user.name | zvonand |
repository | https://github.com/Altinity/clickhouse-regression |
commit.hash | bd31e738c0cedaca253d15a05ed245c41b6e0b6a |
job.name | Parquet |
job.retry | 1 |
job.url | https://github.com/Altinity/ClickHouse/actions/runs/14252367922 |
arch | aarch64 |
local | True |
clickhouse_version | None |
clickhouse_path | https://s3.amazonaws.com/altinity-build-artifacts/PRs/688/2ec25273d5c7eeb0782547e5a2383d0dc8ff24df/package_aarch64/clickhouse-common-static_24.12.2.20238.altinityantalya_arm64.deb |
as_binary | False |
base_os | None |
keeper_path | None |
zookeeper_version | None |
use_keeper | False |
stress | False |
collect_service_logs | True |
thread_fuzzer | False |
with_analyzer | True |
reuse_env | False |
storages | None |
minio_uri | Secret(name='minio_uri') |
minio_root_user | Secret(name='minio_root_user') |
minio_root_password | Secret(name='minio_root_password') |
aws_s3_bucket | None |
aws_s3_region | Secret(name='aws_s3_region') |
aws_s3_key_id | Secret(name='aws_s3_key_id') |
aws_s3_access_key | Secret(name='aws_s3_access_key') |
gcs_uri | None |
gcs_key_id | None |
gcs_key_secret | None |
azure_account_name | None |
azure_storage_key | None |
azure_container | None |
native_parquet_reader | False |
stress_bloom | False |
Units | Skip | OK | Fail | Error | XFail |
---|---|---|---|---|---|
Modules | | | | | |
Suites | | | | | |
Features | | | | | |
Scenarios | | | | | |
Checks | | | | | |
Examples | | | | | |
Steps | | | | | |
Test Name | Result | Message |
---|---|---|
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail 37s 232ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail 36s 713ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail 37s 580ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/chunked array | XFail 15s 723ms Not supported | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 24.12.2.20238.altinityantalya (altinity build)) (query: INSERT INTO table_18bcf349_10dc_11f0_b8b0_9600043155c8 FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
/parquet/datatypes/float16 | XFail 399ms ClickHouse does not import FLOAT16 properly | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16 assert output == expected, error() ^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert output == expected, error() Assertion values assert output == expected, error() ^ is '[-0,0,32,2052,32838,0,0,0,0,0,0]' assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]' assert output == expected, error() ^ is = False @@ -1 +1 @@ -[-0,0,32,2052,32838,0,0,0,0,0,0] +[-2,-1,0,1,2,3,4,5,6,7,8,9] assert output == expected, error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16' 105\| ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet) 106\| """ 107\| ) 108\| 109\| with Then("I read the contents of the created table"): 110\| output = node.query( 111\| f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV" 112\| ).output 113\|> assert output == expected, error() 114\| 115\| finally: 116\| with Finally("I drop the table"): |
/parquet/datatypes/large string map | XFail 7s 620ms Will fail until the, https://github.com/apache/arrow/pull/35825, gets merged. | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map import_export(snapshot_name="large_string_map_structure", import_file=import_file) File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Received exception from server (version 24.12.2): Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA) (query: CREATE TABLE table_153510a7_10dd_11f0_b13c_9600043155c8 ENGINE = MergeTree ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
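The three PostgreSQL `XFail` entries above share one note: the stored snapshot expects `2106-02-07 06:28:16`, one second past the upper bound of ClickHouse's 32-bit `DateTime` range (`2106-02-07 06:28:15` UTC), while the current run renders the datetime value `0` as `1970-01-01 01:00:00`, i.e. the epoch in a UTC+1 session timezone. Below is a minimal sketch for inspecting how the server under test renders these two boundary values; the `clickhouse-client` invocation and the `Europe/Berlin` timezone are assumptions for illustration, not taken from the suite:

```python
import subprocess

# Epoch 0 and the UInt32 maximum are the two ends of ClickHouse's 32-bit
# DateTime range; how they print depends on the session timezone, so it is
# pinned explicitly here (Europe/Berlin matches the +01:00 rendering seen
# in the snapshot diff, but is only an assumption).
query = """
SELECT
    toTimeZone(toDateTime(0), 'Europe/Berlin')          AS epoch_rendered,
    toTimeZone(toDateTime(4294967295), 'Europe/Berlin') AS uint32_max_rendered
FORMAT JSONEachRow
"""

result = subprocess.run(
    ["clickhouse-client", "--query", query],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```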
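`/parquet/chunked array` and `/parquet/datatypes/large string map` both hit the same Arrow limitation, `NotImplemented: Nested data conversions not implemented for chunked array outputs` (tracked upstream in https://github.com/apache/arrow/pull/35825): a nested column that the Parquet reader returns as a multi-chunk array cannot be converted. The sketch below only illustrates the file shape involved (a `Map` column of string keys and values, written with pyarrow); an actual multi-chunk read typically requires the nested string data within one row group to exceed Arrow's 2 GiB array limit, which this small example does not attempt. File and column names are illustrative:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A Map(String, String) column split across several row groups. The suite's
# failing files (e.g. large_string_map.brotli.parquet) have the same nested
# shape, but with enough string data that Arrow returns the column as a
# chunked array on read, which the Parquet-to-ClickHouse path rejects.
n = 1_000
keys = pa.array([f"key_{i}" for i in range(n)])
values = pa.array(["x" * 64 for _ in range(n)])
offsets = pa.array(range(n + 1), type=pa.int32())
attrs = pa.MapArray.from_arrays(offsets, keys, values)

table = pa.table({"id": pa.array(range(n)), "attrs": attrs})
pq.write_table(table, "chunked_map_test_file.parquet", row_group_size=100)
```

Reading such a file through `INSERT INTO ... FROM INFILE '<file>.parquet' FORMAT Parquet`, as the chunked array test does, is the path that raises the exception quoted above.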
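For `/parquet/datatypes/float16`, the assertion above shows the `floatfield` column coming back mangled (`[-0,0,32,2052,32838,...]`) instead of the expected `[-2,-1,0,1,2,...]`, i.e. the FLOAT16 values are not decoded correctly on import. A hedged sketch for generating a comparable half-precision test file, assuming a pyarrow build new enough (roughly 15.0+) to write the Parquet FLOAT16 logical type; the file name is illustrative:

```python
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Half-precision values stored with Parquet's FLOAT16 logical type
# (a 2-byte fixed-length byte array), which is what the import test reads.
floats = np.array([-2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.float16)
table = pa.table({"floatfield": pa.array(floats, type=pa.float16())})
pq.write_table(table, "float16_test_file.parquet")

# Round-trip through pyarrow itself as a sanity check before pointing
# ClickHouse's file() table function at the file.
print(pq.read_table("float16_test_file.parquet").to_pydict())
```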
Test Name | Result | Duration |
---|---|---|
/parquet | OK | 50m 20s |
/parquet/file | OK | 30m 36s |
/parquet/file/engine | OK | 30m 36s |
/parquet/file/engine/insert into engine | OK | 19m 27s |
/parquet/file/engine/select from engine | OK | 9m 32s |
/parquet/query | OK | 41m 15s |
/parquet/query/compression type | OK | 41m 15s |
/parquet/file/function | OK | 15m 57s |
/parquet/query/compression type/=NONE | OK | 41m 14s |
/parquet/query/compression type/=NONE /insert into memory table from file | OK | 9m 6s |
/parquet/query/compression type/=GZIP | OK | 41m 15s |
/parquet/query/compression type/=GZIP /insert into memory table from file | OK | 9m 8s |
/parquet/file/function/insert into function manual cast types | OK | 15m 36s |
/parquet/file/engine/engine to file to engine | OK | 26m 27s |
/parquet/file/function/insert into function auto cast types | OK | 15m 57s |
/parquet/file/function/select from function manual cast types | OK | 15m 25s |
/parquet/file/engine/insert into engine from file | OK | 18m 51s |
/parquet/file/function/select from function auto cast types | OK | 9m 34s |
/parquet/query/compression type/=LZ4 | OK | 41m 15s |
/parquet/file/engine/engine select output to file | OK | 30m 35s |
/parquet/list in multiple chunks | OK | 1m 9s |
/parquet/query/compression type/=LZ4 /insert into memory table from file | OK | 9m 7s |
/parquet/url | OK | 31m 37s |
/parquet/url/engine | OK | 30m 36s |
/parquet/url/function | OK | 30m 7s |
/parquet/url/engine/insert into engine | OK | 19m 30s |
/parquet/url/function/insert into function | OK | 15m 7s |
/parquet/url/engine/select from engine | OK | 9m 15s |
/parquet/url/engine/engine to file to engine | OK | 26m 15s |
/parquet/url/function/select from function manual cast types | OK | 30m 7s |
/parquet/url/engine/insert into engine from file | OK | 25m 24s |
/parquet/url/function/select from function auto cast types | OK | 14m 46s |
/parquet/url/engine/engine select output to file | OK | 30m 36s |
/parquet/mysql | OK | 1m 45s |
/parquet/mysql/compression type | OK | 1m 45s |
/parquet/mysql/compression type/=NONE | OK | 1m 45s |
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine | OK | 1m 17s |
/parquet/mysql/compression type/=GZIP | OK | 1m 43s |
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine | OK | 1m 17s |
/parquet/mysql/compression type/=LZ4 | OK | 1m 45s |
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine | OK | 1m 17s |
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function | OK | 25s 689ms |
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function | OK | 27s 929ms |
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function | OK | 27s 459ms |
/parquet/postgresql | OK | 1m 9s |
/parquet/postgresql/compression type | OK | 1m 9s |
/parquet/postgresql/compression type/=GZIP | OK | 1m 9s |
/parquet/postgresql/compression type/=LZ4 | OK | 1m 9s |
/parquet/postgresql/compression type/=NONE | OK | 1m 7s |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail | 37s 232ms |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail | 36s 713ms |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail | 37s 580ms |
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function | OK | 30s 164ms |
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function | OK | 32s 225ms |
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function | OK | 31s 625ms |
/parquet/remote | OK | 18m 50s |
/parquet/remote/compression type | OK | 18m 49s |
/parquet/remote/compression type/=NONE | OK | 18m 49s |
/parquet/remote/compression type/=GZIP | OK | 18m 48s |
/parquet/remote/compression type/=LZ4 | OK | 18m 49s |
/parquet/remote/compression type/=NONE /outline | OK | 18m 49s |
/parquet/remote/compression type/=GZIP /outline | OK | 18m 48s |
/parquet/remote/compression type/=GZIP /outline/insert into function | OK | 6m 46s |
/parquet/remote/compression type/=LZ4 /outline | OK | 18m 49s |
/parquet/remote/compression type/=LZ4 /outline/insert into function | OK | 6m 46s |
/parquet/remote/compression type/=NONE /outline/insert into function | OK | 6m 46s |
/parquet/query/compression type/=NONE /insert into mergetree table from file | OK | 5m 22s |
/parquet/query/compression type/=LZ4 /insert into mergetree table from file | OK | 5m 21s |
/parquet/query/compression type/=GZIP /insert into mergetree table from file | OK | 5m 22s |
/parquet/remote/compression type/=LZ4 /outline/select from function | OK | 12m 2s |
/parquet/remote/compression type/=GZIP /outline/select from function | OK | 12m 2s |
/parquet/remote/compression type/=NONE /outline/select from function | OK | 12m 2s |
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file | OK | 3m 57s |
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file | OK | 3m 56s |
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file | OK | 3m 55s |
/parquet/query/compression type/=LZ4 /insert into distributed table from file | OK | 3m 22s |
/parquet/query/compression type/=NONE /insert into distributed table from file | OK | 3m 20s |
/parquet/query/compression type/=GZIP /insert into distributed table from file | OK | 3m 21s |
/parquet/query/compression type/=NONE /select from memory table into file | OK | 5m 16s |
/parquet/query/compression type/=GZIP /select from memory table into file | OK | 5m 19s |
/parquet/query/compression type/=LZ4 /select from memory table into file | OK | 5m 15s |
/parquet/chunked array | XFail | 15s 723ms |
/parquet/broken | OK | 394ms |
/parquet/broken/file | Skip | 11ms |
/parquet/broken/read broken bigint | Skip | 13ms |
/parquet/broken/read broken date | Skip | 29ms |
/parquet/broken/read broken int | Skip | 15ms |
/parquet/broken/read broken smallint | Skip | 55ms |
/parquet/broken/read broken timestamp ms | Skip | 12ms |
/parquet/broken/read broken timestamp us | Skip | 17ms |
/parquet/broken/read broken tinyint | Skip | 18ms |
/parquet/broken/read broken ubigint | Skip | 12ms |
/parquet/broken/read broken uint | Skip | 15ms |
/parquet/broken/read broken usmallint | Skip | 21ms |
/parquet/broken/read broken utinyint | Skip | 14ms |
/parquet/broken/string | Skip | 13ms |
/parquet/encoding | OK | 54s 499ms |
/parquet/encoding/deltabytearray1 | OK | 8s 280ms |
/parquet/encoding/deltabytearray2 | OK | 7s 760ms |
/parquet/encoding/deltalengthbytearray | OK | 7s 513ms |
/parquet/encoding/dictionary | OK | 7s 260ms |
/parquet/encoding/plain | OK | 7s 772ms |
/parquet/encoding/plainrlesnappy | OK | 8s 634ms |
/parquet/encoding/rleboolean | OK | 7s 219ms |
/parquet/compression | OK | 2m 34s |
/parquet/compression/arrow snappy | OK | 7s 243ms |
/parquet/compression/brotli | OK | 7s 517ms |
/parquet/compression/gzippages | OK | 14s 762ms |
/parquet/compression/largegzip | OK | 7s 830ms |
/parquet/compression/lz4 hadoop | OK | 7s 459ms |
/parquet/compression/lz4 hadoop large | OK | 7s 240ms |
/parquet/compression/lz4 non hadoop | OK | 7s 388ms |
/parquet/compression/lz4 raw | OK | 7s 518ms |
/parquet/compression/lz4 raw large | OK | 7s 292ms |
/parquet/compression/lz4pages | OK | 15s 118ms |
/parquet/compression/nonepages | OK | 14s 723ms |
/parquet/compression/snappypages | OK | 14s 475ms |
/parquet/compression/snappyplain | OK | 7s 98ms |
/parquet/compression/snappyrle | OK | 7s 499ms |
/parquet/compression/zstd | OK | 7s 122ms |
/parquet/compression/zstdpages | OK | 13s 940ms |
/parquet/datatypes | OK | 6m 51s |
/parquet/datatypes/arrowtimestamp | OK | 6s 753ms |
/parquet/datatypes/arrowtimestampms | OK | 9s 816ms |
/parquet/datatypes/binary | OK | 6s 944ms |
/parquet/query/compression type/=NONE /select from mergetree table into file | OK | 3m 25s |
/parquet/datatypes/binary string | OK | 8s 707ms |
/parquet/query/compression type/=LZ4 /select from mergetree table into file | OK | 3m 23s |
/parquet/query/compression type/=GZIP /select from mergetree table into file | OK | 3m 19s |
/parquet/datatypes/blob | OK | 12s 188ms |
/parquet/datatypes/boolean | OK | 11s 360ms |
/parquet/datatypes/byte array | OK | 6s 820ms |
/parquet/datatypes/columnname | OK | 6s 702ms |
/parquet/datatypes/columnwithnull | OK | 6s 814ms |
/parquet/datatypes/columnwithnull2 | OK | 6s 820ms |
/parquet/datatypes/date | OK | 6s 728ms |
/parquet/datatypes/decimal with filter | OK | 6s 778ms |
/parquet/datatypes/decimalvariousfilters | OK | 6s 510ms |
/parquet/datatypes/decimalwithfilter2 | OK | 6s 877ms |
/parquet/datatypes/enum | OK | 6s 858ms |
/parquet/datatypes/enum2 | OK | 6s 718ms |
/parquet/datatypes/fixed length decimal | OK | 6s 715ms |
/parquet/datatypes/fixed length decimal legacy | OK | 6s 978ms |
/parquet/datatypes/fixedstring | OK | 6s 843ms |
/parquet/datatypes/float16 | XFail | 399ms |
/parquet/datatypes/h2oai | OK | 6s 770ms |
/parquet/datatypes/hive | OK | 13s 487ms |
/parquet/datatypes/int32 | OK | 6s 726ms |
/parquet/datatypes/int32 decimal | OK | 6s 649ms |
/parquet/datatypes/int64 | OK | 7s 21ms |
/parquet/datatypes/int64 decimal | OK | 6s 898ms |
/parquet/datatypes/json | OK | 6s 637ms |
/parquet/datatypes/large string map | XFail | 7s 620ms |
/parquet/datatypes/largedouble | OK | 6s 936ms |
/parquet/datatypes/manydatatypes | OK | 6s 939ms |
/parquet/datatypes/manydatatypes2 | OK | 7s 47ms |
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file | OK | 3m 2s |
/parquet/datatypes/maps | OK | 6s 746ms |
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file | OK | 3m 3s |
/parquet/query/compression type/=NONE /select from replicated mergetree table into file | OK | 3m 0s |
/parquet/datatypes/nameswithemoji | OK | 9s 688ms |
/parquet/complex | OK | 2m 23s |
/parquet/complex/arraystring | OK | 13s 865ms |
/parquet/datatypes/nandouble | OK | 11s 231ms |
/parquet/complex/big tuple with nulls | OK | 6s 720ms |
/parquet/datatypes/negativeint64 | OK | 6s 571ms |
/parquet/complex/bytearraydictionary | OK | 6s 604ms |
/parquet/datatypes/nullbyte | OK | 8s 671ms |
/parquet/complex/complex null | OK | 8s 370ms |
/parquet/datatypes/nullbytemultiple | OK | 6s 349ms |
/parquet/complex/lagemap | OK | 6s 541ms |
/parquet/datatypes/nullsinid | OK | 6s 516ms |
/parquet/complex/largenestedarray | OK | 6s 537ms |
/parquet/datatypes/pandasdecimal | OK | 6s 499ms |
/parquet/complex/largestruct | OK | 6s 312ms |
/parquet/datatypes/pandasdecimaldate | OK | 6s 376ms |
/parquet/complex/largestruct2 | OK | 6s 846ms |
/parquet/datatypes/parquetgo | OK | 6s 958ms |
/parquet/complex/largestruct3 | OK | 8s 89ms |
/parquet/cache | OK | 15s 811ms |
/parquet/cache/cache1 | OK | 8s 352ms |
/parquet/datatypes/selectdatewithfilter | OK | 12s 128ms |
/parquet/complex/list | OK | 7s 675ms |
/parquet/cache/cache2 | OK | 7s 454ms |
/parquet/complex/nested array | OK | 6s 697ms |
/parquet/glob | OK | 36s 715ms |
/parquet/glob/fastparquet globs | OK | 2s 81ms |
/parquet/datatypes/singlenull | OK | 6s 781ms |
/parquet/glob/glob1 | OK | 1s 645ms |
/parquet/glob/glob2 | OK | 1s 765ms |
/parquet/glob/glob with multiple elements | OK | 342ms |
/parquet/glob/million extensions | OK | 30s 868ms |
/parquet/complex/nested map | OK | 6s 708ms |
/parquet/datatypes/sparkv21 | OK | 6s 650ms |
/parquet/complex/nestedallcomplex | OK | 6s 555ms |
/parquet/datatypes/sparkv22 | OK | 6s 490ms |
/parquet/complex/nestedarray2 | OK | 6s 460ms |
/parquet/datatypes/statdecimal | OK | 6s 430ms |
/parquet/complex/nestedstruct | OK | 6s 483ms |
/parquet/datatypes/string | OK | 6s 549ms |
/parquet/complex/nestedstruct2 | OK | 6s 490ms |
/parquet/datatypes/string int list inconsistent offset multiple batches | OK | 11s 88ms |
/parquet/rowgroups | OK | 13s 380ms |
/parquet/rowgroups/manyrowgroups | OK | 6s 700ms |
/parquet/complex/nestedstruct3 | OK | 6s 588ms |
/parquet/rowgroups/manyrowgroups2 | OK | 6s 675ms |
/parquet/datatypes/stringtypes | OK | 6s 583ms |
/parquet/complex/nestedstruct4 | OK | 6s 644ms |
/parquet/encrypted | Skip | 1ms |
/parquet/fastparquet | OK | 13ms |
/parquet/fastparquet/airlines | Skip | 2ms |
/parquet/fastparquet/baz | Skip | 1ms |
/parquet/fastparquet/empty date | Skip | 1ms |
/parquet/fastparquet/evo | Skip | 1ms |
/parquet/fastparquet/fastparquet | Skip | 1ms |
/parquet/read and write | OK | 13m 33s |
/parquet/read and write/read and write parquet file | OK | 13m 33s |
/parquet/datatypes/struct | OK | 6s 713ms |
/parquet/complex/tupleofnulls | OK | 6s 771ms |
/parquet/datatypes/supporteduuid | OK | 6s 636ms |
/parquet/complex/tuplewithdatetime | OK | 6s 482ms |
/parquet/datatypes/timestamp1 | OK | 6s 514ms |
/parquet/column related errors | OK | 2s 39ms |
/parquet/column related errors/check error with 500 columns | OK | 2s 38ms |
/parquet/multi chunk upload | Skip | 1ms |
/parquet/datatypes/timestamp2 | OK | 6s 648ms |
/parquet/datatypes/timezone | OK | 6s 412ms |
/parquet/datatypes/unsigned | OK | 13s 40ms |
/parquet/query/compression type/=NONE /select from distributed table into file | OK | 3m 32s |
/parquet/query/compression type/=GZIP /select from distributed table into file | OK | 3m 32s |
/parquet/query/compression type/=LZ4 /select from distributed table into file | OK | 3m 31s |
/parquet/datatypes/unsupportednull | OK | 131ms |
/parquet/query/compression type/=NONE /select from mat view into file | OK | 2m 52s |
/parquet/query/compression type/=GZIP /select from mat view into file | OK | 2m 52s |
/parquet/query/compression type/=LZ4 /select from mat view into file | OK | 2m 52s |
/parquet/query/compression type/=NONE /insert into table with projection from file | OK | 1m 20s |
/parquet/query/compression type/=GZIP /insert into table with projection from file | OK | 1m 20s |
/parquet/query/compression type/=LZ4 /insert into table with projection from file | OK | 1m 20s |
Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922