Copyright 2025, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Date | Apr 02, 2025 17:46 |
Duration | 1h 24m |
Framework | TestFlows 2.0.250110.1002922 |
Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#710/b676761bbe14c4d08a569f184dedfb4236b18f49/regression/x86_64/with_analyzer/zookeeper/without_thread_fuzzer/parquet/
project | Altinity/ClickHouse |
project.id | 159717931 |
package | https://s3.amazonaws.com/altinity-build-artifacts/PRs/710/b676761bbe14c4d08a569f184dedfb4236b18f49/package_release/clickhouse-common-static_24.12.2.20224.altinityantalya_amd64.deb |
version | 24.12.2.20224.altinityantalya |
user.name | arthurpassos |
repository | https://github.com/Altinity/clickhouse-regression |
commit.hash | bd31e738c0cedaca253d15a05ed245c41b6e0b6a |
job.name | Parquet |
job.retry | 1 |
job.url | https://github.com/Altinity/ClickHouse/actions/runs/14221408726 |
arch | x86_64 |
local | True |
clickhouse_version | None |
clickhouse_path | https://s3.amazonaws.com/altinity-build-artifacts/PRs/710/b676761bbe14c4d08a569f184dedfb4236b18f49/package_release/clickhouse-common-static_24.12.2.20224.altinityantalya_amd64.deb |
as_binary | False |
base_os | None |
keeper_path | None |
zookeeper_version | None |
use_keeper | False |
stress | False |
collect_service_logs | True |
thread_fuzzer | False |
with_analyzer | True |
reuse_env | False |
storages | None |
minio_uri | Secret(name='minio_uri') |
minio_root_user | Secret(name='minio_root_user') |
minio_root_password | Secret(name='minio_root_password') |
aws_s3_bucket | None |
aws_s3_region | Secret(name='aws_s3_region') |
aws_s3_key_id | Secret(name='aws_s3_key_id') |
aws_s3_access_key | Secret(name='aws_s3_access_key') |
gcs_uri | None |
gcs_key_id | None |
gcs_key_secret | None |
azure_account_name | None |
azure_storage_key | None |
azure_container | None |
native_parquet_reader | False |
stress_bloom | False |
Units | Skip | OK | Fail | XFail
---|---|---|---|---
Modules | | | |
Suites | | | |
Features | | | |
Scenarios | | | |
Checks | | | |
Examples | | | |
Steps | | | |
Test Name | Result | Message |
---|---|---|
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail 30s 18ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail 30s 198ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail 29s 939ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 827, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 897, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 897 in 'execute_query' 889\| with values() as that: 890\| snapshot_result = snapshot( 891\| "\n" + r.output.strip() + "\n", 892\| id=snapshot_id, 893\| name=snapshot_name, 894\| encoder=str, 895\| mode=snapshot.CHECK, 896\| ) 897\|> assert that(snapshot_result), error() |
/parquet/chunked array | XFail 41s 120ms Not supported | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 24.12.2.20224.altinityantalya (altinity build)) (query: INSERT INTO table_f57fd8ef_0ff0_11f0_85a6_960004305699 FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
/parquet/datatypes/float16 | XFail 1s 11ms ClickHouse does not import FLOAT16 properly | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16 assert output == expected, error() ^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert output == expected, error() Assertion values assert output == expected, error() ^ is '[-0,0,32,2052,32838,0,0,0,0,0,0]' assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]' assert output == expected, error() ^ is = False @@ -1 +1 @@ -[-0,0,32,2052,32838,0,0,0,0,0,0] +[-2,-1,0,1,2,3,4,5,6,7,8,9] assert output == expected, error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16' 105\| ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet) 106\| """ 107\| ) 108\| 109\| with Then("I read the contents of the created table"): 110\| output = node.query( 111\| f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV" 112\| ).output 113\|> assert output == expected, error() 114\| 115\| finally: 116\| with Finally("I drop the table"): |
/parquet/datatypes/large string map | XFail 16s 715ms Will fail until the, https://github.com/apache/arrow/pull/35825, gets merged. | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map import_export(snapshot_name="large_string_map_structure", import_file=import_file) File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1188, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Received exception from server (version 24.12.2): Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA) (query: CREATE TABLE table_849658d7_0ff1_11f0_b84b_960004305699 ENGINE = MergeTree ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1188 in 'query' 1180\| assert message in r.output, error(r.output) 1181\| 1182\| if not ignore_exception: 1183\| if message is None or "Exception:" not in message: 1184\| with Then("check if output has exception") if steps else NullStep(): 1185\| if "Exception:" in r.output: 1186\| if raise_on_exception: 1187\| raise QueryRuntimeException(r.output) 1188\|> assert False, error(r.output) 1189\| 1190\| return r 1191\| |
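For reference, the two `NotImplemented` failures above (/parquet/chunked array and /parquet/datatypes/large string map) can be approximated with a minimal ClickHouse SQL sketch. The file paths, query shapes, and the expected `CANNOT_READ_ALL_DATA` error text come from the logged queries; the table names (`chunked_array_repro`, `large_string_map_repro`) and the `Map(String, Array(String))` column type are placeholders, since the actual schema of the test file is not shown in this report.

```sql
-- Sketch of /parquet/chunked array: insert a Parquet file whose nested data
-- Arrow returns as a chunked array. Column type below is a placeholder.
CREATE TABLE chunked_array_repro (data Map(String, Array(String)))
ENGINE = MergeTree ORDER BY tuple();

INSERT INTO chunked_array_repro
FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet'
FORMAT Parquet;
-- Expected on affected builds: Code: 33. DB::Exception: ... NotImplemented:
-- Nested data conversions not implemented for chunked array outputs (CANNOT_READ_ALL_DATA)

-- Sketch of /parquet/datatypes/large string map: read the brotli-compressed
-- large string map sample via the file() table function, as in the logged query.
CREATE TABLE large_string_map_repro
ENGINE = MergeTree ORDER BY tuple() AS
SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100;
-- Expected to fail with the same NotImplemented error until the upstream fix
-- (https://github.com/apache/arrow/pull/35825) is merged and picked up.
```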
Test Name | Result | Duration |
---|---|---|
/parquet | OK | 1h 24m |
/parquet/file | OK | 52m 40s |
/parquet/file/engine | OK | 52m 40s |
/parquet/file/engine/insert into engine | OK | 32m 4s |
/parquet/file/function | OK | 26m 2s |
/parquet/file/function/insert into function manual cast types | OK | 25m 19s |
/parquet/file/engine/select from engine | OK | 13m 4s |
/parquet/file/engine/engine to file to engine | OK | 44m 43s |
/parquet/file/function/insert into function auto cast types | OK | 26m 2s |
/parquet/file/function/select from function manual cast types | OK | 14m 29s |
/parquet/file/engine/insert into engine from file | OK | 30m 53s |
/parquet/file/function/select from function auto cast types | OK | 13m 0s |
/parquet/query | OK | 1h 10m |
/parquet/file/engine/engine select output to file | OK | 52m 40s |
/parquet/query/compression type | OK | 1h 10m |
/parquet/list in multiple chunks | OK | 13m 5s |
/parquet/url | OK | 54m 16s |
/parquet/query/compression type/=NONE | OK | 1h 10m |
/parquet/query/compression type/=GZIP | OK | 1h 10m |
/parquet/query/compression type/=LZ4 | OK | 1h 10m |
/parquet/query/compression type/=NONE /insert into memory table from file | OK | 12m 38s |
/parquet/query/compression type/=GZIP /insert into memory table from file | OK | 12m 38s |
/parquet/query/compression type/=LZ4 /insert into memory table from file | OK | 12m 33s |
/parquet/url/engine | OK | 53m 3s |
/parquet/url/function | OK | 27m 21s |
/parquet/url/engine/insert into engine | OK | 32m 52s |
/parquet/url/engine/select from engine | OK | 13m 0s |
/parquet/url/function/insert into function | OK | 25m 8s |
/parquet/url/engine/engine to file to engine | OK | 45m 4s |
/parquet/url/function/select from function manual cast types | OK | 27m 21s |
/parquet/url/engine/insert into engine from file | OK | 43m 10s |
/parquet/url/function/select from function auto cast types | OK | 24m 52s |
/parquet/url/engine/engine select output to file | OK | 53m 3s |
/parquet/query/compression type/=LZ4 /insert into mergetree table from file | OK | 10m 47s |
/parquet/query/compression type/=GZIP /insert into mergetree table from file | OK | 10m 46s |
/parquet/query/compression type/=NONE /insert into mergetree table from file | OK | 10m 45s |
/parquet/mysql | OK | 1m 15s |
/parquet/mysql/compression type | OK | 1m 15s |
/parquet/mysql/compression type/=NONE | OK | 1m 12s |
/parquet/mysql/compression type/=GZIP | OK | 1m 14s |
/parquet/mysql/compression type/=LZ4 | OK | 1m 14s |
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine | OK | 37s 312ms |
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine | OK | 38s 268ms |
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine | OK | 38s 119ms |
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function | OK | 36s 590ms |
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function | OK | 36s 569ms |
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function | OK | 33s 867ms |
/parquet/postgresql | OK | 1m 0s |
/parquet/postgresql/compression type | OK | 1m 0s |
/parquet/postgresql/compression type/=NONE | OK | 59s 997ms |
/parquet/postgresql/compression type/=GZIP | OK | 59s 710ms |
/parquet/postgresql/compression type/=LZ4 | OK | 1m 0s |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail | 30s 18ms |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail | 30s 198ms |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail | 29s 939ms |
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function | OK | 29s 723ms |
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function | OK | 29s 827ms |
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function | OK | 29s 258ms |
/parquet/remote | OK | 25m 32s |
/parquet/remote/compression type | OK | 25m 32s |
/parquet/remote/compression type/=NONE | OK | 25m 27s |
/parquet/remote/compression type/=GZIP | OK | 25m 32s |
/parquet/remote/compression type/=LZ4 | OK | 25m 30s |
/parquet/remote/compression type/=NONE /outline | OK | 25m 26s |
/parquet/remote/compression type/=GZIP /outline | OK | 25m 31s |
/parquet/remote/compression type/=LZ4 /outline | OK | 25m 30s |
/parquet/remote/compression type/=NONE /outline/insert into function | OK | 10m 21s |
/parquet/remote/compression type/=GZIP /outline/insert into function | OK | 10m 19s |
/parquet/remote/compression type/=LZ4 /outline/insert into function | OK | 10m 17s |
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file | OK | 7m 8s |
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file | OK | 7m 3s |
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file | OK | 7m 4s |
/parquet/remote/compression type/=LZ4 /outline/select from function | OK | 15m 12s |
/parquet/remote/compression type/=GZIP /outline/select from function | OK | 15m 11s |
/parquet/remote/compression type/=NONE /outline/select from function | OK | 15m 5s |
/parquet/query/compression type/=NONE /insert into distributed table from file | OK | 4m 55s |
/parquet/query/compression type/=LZ4 /insert into distributed table from file | OK | 4m 55s |
/parquet/query/compression type/=GZIP /insert into distributed table from file | OK | 4m 57s |
/parquet/query/compression type/=NONE /select from memory table into file | OK | 10m 18s |
/parquet/query/compression type/=LZ4 /select from memory table into file | OK | 10m 18s |
/parquet/query/compression type/=GZIP /select from memory table into file | OK | 10m 19s |
/parquet/chunked array | XFail | 41s 120ms |
/parquet/broken | OK | 866ms |
/parquet/broken/file | Skip | 24ms |
/parquet/broken/read broken bigint | Skip | 59ms |
/parquet/broken/read broken date | Skip | 78ms |
/parquet/broken/read broken int | Skip | 75ms |
/parquet/broken/read broken smallint | Skip | 23ms |
/parquet/broken/read broken timestamp ms | Skip | 31ms |
/parquet/broken/read broken timestamp us | Skip | 23ms |
/parquet/broken/read broken tinyint | Skip | 62ms |
/parquet/broken/read broken ubigint | Skip | 74ms |
/parquet/broken/read broken uint | Skip | 12ms |
/parquet/broken/read broken usmallint | Skip | 22ms |
/parquet/broken/read broken utinyint | Skip | 37ms |
/parquet/broken/string | Skip | 31ms |
/parquet/encoding | OK | 27s 678ms |
/parquet/encoding/deltabytearray1 | OK | 5s 137ms |
/parquet/encoding/deltabytearray2 | OK | 4s 263ms |
/parquet/encoding/deltalengthbytearray | OK | 3s 295ms |
/parquet/encoding/dictionary | OK | 3s 373ms |
/parquet/encoding/plain | OK | 3s 600ms |
/parquet/encoding/plainrlesnappy | OK | 4s 772ms |
/parquet/encoding/rleboolean | OK | 3s 80ms |
/parquet/compression | OK | 1m 11s |
/parquet/compression/arrow snappy | OK | 2s 977ms |
/parquet/compression/brotli | OK | 3s 105ms |
/parquet/compression/gzippages | OK | 6s 313ms |
/parquet/compression/largegzip | OK | 3s 514ms |
/parquet/compression/lz4 hadoop | OK | 3s 230ms |
/parquet/compression/lz4 hadoop large | OK | 3s 145ms |
/parquet/compression/lz4 non hadoop | OK | 3s 83ms |
/parquet/compression/lz4 raw | OK | 3s 363ms |
/parquet/compression/lz4 raw large | OK | 2s 963ms |
/parquet/compression/lz4pages | OK | 6s 137ms |
/parquet/compression/nonepages | OK | 6s 871ms |
/parquet/compression/snappypages | OK | 7s 519ms |
/parquet/compression/snappyplain | OK | 4s 467ms |
/parquet/compression/snappyrle | OK | 3s 842ms |
/parquet/compression/zstd | OK | 3s 897ms |
/parquet/compression/zstdpages | OK | 6s 689ms |
/parquet/datatypes | OK | 4m 19s |
/parquet/datatypes/arrowtimestamp | OK | 3s 366ms |
/parquet/datatypes/arrowtimestampms | OK | 2s 895ms |
/parquet/datatypes/binary | OK | 2s 927ms |
/parquet/datatypes/binary string | OK | 3s 98ms |
/parquet/datatypes/blob | OK | 3s 550ms |
/parquet/datatypes/boolean | OK | 3s 345ms |
/parquet/datatypes/byte array | OK | 3s 901ms |
/parquet/datatypes/columnname | OK | 3s 226ms |
/parquet/datatypes/columnwithnull | OK | 3s 201ms |
/parquet/datatypes/columnwithnull2 | OK | 3s 303ms |
/parquet/datatypes/date | OK | 2s 887ms |
/parquet/datatypes/decimal with filter | OK | 3s 896ms |
/parquet/datatypes/decimalvariousfilters | OK | 3s 200ms |
/parquet/datatypes/decimalwithfilter2 | OK | 3s 471ms |
/parquet/datatypes/enum | OK | 4s 608ms |
/parquet/datatypes/enum2 | OK | 3s 683ms |
/parquet/datatypes/fixed length decimal | OK | 3s 714ms |
/parquet/datatypes/fixed length decimal legacy | OK | 4s 302ms |
/parquet/datatypes/fixedstring | OK | 3s 585ms |
/parquet/datatypes/float16 | XFail | 1s 11ms |
/parquet/datatypes/h2oai | OK | 3s 903ms |
/parquet/datatypes/hive | OK | 6s 796ms |
/parquet/datatypes/int32 | OK | 4s 142ms |
/parquet/datatypes/int32 decimal | OK | 4s 439ms |
/parquet/datatypes/int64 | OK | 3s 262ms |
/parquet/datatypes/int64 decimal | OK | 3s 793ms |
/parquet/datatypes/json | OK | 4s 745ms |
/parquet/datatypes/large string map | XFail | 16s 715ms |
/parquet/datatypes/largedouble | OK | 3s 488ms |
/parquet/datatypes/manydatatypes | OK | 2s 99ms |
/parquet/datatypes/manydatatypes2 | OK | 3s 125ms |
/parquet/datatypes/maps | OK | 2s 376ms |
/parquet/datatypes/nameswithemoji | OK | 2s 302ms |
/parquet/datatypes/nandouble | OK | 4s 22ms |
/parquet/datatypes/negativeint64 | OK | 2s 322ms |
/parquet/datatypes/nullbyte | OK | 2s 411ms |
/parquet/datatypes/nullbytemultiple | OK | 2s 815ms |
/parquet/datatypes/nullsinid | OK | 2s 825ms |
/parquet/datatypes/pandasdecimal | OK | 3s 62ms |
/parquet/datatypes/pandasdecimaldate | OK | 3s 322ms |
/parquet/query/compression type/=NONE /select from mergetree table into file | OK | 6m 33s |
/parquet/query/compression type/=LZ4 /select from mergetree table into file | OK | 6m 30s |
/parquet/datatypes/parquetgo | OK | 2s 499ms |
/parquet/query/compression type/=GZIP /select from mergetree table into file | OK | 6m 31s |
/parquet/datatypes/selectdatewithfilter | OK | 48s 29ms |
/parquet/datatypes/singlenull | OK | 2s 657ms |
/parquet/datatypes/sparkv21 | OK | 3s 363ms |
/parquet/datatypes/sparkv22 | OK | 3s 34ms |
/parquet/datatypes/statdecimal | OK | 2s 589ms |
/parquet/datatypes/string | OK | 3s 13ms |
/parquet/datatypes/string int list inconsistent offset multiple batches | OK | 19s 337ms |
/parquet/datatypes/stringtypes | OK | 5s 163ms |
/parquet/datatypes/struct | OK | 2s 793ms |
/parquet/datatypes/supporteduuid | OK | 2s 959ms |
/parquet/datatypes/timestamp1 | OK | 2s 315ms |
/parquet/datatypes/timestamp2 | OK | 2s 591ms |
/parquet/datatypes/timezone | OK | 2s 937ms |
/parquet/datatypes/unsigned | OK | 5s 228ms |
/parquet/datatypes/unsupportednull | OK | 986ms |
/parquet/complex | OK | 1m 1s |
/parquet/complex/arraystring | OK | 2s 730ms |
/parquet/complex/big tuple with nulls | OK | 2s 734ms |
/parquet/complex/bytearraydictionary | OK | 2s 728ms |
/parquet/complex/complex null | OK | 2s 886ms |
/parquet/complex/lagemap | OK | 2s 668ms |
/parquet/complex/largenestedarray | OK | 3s 6ms |
/parquet/complex/largestruct | OK | 2s 554ms |
/parquet/complex/largestruct2 | OK | 3s 11ms |
/parquet/complex/largestruct3 | OK | 2s 581ms |
/parquet/complex/list | OK | 2s 727ms |
/parquet/complex/nested array | OK | 2s 830ms |
/parquet/complex/nested map | OK | 2s 533ms |
/parquet/complex/nestedallcomplex | OK | 3s 254ms |
/parquet/complex/nestedarray2 | OK | 2s 984ms |
/parquet/complex/nestedstruct | OK | 3s 488ms |
/parquet/complex/nestedstruct2 | OK | 3s 181ms |
/parquet/complex/nestedstruct3 | OK | 3s 951ms |
/parquet/complex/nestedstruct4 | OK | 4s 762ms |
/parquet/complex/tupleofnulls | OK | 3s 765ms |
/parquet/complex/tuplewithdatetime | OK | 3s 77ms |
/parquet/cache | OK | 6s 860ms |
/parquet/cache/cache1 | OK | 3s 140ms |
/parquet/cache/cache2 | OK | 3s 656ms |
/parquet/glob | OK | 1m 38s |
/parquet/glob/fastparquet globs | OK | 24s 285ms |
/parquet/glob/glob1 | OK | 4s 956ms |
/parquet/glob/glob2 | OK | 5s 152ms |
/parquet/glob/glob with multiple elements | OK | 1s 6ms |
/parquet/glob/million extensions | OK | 1m 3s |
/parquet/rowgroups | OK | 6s 575ms |
/parquet/rowgroups/manyrowgroups | OK | 2s 832ms |
/parquet/rowgroups/manyrowgroups2 | OK | 3s 719ms |
/parquet/encrypted | Skip | 36ms |
/parquet/fastparquet | OK | 190ms |
/parquet/fastparquet/airlines | Skip | 11ms |
/parquet/fastparquet/baz | Skip | 12ms |
/parquet/fastparquet/empty date | Skip | 7ms |
/parquet/fastparquet/evo | Skip | 14ms |
/parquet/fastparquet/fastparquet | Skip | 8ms |
/parquet/read and write | OK | 27m 33s |
/parquet/read and write/read and write parquet file | OK | 27m 33s |
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file | OK | 5m 2s |
/parquet/query/compression type/=NONE /select from replicated mergetree table into file | OK | 5m 1s |
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file | OK | 5m 2s |
/parquet/column related errors | OK | 2s 827ms |
/parquet/column related errors/check error with 500 columns | OK | 2s 825ms |
/parquet/multi chunk upload | Skip | 4ms |
/parquet/query/compression type/=NONE /select from distributed table into file | OK | 6m 16s |
/parquet/query/compression type/=LZ4 /select from distributed table into file | OK | 6m 17s |
/parquet/query/compression type/=GZIP /select from distributed table into file | OK | 6m 16s |
/parquet/query/compression type/=NONE /select from mat view into file | OK | 4m 49s |
/parquet/query/compression type/=LZ4 /select from mat view into file | OK | 4m 49s |
/parquet/query/compression type/=GZIP /select from mat view into file | OK | 4m 46s |
/parquet/query/compression type/=GZIP /insert into table with projection from file | OK | 1m 46s |
/parquet/query/compression type/=NONE /insert into table with projection from file | OK | 1m 45s |
/parquet/query/compression type/=LZ4 /insert into table with projection from file | OK | 1m 45s |
Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922