Copyright 2025, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Date | Jul 28, 2025 9:46 |
Duration | 50m 59s |
Framework | TestFlows 2.0.250110.1002922 |
Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#0/d32d0074004db61e346611c777e26532a456fe2f/regression/aarch64/with_analyzer/zookeeper/without_thread_fuzzer/parquet/
project | Altinity/ClickHouse |
project.id | 159717931 |
package | https://s3.amazonaws.com/altinity-build-artifacts/25.3/d32d0074004db61e346611c777e26532a456fe2f/package_aarch64/clickhouse-common-static_25.3.6.10034.altinitystable_arm64.deb |
version | 25.3.6.10034.altinitystable |
user.name | zvonand |
repository | https://github.com/Altinity/clickhouse-regression |
commit.hash | 5723e20cbc49b347114c7b90c7316a44dafa5328 |
job.name | Parquet |
job.retry | 1 |
job.url | https://github.com/Altinity/ClickHouse/actions/runs/16564472800 |
arch | aarch64 |
local | True |
clickhouse_version | None |
clickhouse_path | https://s3.amazonaws.com/altinity-build-artifacts/25.3/d32d0074004db61e346611c777e26532a456fe2f/package_aarch64/clickhouse-common-static_25.3.6.10034.altinitystable_arm64.deb |
as_binary | False |
base_os | None |
keeper_path | None |
zookeeper_version | None |
use_keeper | False |
stress | False |
collect_service_logs | True |
thread_fuzzer | False |
with_analyzer | True |
reuse_env | False |
storages | None |
minio_uri | Secret(name='minio_uri') |
minio_root_user | Secret(name='minio_root_user') |
minio_root_password | Secret(name='minio_root_password') |
aws_s3_bucket | None |
aws_s3_region | Secret(name='aws_s3_region') |
aws_s3_key_id | Secret(name='aws_s3_key_id') |
aws_s3_access_key | Secret(name='aws_s3_access_key') |
gcs_uri | None |
gcs_key_id | None |
gcs_key_secret | None |
azure_account_name | None |
azure_storage_key | None |
azure_container | None |
native_parquet_reader | False |
stress_bloom | False |
Units | Skip | OK | Fail | XFail |
---|---|---|---|---|
Modules | | | | |
Suites | | | | |
Features | | | | |
Scenarios | | | | |
Checks | | | | |
Examples | | | | |
Steps | | | | |
Test Name | Result | Message |
---|---|---|
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail 16s 242ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 828, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 898, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 898 in 'execute_query' 890\| with values() as that: 891\| snapshot_result = snapshot( 892\| "\n" + r.output.strip() + "\n", 893\| id=snapshot_id, 894\| name=snapshot_name, 895\| encoder=str, 896\| mode=snapshot.CHECK, 897\| ) 898\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail 16s 320ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 828, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 898, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 898 in 'execute_query' 890\| with values() as that: 891\| snapshot_result = snapshot( 892\| "\n" + r.output.strip() + "\n", 893\| id=snapshot_id, 894\| name=snapshot_name, 895\| encoder=str, 896\| mode=snapshot.CHECK, 897\| ) 898\|> assert that(snapshot_result), error() |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail 19s 615ms This fails because of the difference in snapshot values. We used to capture the datetime value `0` be converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00. But when steps are repeated manually, we can not reproduce it | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 828, in execute_query_step execute_query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 898, in execute_query assert that(snapshot_result), error() ^^^^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert that(snapshot_result), error() Assertion values assert that(snapshot_result), error() ^ is = SnapshotError( filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime snapshot_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, actual_value=""" {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"} {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"} {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"} {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"} {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"} """, diff=""" --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot +++ @@ -1,6 +1,6 @@ {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"} -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"} +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"} {"datetime":"2006-11-27 
02:50:49","toTypeName(datetime)":"DateTime"} {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"} """) assert that(snapshot_result), error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 898 in 'execute_query' 890\| with values() as that: 891\| snapshot_result = snapshot( 892\| "\n" + r.output.strip() + "\n", 893\| id=snapshot_id, 894\| name=snapshot_name, 895\| encoder=str, 896\| mode=snapshot.CHECK, 897\| ) 898\|> assert that(snapshot_result), error() |
/parquet/chunked array | XFail 15s 722ms Not supported | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1195, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 25.3.6.10034.altinitystable (altinity build)) (query: INSERT INTO table_67c11ca0_6b9b_11f0_b101_9200064815d0 FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1195 in 'query' 1187\| assert message in r.output, error(r.output) 1188\| 1189\| if not ignore_exception: 1190\| if message is None or "Exception:" not in message: 1191\| with Then("check if output has exception") if steps else NullStep(): 1192\| if "Exception:" in r.output: 1193\| if raise_on_exception: 1194\| raise QueryRuntimeException(r.output) 1195\|> assert False, error(r.output) 1196\| 1197\| return r 1198\| |
/parquet/datatypes/float16 | XFail 717ms ClickHouse does not import FLOAT16 properly | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 113, in float16 assert output == expected, error() ^^^^^^^^^^^^^^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert output == expected, error() Assertion values assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8]' assert output == expected, error() ^ is '[-2,-1,0,1,2,3,4,5,6,7,8,9]' assert output == expected, error() ^ is = False @@ -1 +1 @@ -[-2,-1,0,1,2,3,4,5,6,7,8] +[-2,-1,0,1,2,3,4,5,6,7,8,9] assert output == expected, error() ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py', line 113 in 'float16' 105\| ORDER BY tuple() AS SELECT floatfield FROM file('{import_file}', Parquet) 106\| """ 107\| ) 108\| 109\| with Then("I read the contents of the created table"): 110\| output = node.query( 111\| f"SELECT groupArray(round(*)) FROM {table_name} FORMAT TSV" 112\| ).output 113\|> assert output == expected, error() 114\| 115\| finally: 116\| with Finally("I drop the table"): |
/parquet/datatypes/large string map | XFail 6s 277ms Will fail until the, https://github.com/apache/arrow/pull/35825, gets merged. | AssertionError Traceback (most recent call last): File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap self._bootstrap_inner() File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner self.run() File "/usr/lib/python3.12/threading.py", line 1010, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 929, in feature scenario() File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 801, in large_string_map import_export(snapshot_name="large_string_map_structure", import_file=import_file) File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 36, in import_export node.query( File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1195, in query assert False, error(r.output) ^^^^^ AssertionError: Oops! Assertion failed The following assertion was not satisfied assert False, error(r.output) Description Received exception from server (version 25.3.6): Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA) (query: CREATE TABLE table_bef6661c_6b9b_11f0_a563_9200064815d0 ENGINE = MergeTree ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated ) Assertion values assert False, error(r.output) ^ is False Where File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1195 in 'query' 1187\| assert message in r.output, error(r.output) 1188\| 1189\| if not ignore_exception: 1190\| if message is None or "Exception:" not in message: 1191\| with Then("check if output has exception") if steps else NullStep(): 1192\| if "Exception:" in r.output: 1193\| if raise_on_exception: 1194\| raise QueryRuntimeException(r.output) 1195\|> assert False, error(r.output) 1196\| 1197\| return r 1198\| |
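The three postgresql XFail entries above all hinge on how a DateTime value of `0` is rendered. As a point of reference only (this is not part of the test suite), the boundary values visible in the snapshot diffs line up with the edges of ClickHouse's 32-bit DateTime range, which a short Python check illustrates:

```python
from datetime import datetime, timezone

# ClickHouse DateTime stores a 32-bit Unix timestamp, so its representable
# range runs from epoch 0 up to 2**32 - 1 seconds (the report renders the
# values in the server's local time zone).
print(datetime.fromtimestamp(0, tz=timezone.utc))          # 1970-01-01 00:00:00+00:00
print(datetime.fromtimestamp(2**32 - 1, tz=timezone.utc))  # 2106-02-07 06:28:15+00:00
print(datetime.fromtimestamp(2**32, tz=timezone.utc))      # 2106-02-07 06:28:16+00:00
```

The `1970-01-01 01:00:00` in the actual output is consistent with epoch `0` shown in a UTC+1 server time zone, while the previously captured `2106-02-07 06:28:16` matches epoch `2**32`, i.e. a zero that wrapped around the 32-bit range; this is an observation about the values, not a confirmed root cause.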
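Both `/parquet/chunked array` and `/parquet/datatypes/large string map` fail with the same Arrow-side error, `NotImplemented: Nested data conversions not implemented for chunked array outputs`. Purely for illustration, the sketch below shows what a chunked, nested Arrow column looks like when written to Parquet with pyarrow; the file name and values are hypothetical, and a file this small will not by itself force the chunked-output read path that triggers the error:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A nested (list-typed) column assembled from several chunks.
chunks = [
    pa.array([[1, 2], [3]], type=pa.list_(pa.int64())),
    pa.array([[4, 5, 6]], type=pa.list_(pa.int64())),
]
nested = pa.chunked_array(chunks)

# Write the chunked nested column out as a Parquet file.
table = pa.Table.from_arrays([nested], names=["nested"])
pq.write_table(table, "chunked_nested_example.parquet")
```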
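For `/parquet/datatypes/float16`, the expected array is `[-2,-1,0,1,2,3,4,5,6,7,8,9]` but ClickHouse returns it without the final `9`. A minimal sketch of how a comparable FLOAT16 fixture could be generated with pyarrow follows; the column name `floatfield` comes from the traceback, the actual test file may have been produced differently, and writing HALF_FLOAT requires a pyarrow build with Parquet FLOAT16 support:

```python
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Twelve half-precision values, -2 through 9, mirroring the expected
# groupArray output in the failing assertion.
values = np.arange(-2, 10, dtype=np.float16)
table = pa.table({"floatfield": pa.array(values, type=pa.float16())})
pq.write_table(table, "float16_example.parquet")
```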
Test Name | Result | Duration |
---|---|---|
/parquet | OK | 50m 59s |
/parquet/file | OK | 32m 35s |
/parquet/file/engine | OK | 32m 35s |
/parquet/file/engine/insert into engine | OK | 19m 34s |
/parquet/file/function | OK | 15m 18s |
/parquet/file/function/insert into function manual cast types | OK | 14m 51s |
/parquet/query | OK | 43m 4s |
/parquet/query/compression type | OK | 43m 4s |
/parquet/file/engine/select from engine | OK | 8m 29s |
/parquet/file/function/insert into function auto cast types | OK | 15m 18s |
/parquet/file/engine/engine to file to engine | OK | 27m 48s |
/parquet/file/function/select from function manual cast types | OK | 9m 14s |
/parquet/query/compression type/=NONE | OK | 43m 3s |
/parquet/query/compression type/=NONE /insert into memory table from file | OK | 8m 14s |
/parquet/file/engine/insert into engine from file | OK | 18m 55s |
/parquet/query/compression type/=LZ4 | OK | 43m 3s |
/parquet/list in multiple chunks | OK | 29s 918ms |
/parquet/file/engine/engine select output to file | OK | 32m 35s |
/parquet/file/function/select from function auto cast types | OK | 8m 33s |
/parquet/query/compression type/=GZIP | OK | 43m 4s |
/parquet/url | OK | 33m 37s |
/parquet/query/compression type/=LZ4 /insert into memory table from file | OK | 8m 18s |
/parquet/query/compression type/=GZIP /insert into memory table from file | OK | 8m 19s |
/parquet/url/engine | OK | 32m 53s |
/parquet/url/function | OK | 16m 4s |
/parquet/url/engine/insert into engine | OK | 20m 5s |
/parquet/url/engine/select from engine | OK | 8m 30s |
/parquet/url/function/insert into function | OK | 14m 47s |
/parquet/url/engine/engine to file to engine | OK | 28m 0s |
/parquet/url/function/select from function manual cast types | OK | 16m 4s |
/parquet/url/function/select from function auto cast types | OK | 14m 24s |
/parquet/url/engine/insert into engine from file | OK | 26m 45s |
/parquet/url/engine/engine select output to file | OK | 32m 52s |
/parquet/mysql | OK | 49s 951ms |
/parquet/mysql/compression type | OK | 49s 873ms |
/parquet/mysql/compression type/=NONE | OK | 47s 248ms |
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine | OK | 27s 705ms |
/parquet/mysql/compression type/=GZIP | OK | 49s 227ms |
/parquet/mysql/compression type/=LZ4 | OK | 49s 847ms |
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine | OK | 28s 537ms |
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine | OK | 29s 672ms |
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function | OK | 19s 485ms |
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function | OK | 20s 584ms |
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function | OK | 20s 171ms |
/parquet/postgresql | OK | 43s 953ms |
/parquet/postgresql/compression type | OK | 43s 874ms |
/parquet/postgresql/compression type/=NONE | OK | 43s 791ms |
/parquet/postgresql/compression type/=GZIP | OK | 39s 377ms |
/parquet/postgresql/compression type/=LZ4 | OK | 39s 12ms |
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine | XFail | 16s 242ms |
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine | XFail | 16s 320ms |
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine | XFail | 19s 615ms |
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function | OK | 22s 939ms |
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function | OK | 22s 464ms |
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function | OK | 23s 943ms |
/parquet/remote | OK | 21m 16s |
/parquet/remote/compression type | OK | 21m 16s |
/parquet/remote/compression type/=NONE | OK | 21m 12s |
/parquet/remote/compression type/=GZIP | OK | 21m 13s |
/parquet/remote/compression type/=LZ4 | OK | 21m 16s |
/parquet/remote/compression type/=NONE /outline | OK | 21m 12s |
/parquet/remote/compression type/=GZIP /outline | OK | 21m 13s |
/parquet/remote/compression type/=LZ4 /outline | OK | 21m 16s |
/parquet/remote/compression type/=NONE /outline/insert into function | OK | 7m 51s |
/parquet/remote/compression type/=GZIP /outline/insert into function | OK | 7m 51s |
/parquet/remote/compression type/=LZ4 /outline/insert into function | OK | 7m 53s |
/parquet/query/compression type/=NONE /insert into mergetree table from file | OK | 5m 55s |
/parquet/query/compression type/=LZ4 /insert into mergetree table from file | OK | 5m 54s |
/parquet/query/compression type/=GZIP /insert into mergetree table from file | OK | 5m 51s |
/parquet/remote/compression type/=GZIP /outline/select from function | OK | 13m 21s |
/parquet/remote/compression type/=NONE /outline/select from function | OK | 13m 21s |
/parquet/remote/compression type/=LZ4 /outline/select from function | OK | 13m 23s |
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file | OK | 4m 27s |
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file | OK | 4m 26s |
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file | OK | 4m 30s |
/parquet/query/compression type/=NONE /insert into distributed table from file | OK | 3m 30s |
/parquet/query/compression type/=GZIP /insert into distributed table from file | OK | 3m 30s |
/parquet/query/compression type/=LZ4 /insert into distributed table from file | OK | 3m 27s |
/parquet/query/compression type/=GZIP /select from memory table into file | OK | 6m 22s |
/parquet/query/compression type/=NONE /select from memory table into file | OK | 6m 20s |
/parquet/query/compression type/=LZ4 /select from memory table into file | OK | 6m 20s |
/parquet/chunked array | XFail | 15s 722ms |
/parquet/broken | OK | 550ms |
/parquet/broken/file | Skip | 7ms |
/parquet/broken/read broken bigint | Skip | 15ms |
/parquet/broken/read broken date | Skip | 37ms |
/parquet/broken/read broken int | Skip | 18ms |
/parquet/broken/read broken smallint | Skip | 20ms |
/parquet/broken/read broken timestamp ms | Skip | 2ms |
/parquet/broken/read broken timestamp us | Skip | 34ms |
/parquet/broken/read broken tinyint | Skip | 19ms |
/parquet/broken/read broken ubigint | Skip | 20ms |
/parquet/broken/read broken uint | Skip | 21ms |
/parquet/broken/read broken usmallint | Skip | 51ms |
/parquet/broken/read broken utinyint | Skip | 25ms |
/parquet/broken/string | Skip | 32ms |
/parquet/encoding | OK | 17s 842ms |
/parquet/encoding/deltabytearray1 | OK | 3s 107ms |
/parquet/encoding/deltabytearray2 | OK | 2s 695ms |
/parquet/encoding/deltalengthbytearray | OK | 2s 23ms |
/parquet/encoding/dictionary | OK | 2s 302ms |
/parquet/encoding/plain | OK | 2s 248ms |
/parquet/encoding/plainrlesnappy | OK | 3s 263ms |
/parquet/encoding/rleboolean | OK | 2s 131ms |
/parquet/compression | OK | 48s 422ms |
/parquet/compression/arrow snappy | OK | 2s 306ms |
/parquet/compression/brotli | OK | 2s 219ms |
/parquet/compression/gzippages | OK | 4s 605ms |
/parquet/compression/largegzip | OK | 2s 458ms |
/parquet/compression/lz4 hadoop | OK | 2s 446ms |
/parquet/compression/lz4 hadoop large | OK | 2s 241ms |
/parquet/compression/lz4 non hadoop | OK | 2s 401ms |
/parquet/compression/lz4 raw | OK | 2s 38ms |
/parquet/compression/lz4 raw large | OK | 2s 350ms |
/parquet/compression/lz4pages | OK | 4s 413ms |
/parquet/compression/nonepages | OK | 4s 414ms |
/parquet/compression/snappypages | OK | 4s 688ms |
/parquet/compression/snappyplain | OK | 2s 331ms |
/parquet/compression/snappyrle | OK | 2s 298ms |
/parquet/compression/zstd | OK | 2s 249ms |
/parquet/compression/zstdpages | OK | 4s 782ms |
/parquet/datatypes | OK | 3m 49s |
/parquet/datatypes/arrowtimestamp | OK | 2s 50ms |
/parquet/datatypes/arrowtimestampms | OK | 2s 116ms |
/parquet/datatypes/binary | OK | 2s 426ms |
/parquet/datatypes/binary string | OK | 2s 140ms |
/parquet/datatypes/blob | OK | 2s 138ms |
/parquet/datatypes/boolean | OK | 2s 455ms |
/parquet/datatypes/byte array | OK | 2s 152ms |
/parquet/datatypes/columnname | OK | 2s 299ms |
/parquet/datatypes/columnwithnull | OK | 2s 396ms |
/parquet/datatypes/columnwithnull2 | OK | 2s 444ms |
/parquet/datatypes/date | OK | 2s 186ms |
/parquet/datatypes/decimal with filter | OK | 2s 728ms |
/parquet/datatypes/decimalvariousfilters | OK | 2s 318ms |
/parquet/datatypes/decimalwithfilter2 | OK | 2s 56ms |
/parquet/datatypes/enum | OK | 2s 821ms |
/parquet/datatypes/enum2 | OK | 2s 356ms |
/parquet/datatypes/fixed length decimal | OK | 2s 426ms |
/parquet/datatypes/fixed length decimal legacy | OK | 2s 145ms |
/parquet/datatypes/fixedstring | OK | 2s 412ms |
/parquet/datatypes/float16 | XFail | 717ms |
/parquet/datatypes/h2oai | OK | 2s 296ms |
/parquet/datatypes/hive | OK | 4s 548ms |
/parquet/datatypes/int32 | OK | 2s 476ms |
/parquet/datatypes/int32 decimal | OK | 2s 289ms |
/parquet/datatypes/int64 | OK | 2s 281ms |
/parquet/datatypes/int64 decimal | OK | 2s 524ms |
/parquet/datatypes/json | OK | 2s 245ms |
/parquet/datatypes/large string map | XFail | 6s 277ms |
/parquet/datatypes/largedouble | OK | 2s 943ms |
/parquet/datatypes/manydatatypes | OK | 2s 308ms |
/parquet/datatypes/manydatatypes2 | OK | 3s 422ms |
/parquet/datatypes/maps | OK | 2s 306ms |
/parquet/datatypes/nameswithemoji | OK | 2s 251ms |
/parquet/datatypes/nandouble | OK | 2s 555ms |
/parquet/datatypes/negativeint64 | OK | 3s 954ms |
/parquet/datatypes/nullbyte | OK | 2s 245ms |
/parquet/datatypes/nullbytemultiple | OK | 2s 162ms |
/parquet/datatypes/nullsinid | OK | 2s 546ms |
/parquet/datatypes/pandasdecimal | OK | 2s 350ms |
/parquet/datatypes/pandasdecimaldate | OK | 2s 282ms |
/parquet/datatypes/parquetgo | OK | 2s 430ms |
/parquet/datatypes/selectdatewithfilter | OK | 1m 35s |
/parquet/datatypes/singlenull | OK | 1s 792ms |
/parquet/datatypes/sparkv21 | OK | 2s 466ms |
/parquet/datatypes/sparkv22 | OK | 1s 344ms |
/parquet/datatypes/statdecimal | OK | 1s 379ms |
/parquet/datatypes/string | OK | 1s 876ms |
/parquet/datatypes/string int list inconsistent offset multiple batches | OK | 7s 279ms |
/parquet/datatypes/stringtypes | OK | 1s 797ms |
/parquet/datatypes/struct | OK | 1s 386ms |
/parquet/datatypes/supporteduuid | OK | 1s 930ms |
/parquet/datatypes/timestamp1 | OK | 1s 693ms |
/parquet/datatypes/timestamp2 | OK | 1s 739ms |
/parquet/datatypes/timezone | OK | 2s 65ms |
/parquet/datatypes/unsigned | OK | 3s 353ms |
/parquet/query/compression type/=NONE /select from mergetree table into file | OK | 4m 1s |
/parquet/query/compression type/=GZIP /select from mergetree table into file | OK | 4m 3s |
/parquet/query/compression type/=LZ4 /select from mergetree table into file | OK | 4m 1s |
/parquet/datatypes/unsupportednull | OK | 174ms |
/parquet/complex | OK | 34s 196ms |
/parquet/complex/arraystring | OK | 1s 172ms |
/parquet/complex/big tuple with nulls | OK | 1s 158ms |
/parquet/complex/bytearraydictionary | OK | 1s 185ms |
/parquet/complex/complex null | OK | 2s 104ms |
/parquet/complex/lagemap | OK | 2s 548ms |
/parquet/complex/largenestedarray | OK | 1s 762ms |
/parquet/complex/largestruct | OK | 1s 478ms |
/parquet/complex/largestruct2 | OK | 1s 905ms |
/parquet/complex/largestruct3 | OK | 1s 458ms |
/parquet/complex/list | OK | 1s 741ms |
/parquet/complex/nested array | OK | 1s 787ms |
/parquet/complex/nested map | OK | 1s 720ms |
/parquet/complex/nestedallcomplex | OK | 2s 94ms |
/parquet/complex/nestedarray2 | OK | 1s 902ms |
/parquet/complex/nestedstruct | OK | 1s 696ms |
/parquet/complex/nestedstruct2 | OK | 1s 586ms |
/parquet/complex/nestedstruct3 | OK | 1s 882ms |
/parquet/complex/nestedstruct4 | OK | 1s 854ms |
/parquet/complex/tupleofnulls | OK | 1s 707ms |
/parquet/complex/tuplewithdatetime | OK | 1s 405ms |
/parquet/cache | OK | 3s 414ms |
/parquet/cache/cache1 | OK | 1s 679ms |
/parquet/cache/cache2 | OK | 1s 692ms |
/parquet/glob | OK | 50s 649ms |
/parquet/glob/fastparquet globs | OK | 8s 291ms |
/parquet/glob/glob1 | OK | 2s 725ms |
/parquet/glob/glob2 | OK | 3s 112ms |
/parquet/glob/glob with multiple elements | OK | 550ms |
/parquet/glob/million extensions | OK | 35s 951ms |
/parquet/rowgroups | OK | 3s 404ms |
/parquet/rowgroups/manyrowgroups | OK | 1s 709ms |
/parquet/rowgroups/manyrowgroups2 | OK | 1s 633ms |
/parquet/encrypted | Skip | 1ms |
/parquet/fastparquet | OK | 29ms |
/parquet/fastparquet/airlines | Skip | 2ms |
/parquet/fastparquet/baz | Skip | 1ms |
/parquet/fastparquet/empty date | Skip | 3ms |
/parquet/fastparquet/evo | Skip | 3ms |
/parquet/fastparquet/fastparquet | Skip | 2ms |
/parquet/read and write | OK | 16m 26s |
/parquet/read and write/read and write parquet file | OK | 16m 26s |
/parquet/query/compression type/=NONE /select from replicated mergetree table into file | OK | 3m 4s |
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file | OK | 3m 4s |
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file | OK | 3m 3s |
/parquet/column related errors | OK | 1s 844ms |
/parquet/column related errors/check error with 500 columns | OK | 1s 840ms |
/parquet/multi chunk upload | Skip | 3ms |
/parquet/query/compression type/=NONE /select from distributed table into file | OK | 3m 22s |
/parquet/query/compression type/=LZ4 /select from distributed table into file | OK | 3m 21s |
/parquet/query/compression type/=GZIP /select from distributed table into file | OK | 3m 22s |
/parquet/query/compression type/=NONE /select from mat view into file | OK | 2m 57s |
/parquet/query/compression type/=LZ4 /select from mat view into file | OK | 2m 56s |
/parquet/query/compression type/=GZIP /select from mat view into file | OK | 2m 56s |
/parquet/query/compression type/=NONE /insert into table with projection from file | OK | 1m 7s |
/parquet/query/compression type/=LZ4 /insert into table with projection from file | OK | 1m 7s |
/parquet/query/compression type/=GZIP /insert into table with projection from file | OK | 1m 6s |
Generated by TestFlows Open-Source Test Framework v2.0.250110.1002922