Redshift to SSAS issue

5 years 10 months ago #12031 by createg
createg created the topic: Redshift to SSAS issue
I am able to set up the data source using the provider and test connection.

Connection string in datasource designer after setting up :
Provider=PGNP.1;Data Source=*******Redshift server ********;Persist Security Info=True;Password=**********;User ID=************;Initial Catalog=********;Extended Properties="PORT=8192;SSL=allow;"

but when trying to create the data source view, either the table is loaded without any column names, or it errors out with:

Value does not fall within the expected range.


Explore Data gives this error:

ERROR: syntax error at or near "FROM"
LINE 3: FROM [dvbi_core].[d_atv_device_types]
^

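The bracket-quoted identifiers in the failing statement are SQL Server syntax; PostgreSQL and Redshift quote identifiers with double quotes instead, which matches the parser error pointing at the FROM line. A minimal illustrative sketch of the translation involved (the helper function is hypothetical, not part of the provider):

```python
import re

def brackets_to_double_quotes(sql: str) -> str:
    """Rewrite SQL Server-style [identifier] quoting into the
    PostgreSQL/Redshift "identifier" form."""
    return re.sub(r'\[([^\]\[]+)\]', r'"\1"', sql)

print(brackets_to_double_quotes('SELECT * FROM [dvbi_core].[d_atv_device_types]'))
# SELECT * FROM "dvbi_core"."d_atv_device_types"
```

A later trace entry in this thread shows the same table being prepared successfully in the double-quoted form.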


Software details:
Desktop trial
Visual Studio 2010
Database: Redshift
Windows 7

Is there a solution to this? Has anyone had complete success using this provider?

5 years 10 months ago #12311 by Moderator
Moderator replied the topic: Re: Redshift to SSAS issue
Hello,
would you run PGNPProfiler (as Administrator on Windows 7) and check whether a detailed error description is available in the trace?

If the details do not help figure out why the issue occurs, would you send the DDL for the [dvbi_core].[d_atv_device_types] table and the XMLA file to our Support? We will try to reproduce the issue and help you resolve it.

To the best of my knowledge, other customers are using the Redshift OLEDB provider with SSAS successfully.

Thank you!

5 years 10 months ago #12432 by createg
createg replied the topic: Re: Redshift to SSAS issue
Here is the log.

One of the errors, "ERROR: Internal error: unsupported join type: 3", is caused by this filter:

table_schema=[PGTYPE_NAME, dvbi_core] and table_name=[PGTYPE_NAME, d_atv_device_types]

I ran the query separately with table_schema='dvbi_core' and table_name='d_atv_device_types' and it works, so I am not sure whether this is what is causing the problem.

This log was collected during the creation of the data source view.

Log

Executed SQL Parse:0.000 Prepare:0.000 Execute:77.745 GetRows:0.025 Rows:1
SELECT current_setting('search_path'), version();

Executed SQL Parse:0.000 Prepare:0.000 Execute:75.971 GetRows:0.003 Rows:12
SELECT implementation_info_name, integer_value, character_value FROM information_schema.sql_implementation_info;

Client SQL 2014/09/22 11:01:36.880 DF46B5C4
IDBSchemaRowset::GetRowset DBSCHEMA_TABLES
Executed SQL Parse:0.000 Prepare:0.000 Execute:236.359 GetRows:0.000 Rows:509
SELECT n.nspname AS table_schema, c.relname AS table_name, CASE WHEN relkind='r' THEN CASE WHEN n.nspname='pg_catalog' OR n.nspname='information_schema' THEN 'SYSTEM TABLE' ELSE 'TABLE' END ELSE CASE WHEN n.nspname='pg_catalog' OR n.nspname='information_schema' THEN 'SYSTEM VIEW' ELSE 'VIEW' END END AS table_type FROM pg_class c INNER JOIN pg_namespace n ON (c.relnamespace=n.oid) WHERE relkind IN ('r','v') ORDER BY table_type, table_schema, table_name

Client SQL 2014/09/22 11:01:37.124 7BE8A65A
IDBSchemaRowset::GetRowset(SCHEMA_FOREIGN_KEYS)
Executed SQL Parse:0.000 Prepare:0.000 Execute:82.211 GetRows:0.000 Rows:0
SELECT (SELECT n.nspname FROM pg_class c JOIN pg_namespace n ON c.relnamespace=n.oid where S.confrelid=c.oid) as PK_TABLE_SCHEMA, (SELECT C1.relname FROM pg_class C1 WHERE S.confrelid=C1.oid) AS PK_TABLE_NAME, (SELECT SS.conname FROM pg_constraint SS WHERE SS.conrelid=S.confrelid and SS.contype='p') as PK_NAME, T.schemaname as FK_TABLE_SCHEMA, C.relname as FK_TABLE_NAME, S.conname as FK_NAME, S.conkey as FK_INDKEY, S.confkey as PK_INDKEY, S.confupdtype as UPDATE_RULE, S.confdeltype as DELETE_RULE, S.condeferrable as DEFERRABLE, S.condeferred as DEFERRED FROM pg_constraint S INNER JOIN pg_class C ON (S.conrelid=C.oid) INNER JOIN pg_tables T ON (C.relname=T.tablename) WHERE contype='f'

Client SQL 2014/09/22 11:02:18.721 D1EAA01E
IDBSchemaRowset::GetRowset DBSCHEMA_TABLES,,?
Executed SQL Parse:0.000 Prepare:0.000 Execute:73.842 GetRows:0.000 Rows:1
SELECT * FROM (SELECT n.nspname AS table_schema, c.relname AS table_name, CASE WHEN relkind='r' THEN CASE WHEN n.nspname='pg_catalog' OR n.nspname='information_schema' THEN 'SYSTEM TABLE' ELSE 'TABLE' END ELSE CASE WHEN relkind='v' THEN CASE WHEN n.nspname='pg_catalog' OR n.nspname='information_schema' THEN 'SYSTEM VIEW' ELSE 'VIEW' END ELSE 'Z-SEQUENCE' END END AS table_type FROM pg_class c INNER JOIN pg_namespace n ON (c.relnamespace=n.oid) WHERE relkind IN ('r','v','S') ORDER BY table_type, table_schema, table_name) s WHERE table_name=[PGTYPE_NAME, d_atv_device_types]


Client SQL 2014/09/22 11:02:18.795 3532E5B7
SELECT * FROM (SELECT T.schemaname AS TABLE_SCHEMA, T.tablename AS TABLE_NAME, A.attname AS COLUMN_NAME, A.attnum AS ORDINAL_POSITION, A.atthasdef AS COLUMN_HASDEFAULT, A.atttypid AS DATA_TYPE, TY.typname AS TYPNAME, A.attnotnull AS NOT_NULL, A.attlen AS FIELD_LENGTH, A.atttypmod AS FIELD_MOD, D.adsrc AS COLUMN_DEFAULT FROM pg_attribute A INNER JOIN pg_class C on (A.attrelid=C.oid) INNER JOIN pg_tables T on (C.relname=T.tablename) INNER JOIN pg_namespace NS on (NS.oid=C.relnamespace and NS.nspname=T.schemaname) INNER JOIN pg_type TY on (TY.oid=A.atttypid) LEFT OUTER JOIN pg_attrdef D on (D.adrelid=C.oid and D.adnum=A.attnum) WHERE A.attnum>0 and A.attisdropped='f' UNION ALL SELECT T.schemaname AS TABLE_SCHEMA, T.viewname AS TABLE_NAME, A.attname AS COLUMN_NAME, A.attnum AS ORDINAL_POSITION, A.atthasdef AS COLUMN_HASDEFAULT, A.atttypid AS DATA_TYPE, TY.typname AS TYPNAME, A.attnotnull AS NOT_NULL, A.attlen AS FIELD_LENGTH, A.atttypmod AS FIELD_MOD, D.adsrc AS COLUMN_DEFAULT FROM pg_attribute A INNER JOIN pg_class C on (A.attrelid=C.oid) INNER JOIN pg_views T on (C.relname=T.viewname) INNER JOIN pg_namespace NS on (NS.oid=C.relnamespace and NS.nspname=T.schemaname) INNER JOIN pg_type TY on (TY.oid=A.atttypid) LEFT OUTER JOIN pg_attrdef D on (D.adrelid=C.oid and D.adnum=A.attnum) WHERE A.attnum>0 and A.attisdropped='f') s where table_schema=[PGTYPE_NAME, dvbi_core] and table_name=[PGTYPE_NAME, d_atv_device_types] ORDER BY table_schema, table_name, ordinal_position
Error
ERROR: Internal error: unsupported join type: 3

Client SQL 2014/09/22 11:02:18.877 66BA9F72
ICommand::Prepare
Executed SQL Parse:0.000 Prepare:0.000 Execute:77.627 GetRows:0.000 Rows:0
PREPARE "2061c465f0ee766486ed56801d3cf36e" AS SELECT * FROM "dvbi_core"."d_atv_device_types"

Client SQL 2014/09/22 11:02:18.721 9C78E922

SELECT *
FROM [dvbi_core].[d_atv_device_types]

5 years 10 months ago #12481 by createg
createg replied the topic: Re: Redshift to SSAS issue
I reinstalled the provider and got it working.
I mailed these to Support as well, but thought I would post here too for suggestions.

I noticed a few issues in the provider; if a fix for point 1 arrives in the near future, that would be great.

1. On my desktop, hitting Cancel while Explore Data or cube processing is running causes Visual Studio to hang, the computer to freeze, and the SSAS service to crash (not tested on a server).
2. Changing the SQL Server version to 2008 in the Data Links options causes an application crash and restart (there is no option for 2012).
3. Changing the SSL mode to "prefer" causes an application crash and restart.
4. Processing an entire cube (600 million rows) with 8 dimensions, one of which (XYZ) has a million rows, using Max number of connections = 10, parallel processing = "Let the server decide", and default Data Links settings except connection timeout = 0, results in the system hanging at "finished reading" the XYZ dimension. I tested the same exact setup using the Oracle client against an Oracle database and it works fine; I am not sure whether this is a Redshift limitation or the provider's, but some information on this would be great.
5. Explore Data runs SELECT * FROM, resulting in a slow response when the data set is large, and the query cannot be cancelled.

How should the Fetch Rows number be used? The fact table I am trying to query has over 300 million rows for a month, and setting the partition query to just that month takes a very long time; no status is shown in the progress window for how many rows have been fetched while the process is running.

What are the optimal settings for using Redshift?

Number of parallel processes
Number of fetch rows
Max number of connections

And any other settings that will keep the system from crashing; I have restarted my desktop about 20 times today.
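On the Fetch Rows question: presumably it controls how many rows the provider pulls per round trip, so the whole result set never has to sit in memory at once. A rough, provider-agnostic sketch of that idea using a generic DB-API cursor (StubCursor here is a stand-in for a real Redshift connection, for demonstration only):

```python
def iter_batches(cursor, batch_size=5000):
    """Yield lists of rows from a DB-API cursor, batch_size at a time,
    so only one batch is held in memory at any moment."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield rows

class StubCursor:
    """Minimal stand-in for a real database cursor."""
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, n):
        out, self._rows = self._rows[:n], self._rows[n:]
        return out

sizes = [len(b) for b in iter_batches(StubCursor(range(12)), batch_size=5)]
print(sizes)  # [5, 5, 2]
```

With a real connection, a smaller batch size trades more round trips for a lower peak memory footprint.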

Overall, this is an awesome product, and I am hoping these bugs can be fixed. Point 1 may just be a Redshift issue, I am not sure; other tools let you cancel a query, but sometimes they also crash.

I have not been able to successfully process the 600 million rows. I am going to try only the cube processing now and see if it works; it has been running for an hour, fingers crossed.

Ganesh

5 years 9 months ago #12512 by createg
createg replied the topic: Re: Redshift to SSAS issue
I am pleased to say the provider works.

Things to consider:
The provider works well when the system has more than 80 GB of RAM, if you are pulling more than 10 million rows of data.

Ganesh

5 years 9 months ago #12598 by Moderator
Moderator replied the topic: Re: Redshift to SSAS issue
Enabling cursors with CURSOR=ON;MIN_FETCH_ROWS=5000; in the Extended Properties of the connection string may significantly reduce memory consumption.
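Applied to the connection string from the first post, the cursor settings would go into Extended Properties roughly like this (server, credentials, and catalog remain placeholders for the masked values):

```
Provider=PGNP.1;Data Source=<redshift server>;Persist Security Info=True;
Password=<password>;User ID=<user>;Initial Catalog=<database>;
Extended Properties="PORT=8192;SSL=allow;CURSOR=ON;MIN_FETCH_ROWS=5000;"
```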

Powered by Kunena Forum