I have a table with a float4 field, and one of the values is 120.12. When I unload the data to a file in S3 and look at the file in a text editor, the related field now has the value 120.120003. That is, the value has been made more precise by adding four more decimal places, with the last decimal place (seemingly at random) becoming a non-zero value.

Thanks for your quick reply, and thanks for re-raising this issue with the Redshift server team.

While trying to devise a workaround for this, a colleague of mine thought up the following: instead of binding the parameters into the UNLOAD query itself (which is not supported by Redshift), we could bind them to the inner sub-query inside the UNLOAD's ( ) first, which happens to be a SELECT query (probably the most common sub-query used within UNLOAD statements by most Redshift users, I'd say), and run this sub-query on its own, perhaps with a LIMIT 1 or a 1=0 condition to limit its running time. This would let us use Redshift's prepared-statement support (which is indeed supported for SELECT queries) to bind and validate the potentially risky, user-supplied parameters first. Subsequently, if the sub-query executed successfully without any errors or exceptions, we could assume the sub-query is safe, and wrap it back into the UNLOAD parent statement, this time replacing the bind parameters with the actual user-supplied values (simply concatenating them), which have now been validated by the previously run SELECT query. Of course, this workaround assumes that no other parameters would be bound outside of the UNLOAD's sub-query inside the ( ).

Syntax

The UNLOAD statement uses the following syntax.

UNLOAD (SELECT col_name [, ...] FROM old_table)
TO 's3://my_athena_data_location/my_folder/'
WITH ( property_name = 'expression' [, ...] )

Note: The TO destination must specify a location in Amazon S3 that has no data.
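On the float4 question: the extra digits are not random. float4 is a 32-bit IEEE-754 value, which cannot represent 120.12 exactly; the unload simply prints the nearest representable value to six decimal places. A quick Python round-trip (an illustrative sketch, not Redshift code) reproduces the exact output:

```python
import struct

# Round-trip 120.12 through a 32-bit IEEE-754 float (the storage format
# behind a float4 column), then format it to six decimal places the way
# the unloaded text file does.
as_float4 = struct.unpack('f', struct.pack('f', 120.12))[0]
print(f"{as_float4:.6f}")  # 120.120003
```

The nearest 32-bit float to 120.12 is 120.12000274..., so the trailing digits are deterministic rounding error introduced at storage time, not noise added by UNLOAD.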
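The two-step workaround described above can be sketched as a pair of helper functions. Everything here is hypothetical (the table, column, S3 path, IAM role, and helper names are all illustrative), and a real implementation should use the database driver's own parameter escaping (e.g. psycopg2's mogrify) rather than the naive quote_literal shown here:

```python
def build_validation_query(inner_sql):
    # Step 1: the inner SELECT is executed on its own with bound
    # parameters, with a 1=0 condition appended so it returns no rows;
    # the server still parses/plans it and validates the parameters.
    return inner_sql + " AND 1=0"

def quote_literal(value):
    # Naive SQL string-literal quoting for an already-validated value.
    return "'" + str(value).replace("'", "''") + "'"

def build_unload(inner_sql, params, s3_path, iam_role):
    # Step 2: substitute the validated parameters into the inner query
    # (UNLOAD itself accepts no bind parameters), then double the single
    # quotes so it can be embedded in UNLOAD's quoted query string.
    # Note: assumes inner_sql contains no literal {} braces.
    filled = inner_sql.replace("%s", "{}").format(
        *(quote_literal(p) for p in params)
    )
    escaped = filled.replace("'", "''")
    return f"UNLOAD ('{escaped}') TO '{s3_path}' IAM_ROLE '{iam_role}'"

inner = "SELECT id, amount FROM sales WHERE region = %s"
validation = build_validation_query(inner)   # run this with bound params first
unload = build_unload(inner, ["us-east"],
                      "s3://my-bucket/out/",
                      "arn:aws:iam::123456789012:role/my-role")
```

In use, `validation` would be executed with the driver's normal parameter binding; only if it succeeds would `unload` be sent to the server.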