Using components: Snowflake Destination (beta)

Use the Snowflake destination component to store the output of your package in Snowflake. This requires an existing Snowflake account.

The Snowflake component is always the last component in a package.

To define the Snowflake destination:

  1. Add a Snowflake destination component at the end of your dataflow.
  2. Open the component and name it.

Destination settings

  1. Define the parameters for connecting to your Snowflake account as follows (for more information, see Allow Xplenty access to my Snowflake account):
    • snowflake connection - either click the drop-down arrow and select an existing connection, or click create new to create a new connection.
    • warehouse - the warehouse to use when loading the data into Snowflake. Leave empty to use the connection's default warehouse.
    • database - the database to store the data into. Leave empty to use the connection's default database.
    • target schema - the target table's schema. If empty, the default schema is used.
    • target table - the name of the target table in Snowflake.
    • create table - automatically create the table defined in the component if it doesn't exist. See below for more information.
    • add columns - automatically add columns that are mapped in the destination and do not exist in the table. See below for more information.
    • operation type - the method of data insertion:
      • truncate and insert - truncates the target table before the data flow executes.
      • delete and insert - deletes all rows from the target table before the data flow executes. Use this if a TRUNCATE statement can't be executed on the target table due to permissions or other constraints.
      • merge (delete and insert) - incoming data is merged with existing data in the table by deleting rows from the target table whose keys exist in the incoming data, and then inserting all of the incoming data into the target table. Requires setting the merge keys correctly in field mapping (see the SQL sketch below). The merge is done in a single transaction:
        1. The dataflow's output is copied into a temporary table with the same schema as the target table.
        2. Rows with keys that exist in the temporary table are deleted from the target table.
        3. All rows in the temporary table are inserted into the target table.
        4. The temporary table is dropped.
      • merge (update and insert) - incoming data is merged with existing data in the table by updating existing rows and inserting new ones. Requires setting the merge keys correctly in field mapping (see the SQL sketch below). The merge is done in the following manner:
        1. The dataflow's output is copied into a temporary table with the same schema as the target table.
        2. Existing records (by key) in the target table are updated and new records are inserted using the MERGE statement.
        3. The temporary table is dropped.
    • max errors - if loading data into Snowflake returns fewer errors than this value, the job continues without failing.
    • truncate columns - truncates string values so that they fit the target column's length specification.
    • pre action sql - SQL code to execute before inserting the data into the target table. If a merge operation is selected, the SQL code is executed before the staging table is created.
    • post action sql - SQL code to execute after inserting the data into the target table. If a merge operation is selected, the SQL code is executed after the staging table is merged into the target table.
    • Then click the Test Connection button to verify that the connection works and that the target table exists.

  2. Map fields from your dataflow to columns in the target table. You can click the green Auto-fill fields icon to automatically populate the list of fields and column names.
  3. If a merge operation type is used, select the proper key columns; otherwise, key selection can be ignored.
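
The two merge operation types are roughly equivalent to the following Snowflake SQL. This is an illustrative sketch only: the table and column names (orders, orders_staging, order_id, status) are hypothetical, and the actual statements Xplenty generates may differ.

    -- Both merge types: stage the dataflow's output in a temporary table
    -- that has the same schema as the target table.
    CREATE TEMPORARY TABLE orders_staging LIKE orders;
    -- (Xplenty copies the dataflow's output into orders_staging here.)

    -- merge (delete and insert), run as a single transaction: delete target
    -- rows whose keys exist in the staging table, then insert all staged rows.
    DELETE FROM orders
        WHERE order_id IN (SELECT order_id FROM orders_staging);
    INSERT INTO orders SELECT * FROM orders_staging;

    -- merge (update and insert): update matching rows and insert new ones
    -- with a single MERGE statement.
    MERGE INTO orders t
        USING orders_staging s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.status = s.status
        WHEN NOT MATCHED THEN INSERT (order_id, status)
            VALUES (s.order_id, s.status);

    -- Both merge types: drop the temporary table.
    DROP TABLE orders_staging;

Note the practical difference: merge (delete and insert) replaces every row whose key appears in the incoming data, while merge (update and insert) updates matching rows in place via MERGE.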

Automatically creating and altering the destination table

Xplenty can automatically create the destination table for you if it doesn't exist, and it can also append columns to an existing table. If you define columns as keys (regardless of the operation type), Xplenty defines them as the clustering key of the new table. Xplenty data types are mapped to Snowflake data types as follows. Note that since Xplenty doesn't have a notion of maximum string length, string columns are created with the maximum length Snowflake allows.

Xplenty     Snowflake
--------    ------------
String      VARCHAR
Integer     NUMBER
Long        NUMBER
Float       DOUBLE
Double      DOUBLE
DateTime    TIMESTAMP_TZ
Boolean     BOOLEAN
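
For example, assuming a dataflow with fields id (Long), name (String), amount (Double), created_at (DateTime), and active (Boolean), with id selected as a key column, the table Xplenty creates would look roughly like the following. The names are illustrative and the exact DDL may differ.

    CREATE TABLE events (
        id         NUMBER,
        name       VARCHAR,       -- no maximum length in Xplenty, so the maximum allowed length is used
        amount     DOUBLE,
        created_at TIMESTAMP_TZ,
        active     BOOLEAN
    )
    CLUSTER BY (id);              -- key columns become the new table's clustering key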
