Using components: Google BigQuery Destination

Use the Google BigQuery Destination component to store the output of your package in a Google BigQuery table. 

Destination components are always the last component in a package.

To define the Google BigQuery destination:

  1. Add a Google BigQuery Destination component at the end of your dataflow.
  2. Open the component and name it.

Destination settings

  1. Define the parameters for connecting to your Google BigQuery dataset as follows (for more information, see Allow Xplenty access to my Google BigQuery dataset):
    • Google BigQuery connection - either click the drop-down arrow and select an existing connection, or click create new to create a new connection.
    • Target table - the name of the target table in your dataset. If the table doesn't exist and cannot be created, the job fails. Data can be inserted into a specific partition of a partitioned table by using the notation table$partition (e.g. events$20160405; see the sketch after these settings).
    • Max bad records - the maximum number of bad records tolerated. If loading the data into Google BigQuery returns fewer errors than this value, the job continues without failing.
    • Operation type - the method of data insertion
      • Insert - default behavior. Data will only be appended to the target table.
      • Truncate and Insert - truncate the target table before the data flow executes and insert the data into the target table.
      • Merge - data will be inserted and/or updated in the target table according to the defined key(s). See details below.
    • Create table - check Create table if you want Xplenty to attempt to create the target table. If a table with the same name already exists in the dataset, data insertion proceeds unless the field mapping described in step 2 below fails.

    Then click the Test Connection button to verify that the connection is valid and that the target table exists.
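
    Xplenty handles the load for you, but if it helps to see how these settings relate to BigQuery's own load-job options, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, and file names are placeholders, and this is not Xplenty's implementation:

      from google.cloud import bigquery

      client = bigquery.Client(project="my-project")  # "Google BigQuery connection"

      job_config = bigquery.LoadJobConfig(
          source_format=bigquery.SourceFormat.CSV,
          max_bad_records=10,  # "Max bad records": tolerate up to this many bad rows
          write_disposition=bigquery.WriteDisposition.WRITE_APPEND,      # "Insert"
          # write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # "Truncate and Insert"
      )

      # "Target table": the table$partition notation addresses a single partition,
      # e.g. events$20160405 writes into the 2016-04-05 partition of the events table.
      table_id = "my-project.my_dataset.events$20160405"

      with open("events.csv", "rb") as f:
          load_job = client.load_table_from_file(f, table_id, job_config=job_config)
      load_job.result()  # waits for completion; raises an exception if the load failed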

  2. Map fields from your dataflow to columns in the target table. You can click the green Auto-fill fields icon to automatically populate the list of fields and column names. When mapping fields to table columns, make sure the data types are compatible; if they are not, the job fails. For example, the job fails if you try to store a string field in a numeric column.
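
    One way to catch type mismatches early is to compare your dataflow fields with the target table's schema before the job runs. The sketch below does this with the google-cloud-bigquery Python client; the field list and table names are hypothetical:

      from google.cloud import bigquery

      client = bigquery.Client(project="my-project")
      table = client.get_table("my-project.my_dataset.events")  # target table

      # Hypothetical dataflow fields and the BigQuery types they are expected to map to.
      dataflow_fields = {"user_id": "INTEGER", "event_name": "STRING", "revenue": "FLOAT"}

      for column in table.schema:  # table.schema is a list of SchemaField objects
          expected = dataflow_fields.get(column.name)
          if expected is None:
              print(f"no dataflow field is mapped to column {column.name} ({column.field_type})")
          elif expected != column.field_type:
              # e.g. mapping a string field to a numeric column would fail the job
              print(f"type mismatch on {column.name}: {expected} vs {column.field_type}")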

Merging data into an existing table

The merge operation is done in the following steps:

  • A staging table is created and the data flow's data is stored in it.
  • Note: The incoming data must be unique according to the key field(s) you selected, or the merge will fail. To remove duplicates, you can use an Aggregate component, or a Limit component with the key field(s) as the partition and the limit set to 1 (see the SQL sketch after this list).
  • The target table is copied to a new table (target-copy) and the target table is cleaned up.
  • Data that exists in the staging table but not in target-copy is inserted into the target table.
  • Data that exists in both the staging table and target-copy is joined; column values are taken from the staging table unless they are null.
  • Both the staging table and the target-copy table are dropped.
  • If an error occurs, the process tries to copy target-copy back onto the original table.
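
Xplenty performs these steps for you, but the overall effect is similar to a SQL MERGE that deduplicates the staging data on the key, inserts new rows, and updates existing rows with staging values unless they are null. The sketch below shows that rough equivalent using the google-cloud-bigquery Python client; the table, key, and column names are placeholders, and this is not Xplenty's actual implementation:

      from google.cloud import bigquery

      client = bigquery.Client(project="my-project")

      merge_sql = """
      MERGE `my_dataset.target` AS t
      USING (
        -- keep one row per key, mirroring the Limit-component workaround above
        SELECT * EXCEPT(rn) FROM (
          SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY id) AS rn
          FROM `my_dataset.staging`
        )
        WHERE rn = 1
      ) AS s
      ON t.id = s.id
      WHEN MATCHED THEN
        UPDATE SET name = COALESCE(s.name, t.name),
                   amount = COALESCE(s.amount, t.amount)
      WHEN NOT MATCHED THEN
        INSERT (id, name, amount) VALUES (s.id, s.name, s.amount)
      """

      client.query(merge_sql).result()  # raises an exception if the statement fails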
