This guide will help you get your Data Delivery up and running. In this guide, you’ll learn:
- What data delivery is and how it works
- Data delivery basics, including key terms and concepts
- What to expect in each step of the data delivery process
Xplenty’s Data Delivery allows you to connect the data source(s) of your choice and replicate that data to any destination. This point-and-click, two-step solution requires no coding and no technical support - just fast, effective data delivery.
Before you get started, here are some definitions and terms that you should be familiar with:
Source: the data origin - a CRM, database, SaaS application, etc. - that you want to pull data from. (Xplenty integrates with over 100 sources, so you can connect your data no matter where you’re working from.)
Destination: a centralized database that houses the data pulled in from your source(s). When Xplenty delivers your data, it will go directly into the destination of your choosing.
Connection: defines a data repository or service that your Xplenty account can read data from or write data to. Connections contain access information that is stored securely and can only be used by your account’s members.
Schedule: the frequency at which you want Xplenty to replicate data from the source. For example, if set to once a day, Xplenty will deliver any new data from the source to the destination every 24 hours.
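To make the scheduling idea concrete, here is a minimal sketch of how a "once a day" schedule determines the next run. The function name and its parameters are hypothetical for illustration, not part of Xplenty's product or API:

```python
from datetime import datetime, timedelta

def next_run(last_run: datetime, every_hours: int = 24) -> datetime:
    """Hypothetical helper: given the time of the last delivery and a
    frequency in hours, return when the next delivery should fire."""
    return last_run + timedelta(hours=every_hours)

# A delivery that last ran at 06:00 on Jan 1 fires again 24 hours later.
print(next_run(datetime(2024, 1, 1, 6, 0)))  # 2024-01-02 06:00:00
```

Editing the delivery's schedule simply changes the frequency used in this calculation.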
Cluster: a group of machines (nodes) allocated exclusively to your account’s users. You can create one or more clusters, and you can run one or more jobs on each cluster.
Package: the pipeline from your source(s) to your destination. Here, you can plan how the data will look, how it will move, and where it will go.
Choose the source to deliver the data from and the destination to deliver the data to.
Note: If you haven’t created a connection yet, click the yellow +New button and follow the instructions for the specific source that you would like to connect to. Each source has different instructions, so be sure to follow the correct ones for your connection.
By default, each delivery is scheduled to run once a day. You can always change the frequency by editing the delivery’s schedule directly. For more information about editing a schedule, click here.
The first execution of a data delivery will read all of the data from the source, i.e. the “full load”. After that, deliveries will be incremental, meaning the tool will only grab new data that has come in since your last delivery.
Of course, incremental delivery only works for entities that have a modification timestamp. For objects without a modification timestamp, we’ll run a full load every time. This ensures that your data is always completely up to date, with no missing information or duplicate data.
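The full-load versus incremental logic above can be sketched in a few lines. This is a simplified illustration, assuming rows are dictionaries with a `modified_at` field; the function and field names are made up for this example and are not Xplenty's implementation:

```python
from datetime import datetime

def rows_to_deliver(rows, last_delivery):
    """Return the rows to replicate on this run.

    If any row lacks a modification timestamp, we cannot filter reliably,
    so we fall back to a full load (every row). Otherwise we do an
    incremental load: only rows modified since the last delivery."""
    if any(r.get("modified_at") is None for r in rows):
        return list(rows)  # full load: no timestamp to filter on
    return [r for r in rows if r["modified_at"] > last_delivery]

rows = [
    {"id": 1, "modified_at": datetime(2024, 1, 1)},
    {"id": 2, "modified_at": datetime(2024, 1, 3)},
]
# Only the row modified after the last delivery (Jan 2) is picked up.
print(rows_to_deliver(rows, datetime(2024, 1, 2)))
```

The key design point is the fallback: filtering by timestamp is cheaper, but a full load is the only safe option when the source object carries no modification time.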
To learn more about how the data delivery process will work for each of your sources, visit that source’s page here.
Visit the Jobs page to track the delivery process, check the status of each entity, and see which entities have succeeded and which have failed.