Data Cloud Ingestion APIs are designed to interact seamlessly with both customer data and metadata.
🔹 **Bulk API (CSV)**: Efficiently load large datasets with ease.
🔹 **Streaming API (JSON)**: Real-time data ingestion for dynamic applications.
Transform your data strategy today! #Salesforce #DataCloud #APIs #DataIngestion #DataManagement
🚀 Data Cloud Ingestion APIs Overview:
🔹 Bulk API (CSV): This API uploads large data files in batches, in CSV format.
🔹 Streaming API (JSON): This API transmits micro-batches of data to Salesforce Data Cloud in near real time, with incoming records processed roughly every 15 minutes.
Choosing Between Bulk Ingestion API and Streaming Ingestion API:
🔹 Bulk API (CSV):
1. Suitable for transferring substantial amounts of data on a predetermined schedule, such as:
- Daily
- Weekly
- Monthly
2. Ideal for legacy systems that permit data exports only during non-peak hours.
3. Applicable when standing up a new Data Cloud organization and backfilling 30, 60, or 90+ days of data, such as when transitioning from a license-based organization to a consumption-based organization.
🔹 Streaming API (JSON):
1. Appropriate for updating small batches of records in near real-time.
2. Designed for data source systems that utilize contemporary streaming architecture.
3. Useful for generating change data capture events.
4. Effective for consuming data from webhooks.
🔹 Ingestion API Configuration: Streaming API (JSON)
1. Configure the Ingestion API Connector:
- In Data Cloud Setup, select Ingestion API, then New
- Provide a Schema that outlines the structure of the data intended for ingestion by Data Cloud
- Develop the Schema in YAML format
- Upload the Schema within the connector
- After the upload, each event type will be represented as an Object
- Each Object will contain the specified Fields
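Data Cloud expects the schema as an OpenAPI-style YAML file. A minimal sketch, assuming a hypothetical `web_events` object (the object and field names are illustrative only):

```yaml
openapi: 3.0.3
components:
  schemas:
    web_events:            # each entry under schemas becomes an Object
      type: object
      properties:
        event_id:          # a candidate Primary key field
          type: string
        customer_email:
          type: string
        event_time:        # a candidate time value field for Engagement
          type: string
          format: date-time
```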
- In the Data Streams tab, click New and choose Ingestion API.
- Select the connector that was previously created during the Ingestion API setup.
- Choose one or multiple Objects, noting that each Object will generate its own Data Stream.
- Define the Category:
- Engagement: Refers to Interactions or Events
- Profile: Pertains to Objects that gather information about an individual
- Other: Objects that are neither engagement nor profile data, such as product inventory
- Indicate which field within the Object will serve as the Primary key.
- If the category is Engagement, also select the time value field.
- Each Object results in a Data Stream, and each Data Stream creates a Data Lake Object (DLO); the data transmitted through the Ingestion API is stored in that DLO.
- Authentication uses a Connected App with one of these OAuth 2.0 flows:
  - Client Credentials
  - JWT (JSON Web Token)
  - User Agent
- The Connected App must include these OAuth scopes:
  - CDP ingest API (cdp_ingest_api)
  - API (api)
  - Refresh tokens (refresh_token)
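As a sketch of the authentication sequence, the snippet below builds (but does not send) the two token requests: a core org token via the Client Credentials flow, then the Data Cloud token exchange. The endpoint paths and parameter names are assumptions based on the standard Salesforce OAuth endpoints, and all credential values are placeholders.

```python
def client_credentials_request(login_url, client_id, client_secret):
    """Build the core OAuth token request (Client Credentials flow)."""
    return {
        "url": f"{login_url}/services/oauth2/token",
        "data": {
            "grant_type": "client_credentials",
            "client_id": client_id,        # from the Connected App
            "client_secret": client_secret,
        },
    }

def datacloud_exchange_request(instance_url, core_access_token):
    """Build the exchange of a core token for a Data Cloud access token."""
    return {
        "url": f"{instance_url}/services/a360/token",  # assumed exchange path
        "data": {
            "grant_type": "urn:salesforce:grant-type:external:cdp",
            "subject_token": core_access_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        },
    }
```

In Postman the same two calls are typically pre-built in the collection; the sketch just makes the parameters explicit.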
- Configure the Client ID and Secret within Postman.
- Initiate a request for the authentication token.
- Send a request to the streaming ingestion API endpoint.
- Use the POST method, and include in the POST URL both the connector name established during the Data Cloud Ingestion API setup and the object name.
- Upon success, the "accepted" property in the response will be true, and the data will be processed within the next 15 minutes.
- Confirm that the data has reached the Data Lake Object (DLO) by opening Data Explorer and selecting the DLO named after the connector.
- Employ Synchronous record validation to verify the payload.
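The streaming POST described above can be sketched as follows. The endpoint shape (`/api/v1/ingest/sources/{connector}/{object}`) and the `/actions/test` suffix for synchronous record validation are assumptions to confirm against the Postman collection; the request is only constructed, not sent.

```python
import json

def build_streaming_request(tenant_endpoint, connector, obj, records,
                            validate_only=False):
    """Build a streaming ingestion POST request for Data Cloud."""
    url = f"https://{tenant_endpoint}/api/v1/ingest/sources/{connector}/{obj}"
    if validate_only:
        url += "/actions/test"  # assumed synchronous-validation variant
    return {
        "method": "POST",
        "url": url,
        "headers": {"Content-Type": "application/json"},
        # records are wrapped in a top-level "data" array
        "body": json.dumps({"data": records}),
    }
```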
🔹 Ingestion API Configuration: Bulk API (CSV)
- Ensure the CSV file has a header row matching the fields of the data stream you created.
- Request a Data Cloud Access Token from the instance_url
- Create a job using the create job endpoint from the Salesforce Data Cloud Bulk API Postman collection.
- In the request URL, indicate both the connector name and the object name.
- After the job has been established, retain the job ID for future reference.
- The job status will be marked as open, indicating that CSV files can be uploaded.
- Proceed to the upload job endpoint.
- Incorporate the previously copied job ID into the URL.
- Insert the CSV data within the request body.
- Close the job using the close job endpoint.
- Utilize the get job endpoint to track the status of the job.
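The bulk job lifecycle above can be sketched as four HTTP calls in order. The paths, body fields, and the "UploadComplete" state are assumptions drawn from typical Bulk API conventions and should be confirmed against the Postman collection; nothing is actually sent here.

```python
def bulk_job_requests(tenant_endpoint, connector, obj, job_id, csv_payload):
    """Sketch the create / upload / close / status calls for a bulk job."""
    base = f"https://{tenant_endpoint}/api/v1/ingest/jobs"
    return [
        # 1. Create the job; the response carries the job id, state "Open"
        {"method": "POST", "url": base,
         "body": {"object": obj, "sourceName": connector,
                  "operation": "upsert"}},
        # 2. Upload the CSV (its header row must match the data stream fields)
        {"method": "PUT", "url": f"{base}/{job_id}/batches",
         "body": csv_payload},
        # 3. Close the job so Data Cloud can begin processing
        {"method": "PATCH", "url": f"{base}/{job_id}",
         "body": {"state": "UploadComplete"}},
        # 4. Poll the job status via the get job endpoint
        {"method": "GET", "url": f"{base}/{job_id}", "body": None},
    ]
```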