Integrating your product or content feeds allows Blueshift to include relevant and up-to-date recommendations in your messages. One way to import catalog data into Blueshift is to set up feeds for JSON or CSV file uploads from your Amazon S3 bucket.


  1. Prepare the catalog data that you want to import and upload it to your Amazon S3 bucket. For information about data types, catalog attributes, and data formats, see Import catalog data.
  2. Upload the data to the S3 bucket.
    • You can use the S3 bucket that is provided by Blueshift. For more information, see Blueshift’s S3 location.
      • The default S3 path is s3://bsft-customer/<sitename>/import/catalog_imports
    • You can also use your own Amazon S3 location by setting up integration of Blueshift with Amazon S3 and configuring at least one adapter.
    • Ensure that your S3 bucket has a CORS configuration.
    • The following information is required to set up integration:
      • Your Amazon S3 credentials
      • The S3 file path. For example,
          bucket: bsft-catalogs
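As noted above, the S3 bucket you use must have a CORS configuration. The following is a minimal sketch of such a configuration in the S3 JSON format; the allowed origin shown here is a placeholder, so check with Blueshift for the exact origin and methods your account requires.

```json
[
  {
    "AllowedOrigins": ["https://app.getblueshift.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```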

Set up an import task

To set up a catalog import task, complete the following steps:

  1. To import catalogs, go to Catalog in the left navigation. Click +CATALOG.
  2. Select Upload via S3 Bucket.


  3. The import task form opens. Add a Name for the task.


  4. In the Destination section, the type of data being imported is displayed as Products.
  5. Set up Notification Preferences to send a notification email to the specified email addresses when there is a change in the status of the task or when a certain percentage of records fail during import.
  6. In the Source Configuration section, select the adapter that you want to use for the import task.
  7. Enter the path where the catalog file is located. You can see the complete file path for verification.

    Note: The S3 Bucket details and the S3 base path are the same as those set in the adapter that you selected.


  8. In the File Settings section, select the file format: CSV, JSON, or XML.
  9. Select the Encoding and the Delimiter used in the file.


  10. Sample data consisting of 10 records is fetched from the source file in the S3 bucket. This data is displayed in the Configuration section.


  11. Map the fields from the imported data to the fields in Blueshift and specify the Destination Data Type.
    • The Source Attribute Name is the attribute in your file and the Destination Attribute Name is the attribute in Blueshift.
    • Source attributes in the imported data are auto-mapped to destination attributes in Blueshift.
      • To clear the auto-mapping, click the Clear Destination Attribute Mapping icon.
      • To restore auto-mapped suggestions, click the Reset Destination Attribute Mapping icon.
    • You must select a corresponding destination attribute for each source attribute. Blueshift attributes and Custom attributes are grouped separately in the drop-down list so that you can easily distinguish between them.
      Note: Only source attributes that are mapped to a destination attribute will be imported.
    • For source data of a floating-point numeric data type, select Decimal as the matching data type in Blueshift.
    • A column from the source data must be mapped to each of the required product attributes: item_id, item_title, item_url, and main_image.
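To illustrate the required attributes listed above, the following sketch checks a hypothetical catalog CSV whose column headers happen to match the Blueshift attribute names one-to-one (the sample data and file layout are illustrative, not a Blueshift requirement):

```python
import csv
import io

# Required Blueshift product attributes (from the mapping step above).
REQUIRED = {"item_id", "item_title", "item_url", "main_image"}

# A minimal, hypothetical catalog CSV. Extra columns such as "price"
# can be mapped to other Blueshift or custom attributes.
sample = """item_id,item_title,item_url,main_image,price
SKU-001,Espresso Maker,https://example.com/p/1,https://example.com/i/1.jpg,49.99
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Verify every required destination attribute has a source column to map from.
missing = REQUIRED - rows[0].keys()
assert not missing, f"unmapped required attributes: {missing}"
```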


  12. Specify the item availability pattern. For example, “in_stock” for IN STOCK PATTERN indicates that an item is available and “out_of_stock” for OUT OF STOCK PATTERN indicates that an item is not available.
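Conceptually, the availability patterns classify the value in your availability field. A minimal Python sketch using the example patterns from this step (the function name and exact-match behavior are illustrative assumptions, not Blueshift's implementation):

```python
# Patterns from the example above; set these to whatever values
# your source file actually uses.
IN_STOCK_PATTERN = "in_stock"
OUT_OF_STOCK_PATTERN = "out_of_stock"

def is_available(value: str) -> bool:
    """Return True when the availability field matches the in-stock pattern."""
    v = value.strip().lower()
    if v == IN_STOCK_PATTERN:
        return True
    if v == OUT_OF_STOCK_PATTERN:
        return False
    raise ValueError(f"unrecognized availability value: {value!r}")
```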


  13. Map the Item Category and Item Tags.
    1. Use the Split a field option for Category and Location Item tags if the hierarchy for these is captured in a single field. For example, "Travel > Europe > Italy". If you select the Split a field option, you must select the correct incoming attribute header and then select the appropriate delimiter. In this example, the delimiter is ">".
    2. If the category or tag hierarchy is captured in more than one field in the incoming file, use the Select Field(s) option to select multiple headers. Ensure that each header is a single string and not a delimited value.
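Conceptually, the Split a field option behaves like splitting the single field on the chosen delimiter. A minimal Python sketch of that behavior, using the "Travel > Europe > Italy" example from this step (the function name is illustrative, not part of Blueshift):

```python
def split_hierarchy(value: str, delimiter: str = ">") -> list[str]:
    """Split a single-field hierarchy such as 'Travel > Europe > Italy'
    into its individual levels, trimming surrounding whitespace."""
    return [level.strip() for level in value.split(delimiter) if level.strip()]

print(split_hierarchy("Travel > Europe > Italy"))  # ['Travel', 'Europe', 'Italy']
```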
  14. Click Check Data Quality to verify that the imported data has the right values for all the records, based on the field mapping.
    • Ensure that all fields are mapped.
    • Fix any identified data quality issues to improve data quality and to ensure that the data import is successful. You can download the data quality report in JSON or CSV format.


  15. Click Test Run to verify that the Destination Data Type is mapped correctly. A maximum of 10 records are fetched during this test run.


  16. Verify that the data mapping is done correctly. Edit the data mapping if required. Click Test Run again after you make the changes.
  17. In the Schedule section, set the Start Time for the import task.
  18. To set up a recurring task, under Schedule select the Is it a recurring data import? option.
  19. Set the frequency using the Task executes every field. You can set the frequency in minutes, hours, days, weeks, or months.


  20. Click Save to save the task.
  21. Click Launch to run the import task.

You will receive an email confirmation after the catalog has been uploaded. The email includes information for both processed and failed records.

Import task status

The index page for catalog imports indicates the status for the catalog import task as either Draft, Launched, Paused, or Completed. For more information, see View catalog upload status.

