Concourse uses POST over HTTPS to allow for automated processing of feed files. This functionality allows you to control how frequently you run feeds and immediately receive information about what was successfully processed or had errors.
To begin, you should be familiar with how to create and process feeds manually, as the same file standard is used. From there, you'll need to create a request that periodically passes this data to Concourse.
Generating a valid request
Generating a valid feed processing request requires three pieces of information:
- Type of feed file to be processed
- Shared secret shown on the Admin > Tools > Feeds area of Concourse
- Feed file conforming to specifications found in our feed processing article.
Concourse accepts a POST request to https://yourdomain.campusconcourse.com/process_feed_file containing three parameters: the feed type, the shared secret, and the feed file itself. The type parameter is case-sensitive and must be one of the following:
- System Data types - User, Course, Section, Registration
- Data Removal types - DisableUser, MarkCourseForDeletion, DropRegistration
- Domain types - Campus, School, Department
- Syllabus Content types - MeetingTime, ContactInformation, Description, Objective, Outcome, Material, Deliverable, Evaluation, CoursePolicy, InstitutionalPolicy, AdditionalItem, Schedule
- Delete Locked Item types - DeleteMeetingTime, DeleteContactInformation, DeleteDescription, DeleteObjective, DeleteOutcome, DeleteMaterial, DeleteDeliverable, DeleteEvaluation, DeleteCoursePolicy, DeleteInstitutionalPolicy, DeleteAdditionalItem, DeleteSchedule
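As a concrete sketch, the POST above can be made with curl's multipart form support. Only the type parameter is documented here; the field names "secret" and "feed_file" below are assumptions, and the whole function is illustrative rather than the official script:

```shell
# Sketch of the feed POST using curl. The form field names "secret" and
# "feed_file" are assumptions; "type" is the documented, case-sensitive type.
post_feed() {
  url="$1"; feed_type="$2"; secret="$3"; file="$4"
  # Fail fast if the feed file is missing, before contacting the server.
  [ -f "$file" ] || { echo "feed file not found: $file" >&2; return 1; }
  curl --fail --silent --show-error \
    --form "type=$feed_type" \
    --form "secret=$secret" \
    --form "feed_file=@$file" \
    "$url"
}
```

The --fail flag makes curl return a non-zero exit status on HTTP errors, which lets a calling script detect a rejected request.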
Note that the request must be made to the secure (HTTPS) URL shown above. A log of processing activity (errors, updates, etc.) is returned as plaintext.
The attached post_feed.sh script contains an implementation of the above POST.
post_feed.sh url type secret filename
$ ./post_feed.sh https://demo.campusconcourse.com/process_feed_file Course not_really_the_secret_123 course_feed.txt
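Because the processing log comes back as plaintext, an automation wrapper can scan it for problems before moving on. Treating any line containing the word "error" as a failure is an assumption about the log format, shown here only as a sketch:

```shell
# Scan the plaintext log returned by Concourse for error lines.
# Matching on the word "error" is an assumption about the log format.
feed_log_has_errors() {
  printf '%s\n' "$1" | grep -qi 'error'
}

# Illustrative usage: capture the script output, then check it.
# log=$(./post_feed.sh "$URL" Course "$SECRET" course_feed.txt)
# feed_log_has_errors "$log" && echo "feed reported errors" >&2
```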
The most common issue when setting up automation is an inability to verify the server's SSL certificate. The cause and various solutions are described in our SSL troubleshooting article; in most cases, pointing curl at the correct CA bundle with the --cacert option is the optimal resolution.
Feed processing often completes within seconds. However, if you are processing feeds with thousands of rows, many of which create new users, courses, or content, processing may take minutes and will likely time out against the default automated feed processing URL.
To reliably process feeds of this scale, change the processing URL to include -notimeout after your organization's sub-domain (e.g. https://school-notimeout.campusconcourse.com/process_feed_file).
Further, even if only one feed is expected to run long, and only at certain times of the year, make sure all of your scripts use the -notimeout URL. Otherwise an earlier feed may time out and go unprocessed while a subsequent one succeeds, leaving the two out of sync.
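To run feeds periodically, as described at the start of this article, a cron entry can invoke the script on a schedule. The schedule, paths, and log location below are purely illustrative, and the entry uses the -notimeout host to avoid the timeout scenario just described:

```
# Illustrative crontab entry: process the Course feed nightly at 02:00 via the
# -notimeout URL and append the returned plaintext log. Paths are examples.
0 2 * * * /opt/feeds/post_feed.sh https://school-notimeout.campusconcourse.com/process_feed_file Course "$FEED_SECRET" /opt/feeds/course_feed.txt >> /var/log/concourse_feeds.log 2>&1
```

Redirecting both stdout and stderr into a log file preserves the processing report Concourse returns, so errors can be reviewed after each run.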