Steps to ensure a successful publish of a topic:
1. Get content (topics/courses/insights) into the production DB:
Merging a content PR can still hit edge cases that throw errors on our backend, in which case the content does not make it through the content consumer and never reaches the production database. It is therefore a good idea to replicate the merge on a local machine first.
fork the curriculum repo and connect a local backend instance to it as shown here
pull monoenki master && npm i
add the necessary environment variables (those mentioned in the doc above, plus stagedb-password)
start db container with stagedb data as shown here
run backend with npm run backend
set up the webhook as shown in the aforementioned document
copy the new content folder (e.g. the entire security folder) from the curriculum repo’s working branch into the fork
merge the content while the fork is connected to the backend and check for errors
if no errors are thrown, the content is good to go
if errors are thrown → apply the needed changes and try again, triggering changes on all files that need to be merged
It’s important to consider the currently permitted metadata of topics/courses/workouts/insights.
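As a quick sanity check before merging, a small script can flag metadata keys that are outside the permitted set. This is only a sketch: the permitted key lists below are illustrative placeholders, not the actual allowed metadata.

```javascript
// Sketch: flag unknown metadata keys before merging content.
// The permitted sets below are ILLUSTRATIVE placeholders — replace them
// with the real permitted metadata for topics/courses/workouts/insights.
const PERMITTED = {
  insight: new Set(['slug', 'parent', 'inAlgoPool', 'levels']),
  course: new Set(['slug', 'core', 'sections']),
};

function unknownKeys(type, metadata) {
  const allowed = PERMITTED[type] || new Set();
  return Object.keys(metadata).filter((key) => !allowed.has(key));
}

// Example: 'author' is not in the illustrative insight set above
console.log(unknownKeys('insight', { slug: 'xss-basics', author: 'me' }));
// → [ 'author' ]
```

Running something like this over the working branch before merging catches metadata errors earlier than the content consumer would.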
2. Accommodate content for hybrid app’s soon-to-be deprecated prepare-workouts job
Once step 1 is completed, the content is ready to be merged and therefore added to the DB.
Now, because the hybrid app that users are still on relies on on-the-spot-generated workouts served by the old prepare-workouts job, we need to make sure the new content accommodates that job’s algorithm.
make sure one of the topic’s courses has core: true in its model. If this is not present, the algorithm will fail to generate workouts, as it has to start from that specific course
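Assuming each course’s model is available as a plain object, a check like the following (illustrative, not the actual monoenki code) verifies that exactly one course in the topic is marked core:

```javascript
// Illustrative check: the workout-generation algorithm needs exactly one
// course with `core: true` to use as its starting point.
function findCoreCourse(courses) {
  const core = courses.filter((course) => course.core === true);
  if (core.length !== 1) {
    throw new Error(`expected exactly 1 core course, found ${core.length}`);
  }
  return core[0];
}

// Hypothetical course models for a new topic
const courses = [
  { slug: 'security-basics', core: true },
  { slug: 'applied-crypto' },
];
console.log(findCoreCourse(courses).slug); // → security-basics
```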
The old workout-generation algorithm has no notion of pre-composed workouts as they are reflected in the content (and also in mongo’s data). Therefore, custom workouts should be created to ensure that each workout the user is presented with has the desired insights.
custom workouts (the customInsightList model) can be created at https://enkipro.com/editor/#/review/workouts once the data is live; slugs can be used as insight identifiers as well
make sure the levels corresponding to each section are properly chosen
make sure all insight files on GitHub have inAlgoPool: false in their metadata; this ensures they won’t be picked by the prepare-workouts job to form random workouts
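The effect of that flag can be pictured as a filter; this is a sketch of the idea, not the actual prepare-workouts code:

```javascript
// Sketch: only insights WITHOUT `inAlgoPool: false` stay eligible for
// randomly generated workouts, which is why new content must set the flag
// on every insight.
function randomPool(insights) {
  return insights.filter((insight) => insight.inAlgoPool !== false);
}

const insights = [
  { slug: 'sql-injection', inAlgoPool: false },
  { slug: 'old-insight' }, // no flag → still eligible for random workouts
];
console.log(randomPool(insights).map((i) => i.slug)); // → [ 'old-insight' ]
```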
Now, to order the workouts relative to each other, we need to make use of some insight metadata, specifically the parent: insightSlug field. Say we have the workouts A → B → C. To ensure they are ordered like this, the first insight of workout B must have parent: workout-a-last-insight-slug. Similarly, the first insight of workout C must have parent: workout-b-last-insight-slug.
Note that these are independent for each section/level.
make a PR on GitHub with the parent fields added and merge it
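Given an ordered list of workouts (each an ordered list of insight slugs), the parent assignments described above can be derived mechanically. A hypothetical helper:

```javascript
// Sketch: compute the `parent` value for the first insight of each workout
// so that workouts A → B → C are served in that order. The first workout's
// first insight keeps whatever parent it already has.
function parentAssignments(workouts) {
  const assignments = {};
  for (let i = 1; i < workouts.length; i++) {
    const firstInsight = workouts[i][0];
    const prevWorkout = workouts[i - 1];
    // parent = last insight slug of the previous workout
    assignments[firstInsight] = prevWorkout[prevWorkout.length - 1];
  }
  return assignments;
}

const workouts = [
  ['a-1', 'a-2'], // workout A
  ['b-1', 'b-2'], // workout B
  ['c-1'],        // workout C
];
console.log(parentAssignments(workouts));
// → { 'b-1': 'a-2', 'c-1': 'b-2' }
```

Remember that these chains are independent per section/level, so the helper would be run once per level.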
Last but not least, the first workouts a user is served are hardcoded workouts. These need to be pushed and deployed in the monoenki repo. The code for these is here.
Basically, a topicName.js file needs to be created and then added to the index.js file linked.
This file follows the format:
export default {
  // beginner level id (section 0)
  '578cb033c774cd4d3949b82a': [
    'custom-workout-core-subtopic-1-id',
    'custom-workout-core-subtopic-1-id'
    // another possibility is to have an array of insight ids instead of custom workout ids, but it's easier with custom workouts
  ],
  // basic level
  '554373512de2f98af6ea5bc6': [ ]
};
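Registering the new file in index.js would then look something like the following. This is a self-contained sketch: in the real repo the per-topic map would be imported from the topicName.js file rather than inlined, and the actual export shape is defined by the linked index.js, not by this example.

```javascript
// Hypothetical sketch of how index.js might aggregate per-topic files.
// `securityWorkouts` would normally come from `./security.js`; it is
// inlined here so the sketch runs on its own.
const securityWorkouts = {
  // level id → hardcoded workout ids (see the example file format above)
  '578cb033c774cd4d3949b82a': ['custom-workout-core-subtopic-1-id'],
};

// index.js export: topic name → hardcoded workout map
const hardcodedWorkouts = {
  security: securityWorkouts,
};

console.log(Object.keys(hardcodedWorkouts)); // → [ 'security' ]
```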