Task-level concurrency limits
in progress
Matt Sodomsky
We are making good progress on the feature; updates coming soon.
Kirk Lloyd
You will need to check if there is already a task with the same ID in the queue waiting to retry (not just running).
This is to handle the case where a user sets retry intervals that are longer than the scheduled run intervals, or where retry time plus run time exceeds the scheduled run interval.
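A minimal sketch, in Python, of the kind of check Kirk describes here: before enqueueing a new or retried run, look for an existing run with the same task ID that is queued or waiting to retry, not just one that is currently running. The TaskRun shape and the status names are illustrative assumptions, not Mechanic's actual data model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TaskRun:
    task_id: str
    status: str  # "running", "queued", or "retry_scheduled" (illustrative statuses)

def should_enqueue(task_id: str, existing_runs: List[TaskRun]) -> bool:
    """Only enqueue if no run for this task is already queued or waiting to retry.

    Checking "running" alone isn't enough: if the retry interval is longer than
    the schedule interval, a retry can still be waiting when the next scheduled
    run fires, and the queue stacks up.
    """
    return not any(
        run.task_id == task_id and run.status in ("queued", "retry_scheduled")
        for run in existing_runs
    )

# Example: a retry is already waiting, so the newly scheduled run is skipped.
runs = [TaskRun(task_id="task-123", status="retry_scheduled")]
assert should_enqueue("task-123", runs) is False
```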
Isaac Bowen
Kirk Lloyd: This is a really good point. I think I'm going to do this without the retry, and instead give each mutex value its own queue of sorts. When a task run (and all its action runs) is completed, it'll look for the next task run that's waiting and that has that same mutex, and it'll let that one go next.
Kirk Lloyd
Isaac Bowen: Let me see if I'm understanding this concept:
Instead of a "Scheduler", there will be a "Runner". Meaning, jobs are not scheduled to run a specific intervals, rather they are made "available to run" at specific intervals.
There will be a "Runner" that checks if any jobs matching its run criteria have permission to run; If yes, it will run; If no, it wont run?
If this is correct, presumably there wont be more than one job with the same "job/task id" waiting to run, hence, preventing queue stacking, which is my concern.
Isaac Bowen
Kirk Lloyd: The net effect will be the same, but no, it'll still be a scheduler! The scheduler will still queue up runs for each scheduler event; if a scheduler event results in a new task run, and that task run's mutex is "taken" by an already-active task run, it'll get paused until that mutex value becomes available. Or (and this is the relevant bit): to prevent queue-stacking, the task author will be able to choose to have thusly-affected task runs be immediately cancelled, instead of paused for later.
Kirk Lloyd
Isaac Bowen: You should default this the opposite way. The default should be to auto-cancel; the author has the option to pause for later.
Isaac Bowen
Kirk Lloyd: Open to it! Please chime in on that here: https://usemechanic.slack.com/archives/C01KF3B4PUK/p1615842957166100
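To summarize the behavior discussed in this exchange, here is a hedged Python sketch: the scheduler keeps one queue of waiting runs per mutex value, and a run whose mutex value is already taken is either paused for later or cancelled immediately, depending on the task's configuration. MutexScheduler, TaskRun, and on_conflict are invented names for illustration, not Mechanic's actual API; which behavior should be the default is exactly the open question above.

```python
from collections import defaultdict, deque

class TaskRun:
    """Stand-in for a task run; in Mechanic this would carry the event, actions, etc."""
    def __init__(self, label):
        self.label = label
    def start(self):
        print(f"starting {self.label}")
    def cancel(self):
        print(f"cancelling {self.label}")

class MutexScheduler:
    """Illustrative only: one queue of waiting runs per mutex value."""
    def __init__(self):
        self.active = {}                    # mutex value -> the run currently holding it
        self.waiting = defaultdict(deque)   # mutex value -> runs paused until it frees up

    def submit(self, run, mutex, on_conflict="pause"):
        # If the mutex value is free, the run starts immediately; otherwise it is
        # paused for later or cancelled outright, per the task author's choice.
        if mutex not in self.active:
            self.active[mutex] = run
            run.start()
        elif on_conflict == "pause":
            self.waiting[mutex].append(run)
        else:  # "cancel": prevents queue-stacking entirely
            run.cancel()

    def complete(self, mutex):
        # When a task run (and all its action runs) completes, release the mutex
        # value and let the next waiting run with that same value go.
        self.active.pop(mutex, None)
        if self.waiting[mutex]:
            next_run = self.waiting[mutex].popleft()
            self.active[mutex] = next_run
            next_run.start()

scheduler = MutexScheduler()
scheduler.submit(TaskRun("run A"), mutex="task-123")                        # starts
scheduler.submit(TaskRun("run B"), mutex="task-123")                        # pauses behind A
scheduler.submit(TaskRun("run C"), mutex="task-123", on_conflict="cancel")  # cancelled
scheduler.complete("task-123")                                              # run B starts
```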
Isaac Bowen
Current design for this is a notion of uniqueness, for task runs. Task authors will be able to express a value for determining uniqueness, using Liquid. Using a value like {{ task.id }} will have the result of only allowing a single run per task at a time; using a value like {{ task.id | concat: event.data.email }} will allow as many simultaneous task runs as there are unique email addresses, across the task run events. If a run is found to not be unique, the author will be able to choose between hard failure and a retry interval (e.g. rescheduling the run for 5 minutes from now).
Isaac Bowen
in progress
Matt Sodomsky
planned
Matt Sodomsky
This is needed; dealing with this today :D
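For reference, the earlier "uniqueness" design Isaac describes above (a per-run key rendered from a Liquid expression like {{ task.id }}, with duplicate runs either failing hard or being rescheduled) might look roughly like this Python sketch; the function names and the in-memory key set are illustrative assumptions, not Mechanic's implementation.

```python
import datetime

active_keys = set()  # uniqueness keys of task runs currently in flight

def start_run(uniqueness_key, on_duplicate="fail", retry_minutes=5):
    """Attempt to start a run identified by its rendered uniqueness key.

    In Mechanic the key would come from the task author's Liquid expression:
    "{{ task.id }}" allows one run per task at a time, while a key that also
    includes event.data.email allows one simultaneous run per email address.
    """
    if uniqueness_key in active_keys:
        if on_duplicate == "fail":
            raise RuntimeError(f"duplicate run for key {uniqueness_key!r}")
        # Otherwise, reschedule the run for later (e.g. 5 minutes from now).
        return datetime.datetime.now() + datetime.timedelta(minutes=retry_minutes)
    active_keys.add(uniqueness_key)
    return None  # run started

def finish_run(uniqueness_key):
    active_keys.discard(uniqueness_key)

start_run("task-123/alice@example.com")              # starts
retry_at = start_run("task-123/alice@example.com",   # duplicate: rescheduled
                     on_duplicate="retry")
start_run("task-123/bob@example.com")                # different email, so it starts
```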