Comments (2)
I'll attempt a self-answer based on some experimentation I've been doing, but please tell me I'm wrong if you know otherwise:
The various scheduled instances of a task X are always attempted serially; at the moment, no setting is available to change this. `depends_on_past` sounds like it might do it, but that setting only controls whether a later instance of the task can be scheduled when the prior one has been attempted but failed, not whether the task's instances can run in parallel.
As such, my initial question doesn't make sense. Only one instance of a task will be scheduled to run at a time, and from there it's simply a matter of whether the sensor triggers or not. This does mean, though, that if you have a sensor that never triggers for some reason, your schedule will be totally blocked by it. That may mean Airflow isn't really designed for passively waiting for something to happen so much as for actively checking for things to do on a schedule.
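A toy illustration of that blocking behavior (this is not Airflow code, just a sketch of the serial scheduling described above, with made-up names):

```python
# Toy model of the behavior described above: scheduled instances of a
# task run serially, so a sensor that never triggers blocks the rest
# of the schedule. Names here are illustrative, not Airflow's API.

def run_schedule(instances, sensor_triggers):
    """Attempt each scheduled instance in order; stop at the first
    instance whose sensor never triggers."""
    completed = []
    for ts in instances:
        if not sensor_triggers(ts):
            # The sensor blocks: no later instance is attempted.
            break
        completed.append(ts)
    return completed

# The sensor fires for the first two schedule dates, then never again.
done = run_schedule(
    ["2015-01-01", "2015-01-02", "2015-01-03", "2015-01-04"],
    sensor_triggers=lambda ts: ts < "2015-01-03",
)
print(done)  # only the first two instances complete
```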
As a design consideration: if an early task in a DAG is able to determine that there's nothing to do when its schedule rolls around, it would be nice to be able to quickly mark the rest of that scheduled DAG as complete when it makes sense to do so. Perhaps this could be signaled by a "skipped" status, along with mechanisms for allowing downstream tasks to determine their own behavior based on the completion statuses of their upstreams. Food for future thought, unless that's something you guys have already considered and discarded, in which case I'd love to hear more about why.
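A minimal sketch of that suggestion, assuming a `skipped` status and per-task rules for reacting to upstream statuses (the status names and rule names here are illustrative, not Airflow's API):

```python
# Sketch of the "skipped" idea above: a downstream task decides its own
# behavior from its upstreams' completion statuses.

SUCCESS, SKIPPED, FAILED = "success", "skipped", "failed"

def should_run(upstream_statuses, trigger_rule="all_success"):
    """Decide whether a downstream task runs, given its upstreams."""
    if trigger_rule == "all_success":
        return all(s == SUCCESS for s in upstream_statuses)
    if trigger_rule == "none_failed":
        # Treat skipped upstreams as complete; only failures block.
        return all(s != FAILED for s in upstream_statuses)
    raise ValueError(f"unknown trigger rule: {trigger_rule}")

# An early task found nothing to do and marked itself skipped:
print(should_run([SUCCESS, SKIPPED]))                 # False
print(should_run([SUCCESS, SKIPPED], "none_failed"))  # True
```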
from airflow.
You're right, as it is now the scheduler won't parallelize or fill in holes. Note that `airflow backfill` does, though. The way the scheduler operates is documented here:
http://pythonhosted.org/airflow/scheduler.html
`depends_on_past` keeps the scheduler from moving forward after a failure, and forces sequential execution during backfills (which do parallelize tasks that aren't `depends_on_past`).
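The gating described above can be sketched as follows (illustrative only, not the real scheduler): a task instance for a given date may run only if the instance for the previous date succeeded, unless `depends_on_past` is off.

```python
# Toy version of depends_on_past gating: given the outcomes of already-run
# task instances, which remaining dates can be attempted?

def runnable(dates, outcomes, depends_on_past=True):
    """Return the dates that can be attempted.

    outcomes maps already-run dates to True (success) / False (failure).
    """
    if not depends_on_past:
        # Backfills may parallelize these: every unrun date is fair game.
        return [d for d in dates if d not in outcomes]
    allowed = []
    for i, d in enumerate(dates):
        if d in outcomes:
            continue
        prev_ok = i == 0 or outcomes.get(dates[i - 1], False)
        if prev_ok:
            allowed.append(d)
    return allowed

dates = ["d1", "d2", "d3", "d4"]
print(runnable(dates, {"d1": True}))                         # ['d2']
print(runnable(dates, {"d1": True}, depends_on_past=False))  # ['d2', 'd3', 'd4']
```

With `depends_on_past` on, only the next date in sequence is eligible; with it off, every unrun date can go at once, which is why backfills can parallelize those tasks.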
Adding a `skipped` or `upstream_failed` status as you suggested would allow non-`depends_on_past` tasks to move forward without changing the current scheduler logic much. I haven't done it yet because I may want this status to be virtual, but that requires more complexity in the scheduler and could affect the scheduler's load on the metadata database and its cycle time. We like the scheduler to run every minute, even with 10k+ tasks, a number that could grow 10x over the next few years.
The `upstream_failed` status is probably the way to go, though since the scheduler never moves back to fill in holes, it would require running backfill commands to fill the holes left behind. Maybe once we have a UI wizard to do that, it won't matter much.
I also may want to trigger the current latest schedule regardless of what the DAG is up to, which brings back the problem of filling holes.
from airflow.