Comments (3)
For Q1:
If "inc" is listed as both required and optional for target_outcome, then isn't that the same as just making it always required? In general, this feels really messy to me, and is an argument for less flexibility in the specification of targets that span multiple task ids. It feels incredibly complex to specify any validation logic across the required/optional split and multiple keys. My instinct would be to make it simpler, e.g.:
- if you do have task ids split across multiple variables, they are all required or all optional?
- if you have a mix of optional and required targets, then they have to be specified by a single task id?
For Q2:
I think it is only the values that will vary.
from hubutils.
First, just a pointer that we've discussed how to interpret required and optional values for target id variables in this issue. To sum up: to get the set of rows that are required, within each task id group you take all combinations of "required" values for each task id variable. I agree with Nick that it is confusing to allow a single variable value to be listed as both required and optional within the same task id group, and I basically think the hub tasks json validator should throw an error if this is done; there is some additional discussion about this in the last comment on the thread linked to above.
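The "all combinations of required values" rule can be sketched as follows. This is a hypothetical illustration, not a hubUtils function; the task_ids dict mirrors the JSON structure used later in this thread, and required_rows is an assumed helper name.

```python
from itertools import product

# Task id group mirroring the JSON examples in this thread.
task_ids = {
    "target_variable": {"required": ["hosp"], "optional": ["case"]},
    "target_outcome": {"required": ["inc"]},
}

def required_rows(task_ids):
    """All combinations of the 'required' values, one per task id variable."""
    names = list(task_ids)
    required = [task_ids[n].get("required", []) for n in names]
    return [dict(zip(names, combo)) for combo in product(*required)]

print(required_rows(task_ids))
# [{'target_variable': 'hosp', 'target_outcome': 'inc'}]
```

Note that if any task id variable in a group has no required values, the product is empty and the group contributes no required rows.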
Q1: Restating the objective to confirm we're on the same page: we want to create two targets corresponding to "inc hosp" (required) and "inc case" (optional), split up by target_variable and target_outcome. I think there are two possible ways to achieve this within the system we've laid out:
Option 1: single model task group, target_outcome is required.
Here's what the json specification could look like:
"model_tasks": [
    {
        "task_ids": {
            "target_variable": {
                "required": ["hosp"],
                "optional": ["case"]
            },
            "target_outcome": {
                "required": ["inc"]
            }
        },
        "target_metadata": [{
            "target_keys": {
                "target_variable": "hosp",
                "target_outcome": "inc"
            }
        }, {
            "target_keys": {
                "target_variable": "case",
                "target_outcome": "inc"
            }
        }]
    }
]
Following the logic outlined in the discussion on that previous issue, the set of required rows in a submission file is obtained by taking all combinations of required values for each task id variable and output type/type id. In this example we're omitting the output types for brevity, but basically we end up with one required row, which would have values target_variable = "hosp" and target_outcome = "inc".
Note that in this option, target_outcome is listed as required, but according to some long-buried conversation, that should be interpreted as "required if values of other task id variables are submitted". In this example, that means: if anything is submitted for target_variable = "case", then it is required to have target_outcome = "inc".
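The conditional reading of "required" could be checked along these lines. This is a hypothetical sketch written only to illustrate the rule, with the "case"/"inc" values hard-coded from this example; missing_required is an assumed name, not an actual hubverse check.

```python
def missing_required(submission_rows):
    """Hypothetical check for the conditional rule above: if a row is
    submitted for the optional target_variable = "case", that row must
    carry the required target_outcome = "inc"."""
    return [
        row for row in submission_rows
        if row.get("target_variable") == "case"
        and row.get("target_outcome") != "inc"
    ]

rows = [
    {"target_variable": "hosp", "target_outcome": "inc"},
    {"target_variable": "case", "target_outcome": "inc"},
]
print(missing_required(rows))  # [] -> nothing missing, submission passes
```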
Option 2: two model task groups, target_outcome is required in one and optional in the other.
Here's what the json specification could look like:
"model_tasks": [
    {
        "task_ids": {
            "target_variable": {
                "required": ["hosp"]
            },
            "target_outcome": {
                "required": ["inc"]
            }
        },
        "target_metadata": [{
            "target_keys": {
                "target_variable": "hosp",
                "target_outcome": "inc"
            }
        }]
    },
    {
        "task_ids": {
            "target_variable": {
                "optional": ["case"]
            },
            "target_outcome": {
                "optional": ["inc"]
            }
        },
        "target_metadata": [{
            "target_keys": {
                "target_variable": "case",
                "target_outcome": "inc"
            }
        }]
    }
]
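Under Option 2, the same combination rule is applied per model task group and the results are unioned; since the second group has no required values, it contributes no required rows. A hypothetical sketch, with model_tasks reduced to just the task id specs:

```python
from itertools import product

# The two model task groups from Option 2, task ids only.
model_tasks = [
    {"target_variable": {"required": ["hosp"]},
     "target_outcome": {"required": ["inc"]}},
    {"target_variable": {"optional": ["case"]},
     "target_outcome": {"optional": ["inc"]}},
]

def required_rows(task_ids):
    names = list(task_ids)
    required = [task_ids[n].get("required", []) for n in names]
    return [dict(zip(names, combo)) for combo in product(*required)]

# Union of required rows across groups; the all-optional group yields
# no combinations, so only the "hosp"/"inc" row is required.
all_required = [row for group in model_tasks for row in required_rows(group)]
print(all_required)
# [{'target_variable': 'hosp', 'target_outcome': 'inc'}]
```

Both options therefore produce the same required set; they differ only in how the optional "case" target is expressed.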
from hubutils.
Thanks @elray1 for the clarification (and the nudge towards the previous discussions)! I know how to handle this in the checks now: the required/optional distinction is not important, and the only thing of importance to check is that each target key value appears among the unique values of the combined optional and required sets for its task id variable individually, hence check 5 can be completely removed.
Regarding the options for handling it in the schema, I feel our schema is flexible enough to handle both cases, so we can leave that up to the hub admins. The key will be to have good documentation on this (i.e. include a lot of your explanation from the issue you linked to), which we've already discussed doing.
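The check described here could look something like the sketch below; valid_target_keys is a hypothetical name for illustration, not the actual hubValidations check.

```python
# Task id group and target_keys from Option 1 above.
task_ids = {
    "target_variable": {"required": ["hosp"], "optional": ["case"]},
    "target_outcome": {"required": ["inc"]},
}
target_keys = [
    {"target_variable": "hosp", "target_outcome": "inc"},
    {"target_variable": "case", "target_outcome": "inc"},
]

def valid_target_keys(task_ids, target_keys):
    """Each target key value must appear in the union of the 'required'
    and 'optional' values of the matching task id variable."""
    allowed = {
        name: set(spec.get("required", [])) | set(spec.get("optional", []))
        for name, spec in task_ids.items()
    }
    return all(
        value in allowed.get(name, set())
        for keys in target_keys
        for name, value in keys.items()
    )

print(valid_target_keys(task_ids, target_keys))  # True
```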
from hubutils.