We will start our final sprint by finishing up UW-201. After we complete this story, we will consult with the team about which ticket makes sense to grab next. If UW-201 is completed within Sprint 6.5, it will be a milestone for Emily and me: all the tickets we were estimated to complete at the beginning of the Program Increment will have been finished during it.
At the end of Sprint 6.4, we left off having difficulty passing the existing test cases for UW-201. Images 1 & 2 show the pieces of code we took from the URWS repository to reuse for this ticket.
Image 1 - URWS retrieve_data.py pretty printing
print("Running script retrieve_data.py with args:\n", f"{('-' * 80)}\n{('-' * 80)}")
for name, val in cla.__dict__.items():
    if name not in ["config"]:
        print(f"{name:>15s}: {val}")
print(f"{('-' * 80)}\n{('-' * 80)}")
Image 2 - workflow-tools templater.py pretty printing (Attempt 1)
print("Running script templater.py with args:\n", f"{('-' * 80)}\n{('-' * 80)}")
for name, val in user_args.__dict__.items():
    if name not in ["config"]:
        print(f"{name:>15s}: {val}")
print(f"{('-' * 80)}\n{('-' * 80)}")
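To show what this loop actually iterates over, here is a minimal, self-contained sketch using a hypothetical argument set (the real templater.py CLI has more options than these):

```python
import argparse

# Hypothetical stand-in for templater.py's real command-line arguments.
parser = argparse.ArgumentParser()
parser.add_argument("--outfile", default=None)
parser.add_argument("--input_template", default="fixtures/nml.IN")
parser.add_argument("--dry_run", action="store_true")
user_args = parser.parse_args(["--dry_run"])

# Same pretty-printing pattern as Image 2: iterate over the parsed
# Namespace's attributes and right-justify each name to 15 characters.
print("Running script templater.py with args:\n", f"{'-' * 80}\n{'-' * 80}")
for name, val in user_args.__dict__.items():
    if name not in ["config"]:
        print(f"{name:>15s}: {val}")
print(f"{'-' * 80}\n{'-' * 80}")
```

Because `argparse.Namespace` stores the parsed arguments as plain attributes, `user_args.__dict__.items()` yields each name/value pair directly.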
We first ran into an error where the outcome did not match the desired result because input_template needed to be a hardcoded value. If it was not equal to our own individual file path, we would fail three test cases. We resolved this by building a three-part string, with an input_file variable in the middle to account for the input_template path (in order to avoid hardcoding).
Ultimately, we still needed to pass the test cases by changing the expected outcome in the test case to match the new result. In Image 3, lines 1-2 & 14-21 already existed as part of our current test case (these could be left alone). Our assignment was to add what was being pretty printed from the parsed command-line arguments to this desired outcome. The tricky part came in lines 3-13. We saw that if our string didn't match what was expected exactly, the test would fail. This meant we needed to figure out the specific way the dashes should break across lines (lines 3-5 and 12-13). This involved quite a bit of trial and error; we used the error output from the command line to figure out the correct number of dashes on each line rather than counting them out individually. We also found that in lines 6-11, the columns needed to line up exactly and be indented a certain amount to pass completely. Once we found this balance we began to pass 1 of the 3 test cases and were able to copy this format for the other two.
Image 3 - Final outcome to pass one of the three failing test cases
1 input_file = os.path.join(uwtools_file_base, "fixtures/nml.IN")
2 outcome=\
3 """Running script templater.py with args:\n --------------------------------------------------------------------------------
4 --------------------------------------------------------------------------------
5 outfile: None
6 input_template: """ + input_file + """
7 config_file: None
8 config_items: []
9 dry_run: True
10 values_needed: False
11 --------------------------------------------------------------------------------
12 --------------------------------------------------------------------------------
13 &salad
14 base = 'kale'
15 fruit = 'banana'
16 vegetable = 'tomato'
17 how_many = 22
18 dressing = 'balsamic'
19 /
20 """
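The exact-match behavior described above can be reproduced with a small sketch (illustrative only, not the real uwtools test suite): capturing stdout and comparing it against an expected string shows why every dash and every column of indentation matters.

```python
import io
from contextlib import redirect_stdout

def pretty_print(user_args):
    # Mirrors the templater.py pretty printing from Image 2 (70-dash version).
    print("Running script templater.py with args:\n", f"{'-' * 70}\n{'-' * 70}")
    for name, val in user_args.items():
        if name not in ["config"]:
            print(f"{name:>15s}: {val}")
    print(f"{'-' * 70}\n{'-' * 70}")

# Capture the printed output, much as pytest's capsys fixture does.
buf = io.StringIO()
with redirect_stdout(buf):
    pretty_print({"outfile": None, "dry_run": True})
captured = buf.getvalue()

# A one-character mismatch in the expected string (a missing dash,
# a wrong indent) makes comparisons like these fail.
assert captured.startswith("Running script templater.py with args:")
assert f"{'outfile':>15s}: None" in captured
assert "-" * 70 in captured
```

The `{name:>15s}` format right-justifies every argument name into a 15-character column, which is exactly why the expected strings in the test file had to be indented so precisely.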
The Final Hurdle:
Once we found the correct format for our print statement we were passing the pytests; however, upon further inspection we received feedback from the pylinter that our solution was not recommended. Inspecting this feedback further, we found that line 3 in Image 3 was over the suggested character limit per line. We needed to decrease the count on this specific line to pass the pylinter. By changing the code in Image 2 slightly, we were able to pass both the pylinter and the pytests (Images 5 & 6). Our solution was simply to cut down the number of dashes that appeared on each line (lines 3-4 & 11-12, Image 3).
Image 4 - Pylinter Feedback UW-201
Image 5 - workflow-tools templater.py pretty printing
Without implementing the pylinter feedback
print("Running script templater.py with args:\n", f"{('-' * 80)}\n{('-' * 80)}")
for name, val in user_args.__dict__.items():
    if name not in ["config"]:
        print(f"{name:>15s}: {val}")
print(f"{('-' * 80)}\n{('-' * 80)}")
Image 6 - workflow-tools templater.py pretty printing
With the implementation of the pylinter feedback
print("Running script templater.py with args:\n", f"{('-' * 70)}\n{('-' * 70)}")
for name, val in user_args.__dict__.items():
    if name not in ["config"]:
        print(f"{name:>15s}: {val}")
print(f"{('-' * 70)}\n{('-' * 70)}")
In our Tagup, we moved UW-201 into Peer Review and Christina peer reviewed the ticket. In our Wednesday DSM we will confirm with the team whether 1 comment is sufficient to merge a ticket into the develop branch or whether it should be 2.
In our DSM we confirmed with the team that 1 comment was sufficient to merge a ticket into the develop branch. Officially finishing UW-201 meant that Emily and I had finished all the tickets the team had planned for us during Program Increment 6. After speaking with the team, we decided Emily and I would begin working on an additional ticket, UW-199. At an estimated 5 points, UW-199 will be the largest ticket we have worked on, and it will be our last ticket with the Unified Workflow Team.
What is the task, and what is its description?
Task: As a Developer I need to create a python method in the Config base class to establish the nested inclusion feature so that INCLUDE behaves the same for all dictable file types
Description:
pyyaml allows us to add a constructor with syntax like `!INCLUDE <file path>` so that we can signal the inclusion of the contents of another file, given that we've appropriately defined what !INCLUDE needs to do in a method.
In this work, we want to mimic that constructor behavior for all other dictable file types by writing an include method for the Config base class that will be called by non-YAML config subclasses when this kind of tag is found in an input file.
By default with pyYAML, parsing of extensions defined with the !INCLUDE notation happens before a dictionary is available to the user in memory. This order of operations will continue to be assumed for the extension of any of our dictable configuration file types (F90 namelist, INI, JSON, etc.).
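For context, the pyyaml constructor mechanism the ticket refers to looks roughly like this. This is an illustrative sketch: the real uwtools constructor would open and parse the referenced file, whereas here we just record the path to keep the example self-contained, and the file name `fruit_config.yaml` is made up.

```python
import yaml

def include_constructor(loader, node):
    # Real code would open and parse the referenced file here; this
    # stand-in just records which path the !INCLUDE tag pointed at.
    return {"included_from": loader.construct_scalar(node)}

# Register the custom tag on SafeLoader so it is resolved during
# parsing, i.e. before the dictionary ever reaches the user in memory.
yaml.add_constructor("!INCLUDE", include_constructor, Loader=yaml.SafeLoader)

doc = yaml.safe_load("salad: !INCLUDE fruit_config.yaml")
# doc is now {"salad": {"included_from": "fruit_config.yaml"}}
```

The point of UW-199 is that this resolve-during-parse behavior is free with pyyaml but must be written by hand for the other dictable formats.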
What does this mean?
From reading the description and reviewing the acceptance criteria I had several follow-up questions:
What would our starting parameters look like?
What would our returning value look like?
Should we be traversing the entire dictionary looking for an include (!INCLUDE) tag, and if we hit one, do we need to update the dictionary with the contents of that flagged file?
Breakdown:
We had two different files:
config.py (code)
test_config.py (test cases to go with config.py)
Emily & I continued to work on UW-199, though we are finding the task more difficult than the previous tickets we worked on, as UW-199 is estimated at 5 points. The team was also able to hold a Meet & Greet to welcome a new team member, Brian Weir. It was great to hear again about Christina's and Fredrick's backgrounds from before they joined the Unified Workflow Team. Brian will begin working with the team in Program Increment (PI) 7, starting in the New Year.
Emily and I were able to get a better sense of what our input and output values would look like. I've included an example of an input and output for an f90 namelist file:
Input: tests/fixtures/include_files.nml
&config
salad_include = 'INCLUDE [./fruit_config.nml]'
meat = beef
dressing = poppyseed
/
Output: new dictionary
config = {
    'salad_include': {
        'fruit': 'papaya',
        'vegetable': 'eggplant',
        'how_many': 17,
        'dressing': 'ranch'
    },
    'meat': 'beef',
    'dressing': 'poppyseed'
}
Other files needed: tests/fixtures/fruit_config.nml
&config
fruit = papaya
vegetable = eggplant
how_many = 17
dressing = ranch
/
Essentially, we wanted to iterate over the dictionary passed into our Config base class. We would search the keys to see if a value began with the 'INCLUDE' tag. If it did, we needed to follow that config path, for example ./fruit_config.nml, to get the next set of keys/values. Then we would copy the contents of fruit_config.nml into the salad keys/values, replacing the line 'INCLUDE [./fruit_config.nml]' altogether with the contents of the fruit_config.nml file. To better understand how to translate an f90 namelist file into a working dictionary I used this reference, which helped clear up the intended goal for UW-199.
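That idea can be sketched as a recursive helper. This is illustrative only; the actual UW-199 method in the Config base class and its tag syntax may differ. `load_file` is a hypothetical callable that parses a referenced config file into a dict, and an in-memory dict stands in for the fixture on disk.

```python
import re

# Matches values like "INCLUDE [./fruit_config.nml]" and captures the path.
INCLUDE_PATTERN = re.compile(r"INCLUDE \[(.+)\]")

def resolve_includes(cfg, load_file):
    """Return a copy of cfg with every INCLUDE value replaced by the
    parsed contents of the referenced file, recursing into nested dicts."""
    resolved = {}
    for key, val in cfg.items():
        match = INCLUDE_PATTERN.search(val) if isinstance(val, str) else None
        if match:
            # Follow the captured path and splice its contents in,
            # resolving any includes the included file itself contains.
            included = load_file(match.group(1))
            resolved[key] = resolve_includes(included, load_file)
        elif isinstance(val, dict):
            resolved[key] = resolve_includes(val, load_file)
        else:
            resolved[key] = val
    return resolved

# In-memory stand-in for parsing tests/fixtures/fruit_config.nml from disk.
fixtures = {
    "./fruit_config.nml": {
        "fruit": "papaya", "vegetable": "eggplant",
        "how_many": 17, "dressing": "ranch",
    }
}
config = {
    "salad_include": "INCLUDE [./fruit_config.nml]",
    "meat": "beef",
    "dressing": "poppyseed",
}
result = resolve_includes(config, fixtures.__getitem__)
```

After the call, `result["salad_include"]` holds the contents of the included file while the other keys pass through unchanged, matching the input/output example above.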
I include another example for UW-199 below.
After working with Christina, Emily and I wanted to solidify our understanding of UW-199. Christina provided us with starter files for the case where the user supplies our code with an f90 namelist file. We wrote up what the ideal input and output would be for the files listed below.
Input file: tests/fixtures/include_files_with_sect.nml
&config
salad_include = 'INCLUDE [./fruit_config_mult_sect.nml]'
meat = beef
dressing = poppyseed
/
Output: new dictionary
config = {
    'salad_include': {
        'config': {
            'fruit': 'papaya',
            'vegetable': 'peas',
            'how_many': 17,
            'dressing': 'ranch'
        },
        'setting': {
            'topping': 'crouton',
            'size': 'large',
            'meat': 'chicken'
        }
    },
    'meat': 'beef',
    'dressing': 'poppyseed'
}
Other files needed:
tests/fixtures/fruit_config_mult_sect.nml
&config
fruit = papaya
vegetable = peas
how_many = 17
dressing = ranch
/
&setting
topping = crouton
size = large
meat = chicken
/
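As a rough illustration of what translating a namelist into a dictionary does with the multi-section fixture above, here is a hand-rolled parser for these simple cases. It is a sketch only: real code would use a proper namelist library such as f90nml, and values here stay as strings (e.g. `'17'`) rather than being converted to their Fortran types.

```python
import re

def parse_namelist(text):
    """Naively parse '&section ... /' blocks into a dict of dicts."""
    sections = {}
    for name, body in re.findall(r"&(\w+)(.*?)/", text, re.DOTALL):
        entries = {}
        for line in body.strip().splitlines():
            key, _, value = line.partition("=")
            # Values are kept as bare strings; quotes are stripped.
            entries[key.strip()] = value.strip().strip("'")
        sections[name] = entries
    return sections

nml = """&config
fruit = papaya
vegetable = peas
how_many = 17
dressing = ranch
/
&setting
topping = crouton
size = large
meat = chicken
/
"""
parsed = parse_namelist(nml)
```

Each `&config`/`&setting` block becomes its own nested dictionary, which is why the output dictionary for the multi-section fixture has two sub-dicts under salad_include.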
With the end of Sprint 6.5 approaching, we confirmed with the team that UW-199 would carry over past this sprint and into Program Increment (PI) 7. Since this will also be Emily's and my last week with the team, we decided to clean up UW-199 so that whoever picks up this ticket in the future will have some insight into where we left off. We were also asked to create a PR for UW-199, so the Unified Workflow Team would have this branch to work from going forward.
Image 7 - PI 6 Objective Completion
Program Increment(PI) 6
At the end of each PI, an Inspect & Adapt (I & A) event is held to demonstrate completed features and discuss best practices to identify ways to improve PI execution.
Looking back on PI 5's Inspect & Adapt, we can see we are again right around the 60% marker for objectives completed. The problem-solving part of the I & A again addressed the ways the team was hindered that had not been accounted for. Some obstacles included: tickets changing within the PI, objectives not aligning with deliverables, and too many scheduled meetings that frequently take up the bulk of team members' time.
Image 8 - What Didn't Work
With Sprint 6.5 closed out, our time with the Unified Workflow Team has come to an end. We'll finish up with the team on Monday, November 21st by attending our last retrospective. Additionally, at the end of every PI the EPIC Team hosts an Inspect & Adapt (I & A) event to demonstrate completed features and discuss best practices to identify ways to improve the execution of the next Program Increment.
Closing out PI 6 and Sprint 6.5, we attended a Sprint Planning & Retrospective meeting. We talked more about the long IP sprint between PI 6 and 7, which directly relates to each individual team member's capacity. In our retrospective we covered the usual topics: what went well and what could have been better. Emily and I both felt that Sprint 6.5 could have gone better for us if we had reached out to Christina earlier for help instead of waiting for the holiday weekend to pass. Overall, the team did a great job staying on task and completing objectives despite meeting fatigue and outside blockers.
To wrap up my time on the Unified Workflow Team, I'll be sharing an additional post on my blog reflecting on my experiences as a Software Developer Intern at the National Oceanic and Atmospheric Administration (NOAA).