by Andrew Hogan
May 30 - June 4
This week I completed my first ticket for the active 4.4 sprint. My task was to format a logger to pass pylint checks (discussed in Week 2). The task was straightforward enough: the installed pylint package flags the parts of a script that need reformatting or modification to meet PEP 8, Python's standardized guide of coding conventions that developers follow for consistent coding practices.
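I can't reproduce the actual script here, but the sketch below shows the flavor of the changes: the kinds of complaints pylint raises against a logger module and how they get resolved. The names are illustrative, not from the real ticket.

```python
"""Demo module docstring; pylint's missing-module-docstring (C0114) wants this."""

import logging

# pylint's invalid-name check (C0103) expects UPPER_CASE for module-level constants.
LOG_FORMAT = "%(asctime)s %(name)s %(levelname)s: %(message)s"

logging.basicConfig(format=LOG_FORMAT, level=logging.INFO)
LOGGER = logging.getLogger(__name__)


def greet(user_name):
    """Log a greeting; functions without a docstring trip C0116."""
    # logging-fstring-interpolation (W1203) prefers lazy %-style arguments here,
    # so the string is only built if the message is actually emitted.
    LOGGER.info("Hello, %s", user_name)
```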
I was guided through the process of working on a repository on my local machine. I had simulated this process before in classwork and self-guided practice, but the process looks and feels different inside a living, breathing software environment, especially when it comes to pull requests, which are essentially peer-reviewed submissions of commits before they actually merge into the development branch. I was appreciative of my team taking the time to guide me through setting up my VS Code environment and walking me through the steps to clone, pull, stage, commit, and push changes to their GitHub repository, roughly the loop sketched below.
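We drove all of this through VS Code's git integration and the command line, but for my own notes, here is that same loop expressed with the GitPython library. The repository URL, branch, and file names are placeholders, not the real project.

```python
"""Sketch of the clone -> branch -> stage -> commit -> push loop via GitPython."""

from git import Repo  # pip install GitPython

# Clone the team repository to the local machine (hypothetical URL).
repo = Repo.clone_from("https://github.com/example-org/example-repo.git", "example-repo")

# Pull the latest changes before starting work.
repo.remotes.origin.pull()

# Work on a feature branch so the change goes up as a pull request,
# not straight into the development branch.
feature = repo.create_head("fix/logger-pylint")
feature.checkout()

# ... edit files here ...

# Stage, commit, and push; the pull request is then opened on GitHub for peer review.
repo.index.add(["logger.py"])
repo.index.commit("Format logger to pass pylint checks")
repo.remotes.origin.push(refspec=f"{feature.name}:{feature.name}")
```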
Modifying the logger was simple enough, and it was a neat exercise in "expecting the unexpected." My team discovered that some of pylint's conventions have been discontinued, which caused whitespace and formatting checks to fail in odd places. After some investigation, it appeared that workarounds were available (i.e., disable the checks blocking the script from passing), but we were cautious about using them, since those checks generally exist to prevent other issues from emerging elsewhere in the project. It appears this bug extends just a little beyond the scope of my ticket, and we will need to explore other methods. But I can at least mark it as "peer reviewed," and it will be revisited at a later time -- perhaps next week when we begin our next sprint. Overall, I was enthused by the opportunity to contribute and actively explore the stages of software development (and Googling when stuff doesn't make sense~).
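For reference, the workaround looks like this: pylint honors inline comments that suppress a specific message for one line or for a whole file. The message names below are examples of the sort of check being silenced, not the ones from our ticket.

```python
"""Illustration of pylint's disable directives (message names are examples only)."""
# A directive near the top of a module suppresses the check for the entire file:
# pylint: disable=trailing-whitespace


def shout(message):
    """Return the message upper-cased."""
    # An inline directive suppresses a single message on this one line only;
    # the same check still runs everywhere else in the project.
    return message.upper() + "!!!"  # pylint: disable=line-too-long


print(shout("use sparingly; these checks usually exist for a reason"))
```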
(Shoutout to Ryan and Christina for the excellent onboarding, walking me through setup, and encouraging and actively participating in my learning process.)
For this sprint, I played a more active role in the discussion of the previous sprint.
Generally, these meetings consist of discussing the good, the bad, and any actionable items going forward. Afterwards, we plan the next sprint's story/scrum board and decide on the story points needed to complete tickets. Stories are the smallest units of the agile framework; collectively, they describe a software user's experience with a particular program, feature, etc. Story points are abstract values representing the units of time needed to complete a task for that story within the active sprint. These points could describe hours, or even days. The important part is that everyone on the team agrees on what the estimate symbolizes, so that the team understands the time commitment needed for all tasks within the sprint and can accurately divvy up the workload amongst members.
This time around, we used a tool called "Scrum Poker." The scrum master selects the next sprint's tasks, and members of the team anonymously vote on an estimate of the amount of time each task requires. For example, a team of 5 voting on ticket #1 selects the values (5, 3, 3, 2, 1); on average, the team estimates the ticket requires ~3 story points to fulfill within the current sprint. After these selections, if the values are spread far apart, we open a discussion about why we chose the values we did. Factors can include personal bias ("I have the skills needed to complete this task adequately, and I can complete it quickly and competently"), prior experience ("We've completed tasks like these before, and they generally take this length of time"), and anticipating errors or hang-ups ("This ticket will require us to wait on another team completing a task on their end"), among many others.
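To make that arithmetic concrete, here's a toy sketch of how a round of votes reads out. The spread threshold is made up; in practice, "spread far apart" was a judgment call.

```python
"""Toy sketch of reading out a round of Scrum Poker votes (threshold is made up)."""

from statistics import mean


def summarize_votes(votes, spread_threshold=2):
    """Report the rough consensus and flag wide spreads for discussion."""
    estimate = round(mean(votes))      # (5 + 3 + 3 + 2 + 1) / 5 = 2.8 -> ~3 points
    spread = max(votes) - min(votes)   # 5 - 1 = 4: worth talking out
    return estimate, spread > spread_threshold


points, discuss = summarize_votes([5, 3, 3, 2, 1])
print(f"Estimated story points: {points}; open discussion: {discuss}")
# -> Estimated story points: 3; open discussion: True
```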
It was an interesting experiment that culminated in team members converging, on average, toward more unanimous estimates. That convergence can be healthy for a cohesive workflow: team members come to understand the strengths and weaknesses of the group, as well as realistic time estimates for the work that needs to be done.
From the perspective of my active team, these discussions were passionate, sometimes heated, but it was impressive to see my team members voice differing opinions about their own process. I've enjoyed being on a smaller, closely knit team unafraid of sharing opinions for the sake of betterment. For me at least, it was enlightening to see much of what could go wrong -- and right -- with the workflow.