Project Reflection:
What went well?
Once we'd figured out what we were going to do for our modification, it was simple enough to establish some rules for gameplay. As a team we get along really well, so there were no major clashes of opinion or issues. I didn't have any trouble communicating with the others or feel like I wasn't being listened to. We stuck with our core idea of a market system throughout the process.
What didn't go well?
It took us a while to figure out a modification that was going to fit the brief, but also be realistic enough to be playable. We grappled with the idea of adding an extra resource to Catan, but eventually had to come up with something else. We also had a few minor issues with testing it out, as it was difficult to find a time that the whole group could gather to play our modification and record data. I had a technical problem at home where I couldn't access the internet for a bit, although that got resolved just in time for me to contribute to the team document.
So what did I do?
I helped work out how our in-game economy was going to operate. Specifically, I suggested that we keep the starting prices for resources at the same value to avoid having the market tank, as it had in our early trials. There wasn't much I could do about my wifi problem, unfortunately, but I managed to finish my piece of the assignment and submit it to the group document on time.
What would I do differently?
If I had the opportunity to redo the assignment from the beginning, I would definitely want to work on keeping records of our progress. We got a little caught up in playing our modification and forgot to take meeting minutes. It's not something I'm familiar with, so I need some practice. I'd also pick a game that isn't quite so complicated as Catan; there was constant debate about the rules even as we attempted to modify them. We wanted to choose Monopoly but were told we could not.
Now what do I need to learn?
In future, I'd like to learn how to coordinate a group assignment like this that requires frequent face-to-face meetings. So far, much of our discussion happens online which is great for keeping everyone updated and involved, but I want to be able to organise team projects that happen in person.
Project Reflection:
What went well?
At the beginning of this project, we decided to do a card-based party game in a similar vein to Cards Against Humanity. We had lots of fun creating a list of scenarios and funny, edgy responses, and quickly smashed out around thirty to forty cards. We also came up with the idea of having the whole group vote on a winner, instead of the traditional dealer/Card Czar approach where one person makes that decision. However, refining this was tricky and took a good few weeks.
The presentation itself also went well, and the feedback we received was very helpful in moving away from being basically "Cards Against Humanity 2".
How did I contribute?
Personally, I contributed by designing the cards as if they were going to be actually produced. These would be used in a future pitch, once we had ironed out all the issues with our game. I didn't have to edit these at all, as the group was happy with them straightaway. Prior to presentation, I proof-read our slides for coherency, grammar, and to make sure that we were all going to be on the same page when we stood up to deliver our pitch. The purpose of this was also to ensure that nothing new had been added without having a group discussion first.
It was my idea initially to have Special cards, or abilities, that would add an element of chaos to our game. These were later edited and changed quite a lot, so I can't say that I was the only contributor to them. I also suggested a number of Scenario and Response cards that went into the final list. These were tweaked multiple times as well, but I tried to keep them roughly within our heist theme.
I did most of the refining of the points system, working out that the number of votes a card received would translate to the number of points the player scored for the round. My thinking behind this was in response to the feedback that the Special cards were essentially worthless because there was no incentive to buy them, if players were only earning a total of one point per round. By allowing players to score multiple points on a round, even if they did not win, we could keep the abilities mechanic.
What didn't go well? What did we do about it?
Throughout pre-production, we were keenly aware of how similar our game was to Cards Against Humanity. Part of our early feedback was that it bore too close a resemblance, and that in order to continue, we would need to come up with several unique mechanics. This was challenging, because most of our proposed changes were superficial at best. Eventually we came up with a few, but getting there took a while. The Special cards were particularly hard to formulate and define; it was my idea to include abilities that disadvantage other players as well. Unfortunately, they still weren't ideal in the end and needed more work.
Another way in which we tried to distance ourselves from Cards Against Humanity was to introduce a points-based system. This was to give players a solid, tangible objective to win: whoever has the most points, wins the game. Additionally, a simple voting mechanic was created to allow this whole system to operate. According to our feedback after the pitch, we had done well in establishing gameplay pillars, but they needed to be more specific. We had 'humour' as one, which would've been far too subjective and broad.
Communication was another problem we encountered; we struggled to get hold of Taleisha for a bit and were unsure how she was going to contribute. I brought up our concerns that we couldn't get through to her, and found out that she had a lot going on at home that was impacting her ability to actively participate. I tried to make sure that she also had things to do to contribute but it was really difficult when much of the work happened in class.
Otherwise, our communication problems mostly came down to either not understanding what was being suggested or not being able to effectively communicate an idea. I found that I was doing a lot of mediating - trying to figure out what was being said and then translating it for the rest of the group. I wasn't thrilled about that, but I also didn't want us to have any arguments. Fortunately, we didn't end up having to get anyone else involved. The topics that gave us particular grief were agreeing on the ideal number of cards in a hand, and the number of rounds per match. We decided to start with five to seven cards in hand and determine during play-testing if it needed changing.
What can I do differently in future?
I will need to do better at recording what happens during meetings and keeping a changelog for every assignment. Now that I know more or less what a good changelog looks like, and about software like Trello, I think it will be easier to do in future. In hindsight, we didn't track our changes closely enough for this Brief, which means that we can't say for sure exactly who did what and when. As well as hurrying to get our presentation ready, I think we could've benefitted from spending more time working out the issues with our voting system before trying to pitch it. Lastly, I personally can make more of an effort to be patient; a couple of times I got frustrated because I was struggling to communicate effectively.
Now what do I need to learn?
It's not so much something I need to learn as something I'd like to get better at: keeping a Trello board (or similar) during assignments. This would help me make sure nothing is overlooked, and to have a record of how I contributed to the project. I feel like I'm getting better at managing my time, in that I'm no longer leaving my assignments to the last minute - probably thanks to the nature of group work.
Spreadsheet containing the collected data from our play-tests. Between Tests 3 and 4, we implemented an updated version of the voting system. It was also at that point that someone pointed out that we needed to reshuffle the Response deck during play, so I started keeping track of that too. Three cards ("the wrong backpack", "Mystery Inc." and "fixing the races") were removed during later stages of play-testing because they went unplayed so often that they were useless to us. A few others, such as the "James Bond" cards, had a 4:3 win-unplayed ratio, but the consensus was that they were too funny to remove. The reason that both went unplayed during Play Test 8 was that the player had them in hand, and was waiting for a Scenario that required two cards.
Project Reflection:
What went well?
During this assignment, we had several things go really well. Play-testing was definitely the best, as we had created a simple game that could be played in approximately ten minutes. Because of this, we were able to obtain lots of data from multiple play-throughs, occasionally outsourcing to other groups for their input. All the feedback we received during these particular trials was in the vein of adjustments to the systems, such as how Special cards could work. We took this on board and decided that we would shuffle Special cards into the Response deck and make the two look very similar, so as to avoid a scramble to grab a Special card when it surfaced. The other major feedback we got during play-testing was that our ratio of Response to Scenario cards wasn't right. By looking up the ratios for games like Cards Against Humanity, it became clear that we needed far more Response cards.
Additionally, from the data collected, it was easy enough to identify which cards weren't being used and remove them. This occurred at various stages throughout development, but some of the earliest removals were generally for the reason of being too specific or obscure to be funny. Several cards were aimed at a specifically Australian audience, despite our target audience being the wider demographic of anyone over the age of 18. We hadn't accounted for such bias at the beginning of the project, but we became more aware of it and actively attempted to avoid it.
Thanks to Andrew, we also have a changelog of what we did during meetings (final three pages of the GDD). It also includes changes we made outside of class, and discussed in our group Discord chat. He kept meticulous track of who added which cards and when, as well as the reasons for why some were removed or changed. For the most part, we had fun with this project. Although it was sometimes difficult to refine our ideas enough to be useful, once we had a working concept it became easier to identify and fix problems.
How did I contribute?
I volunteered to manage the data we gathered during play-testing, and did so by creating and updating a spreadsheet with all of this information. During the course of Brief 2, I wrote up a list of things to keep track of (see Week 9's Learning Journal) but realised that it wasn't ideal. A few of those points were going to be too difficult to collect. As a result, I came up with a new list:
number of play-throughs,
number of rounds and players per match,
whether or not the special cards were being included,
the winning card(s),
any cards that didn't get played - i.e. remained in a player's hand for the entirety of the game, and
the length of each round, and thus each match.
Towards the end of the project, I also put together the rulebook. I drew the information from the draft version, as well as the notes we'd taken in the GDD as development progressed. I shared it with the group and quickly received some minor feedback, but it was otherwise accepted. This feedback was in reference to 'corner-cases' where a match ends in a draw. I realised that I hadn't included any mention of what to do in that situation, so I added a little more instruction.
Once we'd developed a list of Scenarios and Responses, and were ready to test our game, I made some rough prototypes for the physical cards. They were simple cards with questions and answers written in marker, but a definite step up from the scraps of paper we had for our very first test. We never planned to have printed prototypes, mainly because the number of cards, and the full-colour design, would've made printing far too expensive. I also made 'sketches' of our Special cards once we'd come to an agreement as to what they were. Because this took a while, we didn't have functional Special cards until around Play Test 4.
What didn't go well? What did we do about it?
Although lots went well, there were some things that didn't. To begin with, our communication issues from the last assignment were ongoing, boiling down to differing opinions and ideas.
Throughout this project, I had some issues with Andrew. It was never anything worth taking up with Mark, but I felt that he wasn't always treating the assignment like the group project it was. He absolutely had days where he was a wonderful team-member and made lots of good contributions (having tokens to track voting and players' points, thus streamlining the whole system, was his idea) but on other days he was difficult to work with. I found him to be somewhat rude, such as turning his back while I was still trying to talk to him. I spoke to Matt to confirm that it wasn't just me, and he agreed that Andrew was being antisocial with us.
I tried to let it go because it was coming up on the end of the trimester and everyone was tired, but eventually it got to a point where he would become offended when we tried to suggest adjustments that he didn't agree with. He tried to add a handful of Responses to the game without checking with the rest of us if they were acceptable. We'd had a clear policy right from the get-go that cards needed to be on-theme as much as possible, but that exceptions could be made if everyone in the group thought they were funny enough.
Having said this, some of the new ideas were good, and I asked Matt and Taleisha to look over them as well so that we could officially be in agreement and add them. When I tried to bring it up with Andrew why a couple weren't going to make the cut (such as being too specific), he became annoyed and barely spoke to us for the rest of the lesson. I felt that he had lots of interesting thoughts and ideas, but wasn't really willing to accept the fact that we were a team and therefore equals.
That was by far the worst problem we had with this assignment, and everything else was significantly more manageable. In the beginning, we forgot to keep track of round lengths and un-played cards. However, as we got more into the swing of play-testing, it became easier to remember to time each round. On a few occasions we made mistakes with this, and I had to substitute average times for actual ones. Our match times varied somewhat, typically by between one and four minutes. The average time for a match was around seven minutes.
Another hiccup was getting everything done in time. I had to chase the group a little to get the rules finalised so that we could complete the rulebook. In the end, I did it myself and shared it for input so that we would all have something to turn in. Additionally, we couldn't reach our desired number of Response cards. When we researched the ideal ratio, we decided that eighty Responses and thirty Scenarios would be our goal. Unfortunately, we only managed to achieve about half of the Responses in the time that we had. Coming up with enough cards that everyone agreed were funny was extremely difficult, and turned out to be too much of a task for our timeframe.
What can I do differently in future?
Going forward, I will try to be a bit more organised on the whole. At times, I felt out of my depth between all of my assignments, and that could've been alleviated by having designated times for working on each one. Plus, if at least one person has a solid 'plan of progress' - i.e. knowing what needs to happen and when, ahead of time - it might help minimise any communication and coordination problems. I will also need to make sure that all the material needed to run any kind of testing is available as soon as possible. For this assignment, that would have meant having the Special cards finalised and prototyped before we were halfway through play-testing.
Now what do I need to learn?
From now on, I'd like to learn how to divide up the workload closer to the beginning of a project, to make sure that everyone always has something to do. My concern is that it becomes very hard for everyone to contribute equally when there is no plan or task-board. As I mentioned in the previous assignment's reflection, a Trello board would be useful to have. Unfortunately, by the time I'd realised, we'd gotten too far with this Brief for it to be of any real value to create one.
Introduction
Projectile physics in video games often mimics real-world physics, to varying degrees of realism. A ‘projectile’ can be defined simply as an object moving through space where gravity is the only force acting on it (The Physics Classroom, n.d.). For the purposes of this paper, projectiles that would pierce the stratosphere - such as intercontinental missiles and rockets - are excluded from consideration. This report will explain the basic operation of real-world physics, with simulated examples, including definitions for speed, velocity and acceleration. The application of these concepts in video games will then be discussed, as well as the importance of realistic physics in games.
Real-World Physics
Whilst the projectile is the moving object itself, ‘projectile motion’ is the path of movement or trajectory the object takes once launched. It will always come back down toward the centre of the Earth, provided it remains close enough to the ground (Byjus, n.d., Projectile Motion). The effect of gravity can only be accurately calculated if it is applied to the acceleration of the object, rather than the velocity (Moody, 1951).
Forward motion occurs on the x axis, while downwards motion due to gravity occurs on the y axis. θ is the angle of the projectile when launched. To illustrate this, I created a diagram (fig. 1). Vi is the initial velocity of the projectile, whilst Vi1 is the initial velocity on the y axis and Vi2 is the initial velocity on the x axis.
Fig. 1
Fig. 2
Fig. 3
Fig. 4
There are three primary quantities used to describe motion: speed, velocity and acceleration. They are all different, but closely interrelated (Accelerate Learning, n.d.).
Speed is the rate at which an object moves, and is arguably the easiest to calculate.
s = d ÷ t where d is the total distance travelled, and t is the time taken
Velocity is the rate at which the position of an object changes. It is very similar to the equation for speed, but velocity requires values for both speed and direction (Byjus, n.d.). A typical classroom example is "x km/h, North".
v = p ÷ t where p is the total displacement and t is the time taken
Distance refers to the actual number of increments travelled, whereas displacement is the shortest path between two points. For instance, a winding road may have a distance of 20km, but a displacement of 15km (see fig. 2).
Acceleration is the rate of change of velocity (Khan Academy, 2015).
a = (vf - vi) ÷ t where vf is the final velocity, vi is the initial velocity and t is the time taken
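The three formulas above can be sketched as small functions. The snippet below is a minimal Python illustration: the speed and velocity figures come from the winding-road example (fig. 2, driven in a hypothetical half an hour), and the acceleration numbers are made up purely for demonstration.

```python
# Speed, velocity and acceleration, straight from the definitions above.
def speed(distance, time):
    return distance / time

def velocity(displacement, time):
    # Uses displacement rather than distance, so direction matters.
    return displacement / time

def acceleration(v_final, v_initial, time):
    return (v_final - v_initial) / time

# The winding road from fig. 2: 20km travelled, 15km displacement, in half an hour.
print(speed(20, 0.5))        # 40.0 km/h average speed
print(velocity(15, 0.5))     # 30.0 km/h average velocity, toward the destination
# Hypothetical: a car going from rest to 30km/h in 10 seconds.
print(acceleration(30, 0, 10))  # 3.0 (km/h per second)
```

Note how the same journey gives a lower velocity than speed; the two only coincide when the path is a straight line.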
To see projectile physics in action, I ran two simple simulations (The Physics Classroom, n.d.). Disregarding the object's mass and air resistance in both, the first projectile was launched at a 45° angle from a starting height of 0m, at 60m/s (see fig. 3). It reached its highest point of 91.84m on the y axis, after which gravity pulled it back down. It took 4.33 seconds to reach the apex of the motion path and a total of 8.659 seconds to complete the flight. The projectile travelled 367.35m along the x axis before returning to its starting height of 0m. It had an average velocity of 42.424m/s (rounded to three decimal places).
In the second simulation, the projectile was not launched from the ground but rather from a starting height of 100m. It also moved at a speed of 60m/s, but at an angle of 0° - i.e. perfectly horizontally (see fig. 4). The projectile travelled a total distance of 271.06m in a time of 4.518 seconds, before reaching the ground. It had an average velocity of 59.1m/s.
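Both runs can be checked against the closed-form projectile equations. The Python sketch below (assuming g = 9.8m/s² and no air resistance, as above) reproduces the simulator's distance, height and time figures to within rounding.

```python
import math

G = 9.8  # m/s^2, acceleration due to gravity; air resistance ignored

def simulate(launch_speed, angle_deg, height=0.0):
    """Closed-form projectile motion: returns (flight time, range, max height)."""
    theta = math.radians(angle_deg)
    vx = launch_speed * math.cos(theta)  # initial velocity on the x axis
    vy = launch_speed * math.sin(theta)  # initial velocity on the y axis
    # Solve height + vy*t - 0.5*G*t^2 = 0 for the positive root (landing time).
    t = (vy + math.sqrt(vy ** 2 + 2 * G * height)) / G
    apex = height + vy ** 2 / (2 * G)
    return t, vx * t, apex

# First simulation: 60 m/s at 45 degrees from ground level.
t1, x1, h1 = simulate(60, 45)     # ~8.66 s, ~367.35 m, ~91.84 m
# Second simulation: 60 m/s horizontally from a height of 100 m.
t2, x2, _ = simulate(60, 0, 100)  # ~4.52 s, ~271.05 m
```

The tiny discrepancies in the last decimal place come down to rounding within the interactive simulator.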
Video-Game Physics
It is important to have physics in games because it confines events to a scope which the player will understand. In other words, the player will automatically understand that a gun or bow is used to fire a projectile at a target, so the developers do not have to spend time, effort and money explaining it. Note that this is different from gameplay instructions such as “press E to aim”. One of the best-known examples of real-world physics being used in a video game is the Angry Birds series. Air resistance is disregarded in the game, as it is in most textbook equations, so the only force acting on the bird once it has been launched from the slingshot is gravity (Allain, 2010).
Many games use a physics engine in some capacity. The main purpose of such an engine is to perform two functions: collision detection and force simulation (Yeates, 2014). This means that as the player interacts with the world and thus inputs a force, such as throwing a spear, the engine simulates the effect of gravity on the spear as well as other objects that it may interact with. For example, if a player throws the spear at a far-off target, the physics engine will predict the trajectory by simulating gravity and air resistance, before triggering collision detection once the spear lands.
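The two functions described above can be sketched in a few lines. This is an illustrative Python sketch, not any real engine's API: the force simulation is a semi-implicit Euler step (the integration scheme many game engines use for stability), and collision detection is reduced to a flat ground plane at y = 0.

```python
GRAVITY = -9.8  # m/s^2
DT = 1 / 60     # a typical fixed physics timestep (60 updates per second)

def step(pos, vel):
    """Force simulation: one semi-implicit Euler step under gravity alone."""
    x, y = pos
    vx, vy = vel
    vy += GRAVITY * DT                            # gravity changes velocity...
    return (x + vx * DT, y + vy * DT), (vx, vy)   # ...velocity changes position

def throw(pos, vel):
    """Advance a projectile (e.g. a thrown spear) until it reaches the ground."""
    while pos[1] > 0 or vel[1] > 0:
        pos, vel = step(pos, vel)
    return pos  # collision detection would trigger here (y has reached 0)
```

Stepping a spear thrown at 60m/s and 45° (so vx = vy ≈ 42.43m/s) lands it close to the 367m range calculated analytically earlier; the small difference comes from the discrete timestep.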
Jeffrey (2019) explains that although realistic physics are important for games to have, it can be better to "bend the rules" a bit to keep a game fun and playable. If the physics are too realistic, the mechanics can become granular and punishing - which can ruin the player's fun. Additionally, it takes up memory and processing time for a computer if it must constantly simulate reality. The classic paddle game, Pong, uses highly simplified physics - the ball goes in one direction until it collides with something, and then goes in the other. What makes Pong fun is its simplicity (Parker & King, 2008).
One of the best ways to simulate projectile physics when developing a game is to use ray-casting. This is a line that the computer draws between two points, and can be used to determine if there is anything in the way (a more detailed explanation can be found under Week 10's Learning Journal). Ray-casting is often used for in-game projectiles such as bullets fired from a gun, and is highly beneficial because it can determine whether or not the bullet hits a target. The term for this process is 'hitscan' (Jung, 2019). However, for a more realistic simulation, developers may need to write separate code to apply physics to the gun and bullets. Adding recoil is the most common addition, but certain games generate a new projectile object when the gun is fired. This allows the physics engine to apply collision detection and force simulation to create a far more realistic experience. Jung (2019) gives the examples of Max Payne and Sniper Elite.
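The hitscan idea can be illustrated with a tiny 2D ray-cast. The sketch below is hypothetical (no real engine's API): it tests a ray against circular targets and returns the nearest one hit, resolving the shot instantly with no travel time or bullet drop.

```python
def hitscan(origin, direction, targets, max_range=1000.0):
    """Return the nearest circular target hit by a ray, or None on a miss.
    direction must be a unit vector; each target is (centre, radius)."""
    ox, oy = origin
    dx, dy = direction
    best, best_t = None, max_range
    for centre, radius in targets:
        cx, cy = centre
        # Project the target's centre onto the ray to find closest approach.
        t = (cx - ox) * dx + (cy - oy) * dy
        if 0 <= t <= best_t:
            px, py = ox + dx * t, oy + dy * t
            # A hit if the closest point on the ray lies inside the circle.
            if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
                best, best_t = (centre, radius), t
    return best

# A shot fired along the x axis hits the target at (10, 0) and misses (5, 3):
targets = [((10.0, 0.0), 1.0), ((5.0, 3.0), 1.0)]
print(hitscan((0.0, 0.0), (1.0, 0.0), targets))  # ((10.0, 0.0), 1.0)
```

Because the result is instantaneous, this suits fast bullets; slower projectiles that should arc and drop are exactly the case where spawning a real physics object, as described above, becomes worthwhile.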
Conclusion
Video-games almost always utilise real-world physics to create an environment that players will intuitively understand. If a projectile does not behave in a predictable manner, it can become difficult for the player to learn the game and thus takes away from their enjoyment. Real-life physics, particularly projectile physics, are hard to understand and even more complicated to bake into a game. Speed, velocity and acceleration are interrelated equations that help us calculate how fast a projectile moves, whilst projectile motion is the trajectory that the object takes. Games can simulate the effects of physics using specialised engines which run the calculations behind the scenes to achieve a semi-realistic render. Very few games have entirely realistic physics, partly because it is incredibly expensive for the computer to process and partly because the fun can quickly dissipate if it is too punishing. Most games use a simplified version of projectile physics to make it easier and more enjoyable for players, without sacrificing an understandable system of events.
References:
Accelerate Learning. (n.d.). Speed, Velocity and Acceleration. t.ly/CkT0
Allain, R. (2010, October 8). The Physics of Angry Birds. Wired. https://www.wired.com/2010/10/physics-of-angry-birds/
Byjus. (n.d.). Projectile Motion. Retrieved August 23, 2022 from https://byjus.com/physics/projectile-motion/
Byjus. (n.d.). Velocity. Retrieved August 23, 2022 from https://byjus.com/physics/velocity/
Jeffrey, C. (2019, December 23). And Action! An Examination of Physics in Video Games. Techspot. t.ly/eKnI
Jung, T. (2019, December 6). How Do Bullets Work in Video Games?. Game Developer. t.ly/-VDba
Khan Academy. (2015). What is Acceleration?. Retrieved August 24, 2022 from t.ly/xa-Y
Moody, E. A. (1951). Laws of Motion in Medieval Physics. The Scientific Monthly, 72(1), 18–23. https://www.jstor.org/stable/19879
Parker, L. & King, D. (2008, April 17). Why Pong scored so highly for Atari. The Guardian. t.ly/Z_eP
The Physics Classroom. (n.d.). Projectile Simulator Interactive. Retrieved August 24, 2022 from t.ly/5XZk
The Physics Classroom. (n.d.). What is a Projectile?. Retrieved August 23, 2022 from t.ly/VON_
Yeates, R. (2014, July 29). What’s In A Projectile Physics Engine?. Envato Tuts. t.ly/3Dl4