Theory
Introduced in Python 3.10, structural pattern matching (match/case) is far more powerful than a traditional "switch" statement. It lets you "destructure" complex data (like lists or dictionaries) and match it against specific shapes. You can also add guards (an if condition) to filter matches even further.
Code Snippets
Python
# Matching complex data structures
def process_command(command):
    match command.split():
        case ["quit"]:
            print("Shutting down...")
        case ["load", filename]:
            print(f"Loading: {filename}")
        case ["move", x, y] if int(y) > 0:  # Pattern with a 'Guard'
            print(f"Moving to {x}, {y}")
        case _:  # Wildcard catch-all
            print("Unknown command.")

process_command("move 10 20")
Think About It: What does the _ represent in a match statement?
Answer: It is the "wildcard" or "catch-all" pattern. It matches anything that didn't fit the previous cases, preventing your code from failing when it encounters unexpected input.
Think About It: What happens if data matches two different case blocks?
Answer: Python executes only the first match it finds from top to bottom and then exits the block.
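A minimal sketch of that ordering: both patterns below can match the value 5, but only the first one runs.
Python
def check(value):
    match value:
        case int():  # Matches any int, so it wins for 5
            print("Matched the int pattern")
        case 5:  # Also matches 5, but is never reached
            print("Matched the literal 5")

check(5)  # Output: Matched the int pattern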
Common Mistake:
Forgetting the Wildcard _
Theory:
In match/case, if you provide a value that doesn't match any defined case and you haven't included a catch-all, the block simply does nothing. This can lead to silent failures in your application logic.
Code Snippet:
Python
# MISTAKE: No catch-all for unexpected values
def process_status(status_code):
    match status_code:
        case 200:
            return "Success"
        case 404:
            return "Not Found"
    # If status_code is 500, the function returns None silently.
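One possible fix is to add a wildcard case so the fallback is explicit instead of a silent None:
Python
# FIX: Make the fallback explicit
def process_status(status_code):
    match status_code:
        case 200:
            return "Success"
        case 404:
            return "Not Found"
        case _:
            return f"Unknown status: {status_code}"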
Edge Case Scenario:
Floating Point Logic in Match/Case:
Theory: Pattern matching works with values, but be careful with floats.
•The Trap: a computed float such as 0.1 + 0.2 will not match case 0.3: because of floating-point precision errors. Prefer integers or strings for reliable pattern matching.
Code Snippet:
Python
#DANGEROUS
val = 0.1 + 0.2
match val:
    case 0.3:
        print("Match")
    case _:
        print(f"No match for {val}")
# Output: No match for 0.30000000000000004
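If you really must match a computed float, one workaround is to capture the value and compare it with math.isclose inside a guard. This is just a sketch of that idea:
Python
import math

val = 0.1 + 0.2
match val:
    case x if math.isclose(x, 0.3):  # Guard handles the precision error
        print("Close enough to 0.3")
    case _:
        print(f"No match for {val}")
# Output: Close enough to 0.3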
Theory
The ternary (conditional) operator is a one-line expression used to assign a value based on a condition. It is designed to make simple if/else assignments more concise without cluttering the code with multiple lines.
Code Snippets
Python
age = 20
# Standard way
if age >= 18:
    status = "Adult"
else:
    status = "Minor"
# Ternary way (Expression)
status = "Adult" if age >= 18 else "Minor"
print(status)
Think About It: Should you nest multiple ternary operators in one line?
Answer: No. While possible, it makes code extremely hard to read. If you need more than one condition, stick to a standard if/elif/else block.
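For comparison, here is the same three-way decision written both ways; the nested ternary is valid but much harder to scan:
Python
score = 72

# Hard to read: nested ternary
grade = "High" if score >= 90 else "Medium" if score >= 60 else "Low"

# Easier to read: standard if/elif/else
if score >= 90:
    grade = "High"
elif score >= 60:
    grade = "Medium"
else:
    grade = "Low"
print(grade)  # Medium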
Common Mistake:
Using Ternary for Logic Flow (if/else ternary):
Theory:
The ternary operator is an expression designed to return a value. New developers often use it to trigger functions or actions, which makes the code harder to read than a standard if block.
Code Snippet:
Python
#MISTAKE: Using ternary for actions
(send_alert() if error_count > 10 else log_warning())
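The clearer version is a plain if/else block, since no value needs to be assigned (send_alert, log_warning, and error_count are the same placeholder names used in the mistake above):
Python
# FIX: Use a plain if/else for actions
if error_count > 10:
    send_alert()
else:
    log_warning()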
Edge Case Scenario:
The "Short-Circuit" Oversight:
Theory: When Python evaluates and/or in a condition (whether inside a ternary operator or a plain if statement), it short-circuits.
•In A or B, if A is truthy, B is never evaluated.
•The Danger: If B is a function call that was supposed to save data, that save will never happen!
Code Snippet:
Python
def save_data():
    print("Data Saved!")
    return True

if True or save_data():
    print("Done")
# Output: "Done" (save_data never ran!)
Theory
Formally called "Assignment Expressions," the Walrus operator allows you to assign a value to a variable inside another expression (like an if or while statement). This prevents redundant code and double calculations.
Code Snippets
Python
# Practical Example: Reading user input until 'exit'
# Without Walrus, you'd have to define 'command' twice.
while (command := input("Enter command: ")) != "exit":
print(f"Executing {command}")
# Avoiding double calculation
data = [1, 2, 3, 4, 5]
if (n := len(data)) > 3:
print(f"List is too long: {n} elements")
Think About It: Can you use the Walrus operator to initialize a variable on its own line like (x := 10)?
Answer: You can, but you shouldn't. The Walrus is intended for use inside expressions (like if or while). For a standard assignment, x = 10 is cleaner and more readable.
Common Mistake:
Misusing the Walrus Operator:
Theory:
Don't use := just to be "clever."
• Bad: (x := 10) on a line by itself (just use x = 10).
• Good: Only use it when the assignment is part of a larger expression (like if or while).
Code Snippet:
Python
#Good Use Case
if (n := len(data)) > 0:
print(f"Processing {n} items")
Edge Case Scenario:
Scoping Leaks:
Theory: Unlike a comprehension's loop variable, a variable assigned with the walrus operator inside an if or while condition remains available in the enclosing scope even after the block ends.
Code Snippet:
Python
# Variable 'n' survives the block
if (n := len([1, 2, 3])) > 0:
    pass
print(n) # Output: 3 (The variable "leaked" out to the rest of the function)
Theory
Comprehensions provide a compact way to create new collections. They are generally faster than standard for loops because the iteration is optimized at the C-level within the Python interpreter.
Code Snippets
Python
nums = [1, 2, 3, 4, 5, 6]
# 1. List Comprehension: [expression for item in iterable if condition]
squares = [x**2 for x in nums if x % 2 == 0] # [4, 16, 36]
# 2. Set Comprehension: (removes duplicates automatically)
unique_chars = {char.upper() for char in "apple"} # {'A', 'P', 'L', 'E'} (set order is arbitrary)
# 3. Dict Comprehension: {key: value for item in iterable}
square_map = {x: x**2 for x in range(3)} # {0: 0, 1: 1, 2: 4}
Think About It: If I use a List Comprehension to process a file with 10 million rows, what is the biggest risk?
Answer: Memory exhaustion (RAM crash). A List Comprehension builds the entire list in memory at once, which could crash your system if the dataset is too large.
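A common mitigation is to iterate lazily instead of building a list, for example with a generator expression; the filename and the length check here are just placeholders for illustration:
Python
# Hypothetical: count long rows without loading the whole file into RAM
with open("huge_log.txt") as f:
    long_rows = sum(1 for line in f if len(line) > 80)
print(long_rows)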
Common Mistake:
Over-Complicating Comprehensions:
Theory:
New developers often try to fit too much logic into a list comprehension.
Mistake: Cramming a nested if/else and multiple for loops into one bracket.
# MISTAKE
result = [x.upper() if x.startswith('a') else x.lower() for x in data if len(x) > 5 for y in range(5)]
Fix: If the comprehension is longer than one line, it’s usually better to write a standard for loop for readability.
#FIX
result = []
for x in data:
    if len(x) > 5:
        for y in range(5):
            val = x.upper() if x.startswith('a') else x.lower()
            result.append(val)
Edge Case Scenario:
Side-Effects in Comprehensions:
Theory: Technically, you can call functions inside a comprehension (e.g., [print(x) for x in list]).
This is considered very poor style. Comprehensions should be used to create data, not to perform actions (side-effects).
Code Snippet:
Python
#POOR STYLE
[print(x) for x in range(5)]
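The idiomatic version is a plain loop, which makes it obvious that you are performing an action rather than building data:
Python
# BETTER: a plain loop for side-effects
for x in range(5):
    print(x)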
Theory
A Generator expression looks like a list comprehension but uses parentheses (). The critical difference: it does not store the entire list in memory. Instead, it "yields" one item at a time only when requested, making it incredibly memory-efficient for large datasets.
Code Snippets
Python
# List comprehension: Uses memory for 1 million items
big_list = [x**2 for x in range(1000000)]
# Generator expression: Uses almost zero memory
big_gen = (x**2 for x in range(1000000))
print(next(big_gen)) # 0
print(next(big_gen)) # 1
Think About It: When should you prefer a Generator over a List?
Answer: When working with massive datasets or files. If you only need to iterate through the data once, a generator saves your computer's RAM from crashing.
Think About It: If you need to sort a dataset or find its total length, can you use a Generator?
Answer: Not directly. A generator has no len(), and sorted() only works because it quietly pulls every item into a list before sorting. Either way you end up materializing all the data, which loses the memory benefit.
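To illustrate: sorted() does accept a generator, but it silently builds a full list and exhausts the generator in the process.
Python
gen = (x % 7 for x in range(10))
print(sorted(gen))  # Works, but materializes every item into a list first
print(list(gen))  # [] -- the generator was consumed by sorted()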
Common Mistake:
The "Generator Exhaustion" Bug: A generator can only be looped through once.
Code Snippet:
Python
gen = (x for x in range(3))
print(list(gen)) # [0, 1, 2]
print(list(gen)) # [] <- The generator is now empty!
Edge Case Scenario:
The len() Limitation:
Theory: Since generators yield data one item at a time and do not store the full dataset, they do not have a "length." Trying to check the size of a generator will raise a TypeError.
Code Snippet:
Python
gen = (x for x in range(100))
# print(len(gen)) #TypeError: object of type 'generator' has no len()
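If you only need a count, one workaround is to consume the generator while counting, accepting that it will be exhausted afterwards:
Python
gen = (x for x in range(100))
count = sum(1 for _ in gen)
print(count)  # 100 (but gen is now empty)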
Theory:
Defensive coding is about anticipating errors. Python culture heavily favors EAFP (Easier to Ask Forgiveness than Permission)—using try/except—over LBYL (Look Before You Leap).
The "Defensive" Tactics
1. EAFP (Easier to Ask Forgiveness than Permission): Try the operation and catch errors, rather than checking if it’s "okay" first.
2. LBYL (Look Before You Leap): Check if a file or key exists before using it.
3. Fail Fast: Don't let a bug hide. Raise an error as soon as something looks wrong.
Code Snippets
Python
# EAFP Style (Recommended in Python)
try:
    with open("data.txt") as f:
        print(f.read())
except FileNotFoundError:
    print("Log: File was missing, using defaults.")

# Fail Fast using 'assert'
def set_age(age):
    assert age >= 0, "Age cannot be negative!"  # Crashes immediately if False
    return age
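For contrast, a sketch of the same file read in LBYL style: it works, but it pays for an extra existence check every time and can still fail if the file disappears between the check and the open.
Python
# LBYL Style (check first)
import os

if os.path.exists("data.txt"):
    with open("data.txt") as f:
        print(f.read())
else:
    print("Log: File was missing, using defaults.")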
Think About It: Why is EAFP generally faster in Python when an error is unlikely?
Answer: Because LBYL requires an extra "check" step (like checking if a file exists) every single time, whereas EAFP just proceeds and only spends extra time if a rare error actually occurs.
Common Mistake:
Over-Broad Exception Handling:
Theory:
A major mistake in EAFP (Easier to Ask Forgiveness than Permission) is catching every possible error using a bare except:. This hides bugs (like KeyboardInterrupt or NameError) that you didn't intend to catch.
Code Snippet:
Python
#MISTAKE: Catching everything
try:
    result = 10 / user_input
except:  # Bare except is dangerous
    print("Something went wrong")
Edge Case Scenario:
The finally Trap:
Theory: In a try/except/finally block, if you include a return statement in the finally clause, it will override any return or even an exception raised in the try block.
Code Snippet:
Python
#'finally' always wins
def dangerous_function():
    try:
        return "Try Block"
    finally:
        return "Finally Block"

print(dangerous_function())  # Output: "Finally Block"