Programming Techniques
This article covers some of the most popular and widely used problem-solving techniques.
Technique: Prefix Sum Array
A Prefix Sum Array is an array in which each element holds the sum of all elements up to and including that position in the original array. In other words, each entry of the prefix sum array is the cumulative sum of the original array up to that index.
Problem Statement:
For an array arr[] of length n, the prefix sum array prefix_sum[] is defined as:
prefix_sum[0] = arr[0]
prefix_sum[i] = prefix_sum[i-1] + arr[i] for i > 0
Example:
arr = [1, 2, 3, 4, 5]
prefix_sum[0] = 1
prefix_sum[1] = prefix_sum[0] + arr[1] = 3
prefix_sum[2] = prefix_sum[1] + arr[2] = 6
prefix_sum[3] = prefix_sum[2] + arr[3] = 10
prefix_sum[4] = prefix_sum[3] + arr[4] = 15
prefix_sum = [1, 3, 6, 10, 15]
Now that we have calculated the prefix sum array, we can find the sum up to a specific index, or over a range, in constant time.
- What is the prefix sum at index 3? → 10
- What is the sum of the range from index i=2 to j=4? → 12
Algorithm:
sum(i, j) = prefix_sum[j] - prefix_sum[i-1] (if i > 0)
If i = 0, then sum(0, j) = prefix_sum[j]
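As a minimal sketch of both steps (the function names are ours, not from the original):
def build_prefix_sum(arr):
    # prefix_sum[i] holds the sum of arr[0..i]
    prefix_sum = [0] * len(arr)
    prefix_sum[0] = arr[0]
    for i in range(1, len(arr)):
        prefix_sum[i] = prefix_sum[i - 1] + arr[i]
    return prefix_sum

def range_sum(prefix_sum, i, j):
    # Sum of arr[i..j] in O(1) using the identity above
    if i == 0:
        return prefix_sum[j]
    return prefix_sum[j] - prefix_sum[i - 1]

arr = [1, 2, 3, 4, 5]
prefix_sum = build_prefix_sum(arr)   # [1, 3, 6, 10, 15]
print(range_sum(prefix_sum, 2, 4))   # 12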
Time Complexity: O(n) to construct the prefix sum array; each range query then takes O(1).
Space Complexity: O(n) for the prefix sum array, or O(1) extra if the sums are computed in place in the original array.
Technique: Two Pointers
The two-pointer technique is an efficient way to search or iterate over arrays. Often, this technique reduces time complexity from O(n²) to O(n).
Concepts:
- One pointer starts at the beginning, and another at the end of an array, string, etc.
- The pointers move toward each other based on the condition to solve problems such as palindrome checks, finding a pair of elements, solving a subarray problem, or merging sorted arrays.
Problem Statement:
Palindrome Check: Determine whether a string reads the same forwards and backwards, i.e., whether the reversed string equals the original.
Algorithm:
def is_palindrome(s):
    i = 0           # pointer A, starts at the beginning
    j = len(s) - 1  # pointer B, starts at the end
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1      # pointer A moves toward pointer B
        j -= 1      # pointer B moves toward pointer A
    return True

print(is_palindrome('abcba'))  # True
Problem Statement:
Finding a pair of elements with a given sum: Given a sorted array, find a pair of elements whose sum equals the target.
Algorithm:
def has_pair_with_sum(arr, target):
    left = 0              # pointer A
    right = len(arr) - 1  # pointer B
    while left < right:
        pair_sum = arr[left] + arr[right]
        if pair_sum == target:
            return True
        if pair_sum < target:
            left += 1     # the array is sorted, so move pointer A right to increase the sum
        else:
            right -= 1    # move pointer B left to decrease the sum
    return False

print(has_pair_with_sum([1, 2, 3, 4, 6, 8, 9], 10))  # True
Time Complexity: O(n)
Space Complexity: O(1)
Technique: Sliding Window
The sliding window is an optimized technique for problems involving all subarrays or substrings of a specific size. Instead of iterating over the array or string repeatedly, the sliding window lets us pass over it once and derive the desired result from the problem's conditions and constraints.
Types of Sliding Window:
- Fixed-size Sliding Window: The size of the window remains constant.
- Dynamic-size Sliding Window: The size of the window changes based on some condition (e.g., when a target condition is met).
Problem Statement:
Maximum Sum of a Subarray of Size k
Algorithm:
def max_sum_subarray(nums, k):
    # Find the initial sum of the first window of size k
    window_sum = sum(nums[:k])
    max_sum = window_sum
    # Slide the window from left to right
    for i in range(1, len(nums) - k + 1):
        window_sum = window_sum - nums[i - 1] + nums[i + k - 1]
        max_sum = max(window_sum, max_sum)
    return max_sum

arr = [2, 3, 5, 8, 9, 10]
print(max_sum_subarray(arr, 3))  # 27

This optimized code sums the window only once, then subtracts the element that leaves the window and adds the element that enters it.
Time Complexity: O(n)
Space Complexity: O(1)
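The example above uses a fixed-size window. As a hedged sketch of the dynamic-size variant (the problem choice, the smallest subarray with sum at least a target, is our illustration and not from the original):
def min_subarray_len(nums, target):
    # Dynamic-size window: grow the right edge, shrink the left edge while the condition holds
    left = 0
    window_sum = 0
    best = float('inf')
    for right in range(len(nums)):
        window_sum += nums[right]
        while window_sum >= target:
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return 0 if best == float('inf') else best

print(min_subarray_len([2, 3, 1, 2, 4, 3], 7))  # 2 (the subarray [4, 3])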
Technique: Fast & Slow Pointers
Fast and slow pointers are a variant of the two-pointer technique: two pointers traverse the same structure at different speeds, one moving fast and the other slowly.
If there is a cycle in the data structure (e.g., a linked list), the fast pointer will eventually catch the slow pointer, and they will meet. If there is no cycle, the fast pointer will reach the end of the list.
Problem Statement:
Detecting a cycle in a linked list. The goal is to find a cycle in a linked list using fast and slow pointers.
Algorithm:
1 -> 2 -> 3 -> 4 -> 5 -> 3 (cycle starts again at node 3)
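The linked-list snippets below assume a minimal node class along these lines (a sketch we add for completeness; any equivalent node type works):
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next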
def has_cycle(head):
    # Initialize two pointers
    slow = head  # slow pointer
    fast = head  # fast pointer
    # Traverse the list
    while fast and fast.next:
        slow = slow.next       # Move slow pointer by 1 step
        fast = fast.next.next  # Move fast pointer by 2 steps
        # If the two pointers meet, there is a cycle
        if slow == fast:
            return True
    # If the fast pointer reaches the end, there is no cycle
    return False
Problem Statement:
Finding the starting point of a cycle in a linked list. The goal is to find the start of the cycle, and we can do that by doing the following:
- Move the slow pointer back to the head of the list.
- Move both the slow and fast pointers one step at a time.
- The point where they meet again is the start of the cycle.
Algorithm:
def find_cycle_start(head):
    # Step 1: Detect a cycle using slow and fast pointers
    slow, fast = head, head
    has_cycle = False
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            has_cycle = True
            break
    # If no cycle is found, return None
    if not has_cycle:
        return None
    # Step 2: Find the start of the cycle
    slow = head
    while slow != fast:
        slow = slow.next
        fast = fast.next
    # The node where slow and fast meet is the start of the cycle
    return slow
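For instance, building the list 1 -> 2 -> 3 -> 4 -> 5 with the tail linked back to node 3 (using the ListNode class sketched above):
nodes = [ListNode(v) for v in [1, 2, 3, 4, 5]]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[-1].next = nodes[2]  # create the cycle back to node 3
print(find_cycle_start(nodes[0]).val)  # 3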
Problem Statement:
Finding the middle of a linked list. If there is no cycle, the slow and fast pointer technique can be used to find the middle of a linked list. The slow pointer moves one step at a time, while the fast pointer moves two steps. When the fast pointer reaches the end, the slow pointer will be at the middle.
Algorithm:
def find_middle(head):
    slow = head  # slow pointer
    fast = head  # fast pointer
    # Move the slow pointer one step and the fast pointer two steps
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    # The slow pointer is now at the middle
    return slow
Problem Statement:
Checking for duplicates or patterns in arrays. Given an array of n + 1 integers where each integer is between 1 and n (inclusive), there is precisely one duplicate number in the array. Your task is to find this duplicate number without modifying the array and using constant extra space (i.e., space complexity should be O(1)).
Algorithm:
def find_duplicate(arr):
    # Step 1: Use slow and fast pointers to find a meeting point inside the cycle
    slow = arr[0]
    fast = arr[0]
    # Move slow one step and fast two steps until they meet
    while True:
        slow = arr[slow]
        fast = arr[arr[fast]]
        if slow == fast:
            break
    # Step 2: Find the starting point of the cycle (the duplicate number)
    slow = arr[0]
    while slow != fast:
        slow = arr[slow]
        fast = arr[fast]
    # The value where slow and fast meet is the duplicate number
    return slow
Time Complexity: O(n)
Space Complexity: O(1)
Technique: LinkedList In-place Reversal
Linked List In-place Reversal refers to reversing a linked list by redirecting each node's pointer directly, without allocating a new list.
Problem Statement:
- The basic idea is to reverse the next pointers of the nodes so that each node points to its previous node instead of the next one.
- This is done iteratively, traversing the list once and adjusting the pointers one at a time.
Approach:
Initialize three pointers:
- prev (initially None): stores the previous node as we reverse the links.
- current (initially pointing to the head): the node currently being processed.
- next_node (initially None): stores the next node in the list so we don't lose track of the list during reversal.
Algorithm
Input: 1 -> 2 -> 3 -> 4 -> 5
Output: 5 -> 4 -> 3 -> 2 -> 1
def reverse_linked_list(head):
    prev = None     # first pointer
    current = head  # second pointer
    while current:
        # Store the next node
        next_node = current.next  # third pointer
        # Reverse the current node's pointer
        current.next = prev
        # Move both pointers one step forward
        prev = current
        current = next_node
    # prev is the new head of the reversed list
    return prev
Time Complexity: O(n)
Space Complexity: O(1)
Technique: Monotonic Stack
A Monotonic Stack is a stack data structure in which the elements are arranged in a specific order, either monotonically increasing or monotonically decreasing. This means that the stack either maintains an increasing or decreasing sequence of elements from bottom to top. It is often used to solve problems related to finding the next greater element or next smaller element and other range-based or window-based problems.
Problem Statement:
Next Greater Element
Given an array, find, for each element, the next greater element to its right. The idea is to traverse the array and use a stack to keep track of indices whose next greater element has yet to be found. Once a greater element is found, those indices are popped from the stack, and the result array is updated at each popped index with the next greater element.
Algorithm:
def next_greater_elements(arr):
    n = len(arr)       # Length of the array
    stack = []         # Stack of indices whose next greater element is not yet found
    result = [-1] * n  # Initialize the result array with -1
    for i in range(n):
        # While the current element is greater than the element at the index on top of the stack
        while stack and arr[i] > arr[stack[-1]]:
            result[stack.pop()] = arr[i]  # Current element is the next greater element for that index
        stack.append(i)  # Push the current index onto the stack
    return result        # Next greater element for each position
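For example (our illustration):
print(next_greater_elements([2, 1, 2, 4, 3]))  # [4, 2, 4, -1, -1]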
Problem Statement:
Next Smaller Element
Given an array, find the next smaller element for each element.
Algorithm:
def next_smaller_elements(arr):
    n = len(arr)
    result = [-1] * n  # Initialize the result array with -1
    stack = []         # Monotonically increasing stack of indices
    # Traverse the array from left to right
    for i in range(n):
        # Pop indices from the stack while their elements are greater than the current element
        while stack and arr[i] < arr[stack[-1]]:
            result[stack.pop()] = arr[i]  # Current element is the next smaller element for popped indices
        # Push the current index onto the stack
        stack.append(i)
    return result
Problem Statement:
Largest Rectangle in Histogram
Find the largest rectangle in a histogram based on bar heights.
Algorithm:
- Initialize a stack to store the indices of the histogram bars.
- Iterate through the heights array. For each height, check if the current bar is shorter than the bar at the top of the stack. If it is, pop the bar from the stack and calculate the area of the rectangle where the popped bar is the smallest height. The rectangle's width can be calculated using the difference between the current index and the index of the bar just below the popped bar in the stack.
- After the loop, pop any remaining bars in the stack and calculate their corresponding areas.
- Keep track of the maximum rectangle area encountered.
def largest_rectangle_area(heights):
    stack = []  # Stack to store the indices of the histogram bars
    max_area = 0
    n = len(heights)
    for i in range(n):
        # Keep the stack monotonically increasing
        while stack and heights[i] < heights[stack[-1]]:
            height = heights[stack.pop()]                  # Pop the top of the stack
            width = i if not stack else i - stack[-1] - 1  # Calculate the width
            max_area = max(max_area, height * width)       # Update the maximum area
        # Push the current bar's index onto the stack
        stack.append(i)
    # After the loop, process any remaining bars in the stack
    while stack:
        height = heights[stack.pop()]
        width = n if not stack else n - stack[-1] - 1
        max_area = max(max_area, height * width)
    return max_area
Time Complexity: O(n)
Space Complexity: O(n)
Technique: Top K Elements
The Top ‘K’ Elements problem is a common algorithmic problem where you are asked to find the K largest or smallest elements in a given collection. Depending on the specific constraints, this problem can be solved efficiently using different approaches, such as sorting, heaps, or quickselect.
Problem Statement:
Given an array arr[] of n elements and an integer K, find the K largest (or smallest) elements of the array.
arr = [3, 1, 5, 12, 2, 11]
K = 3
Top 3 largest elements: [12, 11, 5]
The sorting approach takes O(n log n) time, which is often more than necessary. Here, we discuss the heap and quickselect approaches.
Algorithm:
Min-Heap
The more efficient approach, especially when K is much smaller than n, is to use a min-heap (priority queue). It works as follows:
- First, build a min-heap of size K from the first K elements of the array.
- For each remaining element in the array, compare it with the root of the heap.
- If it is larger than the root, replace the root with the current element and heapify.
- When the scan finishes, the heap holds the K largest elements.
This method maintains the top K largest elements seen so far, and because the heap only ever stores K elements, the cost per update stays small.
import heapq

def top_k_elements_min_heap(arr, K):
    # Build a min-heap from the first K elements
    min_heap = arr[:K]
    heapq.heapify(min_heap)
    # Iterate over the remaining elements
    for i in range(K, len(arr)):
        if arr[i] > min_heap[0]:              # Current element is greater than the heap's root
            heapq.heappop(min_heap)           # Remove the smallest element
            heapq.heappush(min_heap, arr[i])  # Add the current element
    # The heap now contains the top K largest elements
    return list(min_heap)
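Using the sample array from above:
arr = [3, 1, 5, 12, 2, 11]
print(top_k_elements_min_heap(arr, 3))  # [5, 12, 11] (the top 3, in heap order)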
Time Complexity: O(n log K)
Space Complexity: O(K)
Quickselect
Quickselect is an efficient selection algorithm for finding the K-th smallest or largest element in an unordered list. It is related to quicksort and can find the top K elements in expected linear time, though its worst-case time complexity is O(n²).
Here is a high-level overview of how quickselect works:
- Choose a pivot element from the array.
- Partition the array into elements smaller than the pivot and elements larger than the pivot.
- Recursively apply quickselect to one of the partitions, depending on the value of K.
This approach is especially efficient when K is relatively small.
def partition(arr, left, right):
    pivot = arr[right]
    i = left
    for j in range(left, right):
        if arr[j] > pivot:  # Use '<' instead to select the smallest K elements
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[right] = arr[right], arr[i]
    return i

def quickselect(arr, left, right, K):
    if left == right:
        return arr[left]
    pivot_index = partition(arr, left, right)
    if K == pivot_index:
        return arr[pivot_index]
    elif K < pivot_index:
        return quickselect(arr, left, pivot_index - 1, K)
    else:
        return quickselect(arr, pivot_index + 1, right, K)

def top_k_elements_quickselect(arr, K):
    # Partially reorder the array so the K largest elements occupy arr[:K]
    quickselect(arr, 0, len(arr) - 1, K - 1)
    return arr[:K]
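For example (note that quickselect reorders the array in place):
arr = [3, 1, 5, 12, 2, 11]
print(top_k_elements_quickselect(arr, 3))  # [12, 11, 5] (order may vary)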
Time Complexity: O(n) on average (O(n²) in the worst case)
Space Complexity: O(1) auxiliary (ignoring the recursion stack)
Technique: Overlapping Intervals
Overlapping Intervals refer to intervals on a number line where one interval either fully or partially overlaps another. Given two intervals [a1, b1] and [a2, b2], they overlap if they share any common points, i.e., if a1 <= b2 and a2 <= b1.
When working with a collection of intervals, the problem often involves merging all overlapping intervals into one larger interval and ensuring non-overlapping intervals remain separate.
Problem Statement:
Given the list of intervals [[1, 3], [2, 6], [8, 10], [15, 18]], the overlapping intervals [1, 3] and [2, 6] should be merged to form [1, 6], and the rest remain unchanged. The final output is [[1, 6], [8, 10], [15, 18]].
Algorithm:
- Sort the Intervals: First, sort the intervals by their starting point (or endpoint if needed).
- Iterate Through the Intervals: Iterate through the sorted list and check if the current interval overlaps the previous one.
- Merge Overlapping Intervals: If they overlap, merge them into a single interval by taking the maximum end time of the two intervals. If they do not overlap, store the previous interval and move to the next one.
- Return the Result: Once all intervals have been processed, return the list of merged intervals.
def merged_intervals(intervals):
    if not intervals:
        return []
    # Sort the intervals by their starting point
    intervals.sort(key=lambda x: x[0])
    # Take the first interval as a starting point
    result = [intervals[0]]
    # Iterate through the sorted list
    for current_interval in intervals[1:]:
        # Fetch the last interval from the result
        check_interval = result[-1]
        # Check whether the current interval overlaps the previous one
        if current_interval[0] <= check_interval[1]:
            # Overlap exists: merge by extending the end of the stored interval
            check_interval[1] = max(check_interval[1], current_interval[1])
        else:
            # No overlap
            result.append(current_interval)
    return result
intervals = [[1, 3], [2, 6], [8, 10], [15, 18]]
merged = merged_intervals(intervals)
print("Merged intervals:", merged)
Time Complexity: O(n log n)
Space Complexity: O(n)
Technique: Binary Search
Binary search is a robust algorithm that functions efficiently in O(log n) time under specific conditions. To successfully apply binary search, two crucial requirements must be met:
1. Monotonicity (Sorted or Ordered Data):
— Data must adhere to a consistent order, whether sorted in ascending/descending order or monotonic (strictly increasing, decreasing, or non-decreasing/non-increasing).
— Binary search divides the search space, necessitating an ordered structure to effectively determine which half contains the desired value.
2. Deterministic Condition:
— An unambiguous condition is critical to decide which half of the data to search next.
— This condition helps eliminate half the search space after each comparison, streamlining the binary search process.
Confirming that the data meets these conditions is vital to harnessing the power of binary search.
Algorithm Blueprint:
def binary_search(search_space) -> int:
    def condition(value) -> bool:
        # Problem-specific predicate: False ... False True ... True over the search space
        pass
    left, right = min(search_space), max(search_space)
    while left < right:
        mid = left + (right - left) // 2
        if condition(mid):
            right = mid
        else:
            left = mid + 1
    return left  # The smallest value for which condition() is True
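As one concrete instantiation of the blueprint (our example, not from the original), here the search space is the range of indices of a sorted array, and the condition is "nums[mid] >= target":
def first_at_least(nums, target):
    # The condition is monotonic because nums is sorted
    left, right = 0, len(nums)  # right == len(nums) means "not found"
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] >= target:  # condition(mid)
            right = mid
        else:
            left = mid + 1
    return left

print(first_at_least([1, 3, 5, 7, 9], 6))  # 3 (nums[3] == 7)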
Time Complexity: O(log n)
Space Complexity: O(1) (iterative implementation)
Technique: Backtracking
A backtracking algorithm is a powerful algorithmic technique that incrementally solves problems by building solution candidates step by step. If a candidate doesn’t achieve the desired outcome, it gracefully “backtracks” by undoing the last step and exploring alternative possibilities.
Problem Statement:
N-Queens Problem: Strategizing the placement of N queens on an NxN board.
Backtracking involves the following features:
- Generate Candidates
- Validity Check
- Backtracking Steps
Algorithm for N-Queens and Overall Backtracking Problem:
def is_complete_solution(solution, n):
    # A solution is complete once N queens have been placed
    return len(solution) == n

def is_valid(candidate, solution):
    # Check whether the queen placement is valid
    row, col = candidate
    for r, c in solution:
        if c == col or abs(r - row) == abs(c - col):  # Same column or same diagonal
            return False
    return True

def generate_candidate(solution, n):
    # The current row where a queen needs to be placed
    row = len(solution)
    # Try all columns in the current row
    return [(row, col) for col in range(n)]

def process_solution(solution, solutions):
    # Append a copy of the valid solution to the solutions list
    solutions.append(solution[:])

def n_queens(n):
    solutions = []  # To store all valid solutions
    def backtrack(solution):
        # Check if we found a complete solution
        if is_complete_solution(solution, n):
            # Save the solution
            process_solution(solution, solutions)
            return
        # Explore the candidates
        for candidate in generate_candidate(solution, n):
            if is_valid(candidate, solution):
                solution.append(candidate)  # Choose
                backtrack(solution)         # Explore
                solution.pop()              # Undo (backtrack)
    backtrack([])  # Start with an empty solution
    return solutions  # Return the list of all valid solutions

# Get all solutions for n=4 queens
solutions = n_queens(4)
# Print all solutions
for idx, solution in enumerate(solutions):
    print(f"Solution {idx + 1}: {solution}")
Time Complexity: O(N!)
Space Complexity: O(N+N!)
Technique: Binary Tree Traversal
The binary tree traversal technique involves traversing all the nodes in a binary tree in a specific order.
Traversing orders are:
- Pre-order: root → left → right
- In-order: left → root → right
- Post-order: left → right → root
Problem Statement:
Traverse a binary tree using all three traversal orders.
Algorithm:
Binary Tree
# A class for binary tree nodes
class Tree():
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# Build a sample binary tree
def binary_tree():
    root = Tree(1)
    root.left = Tree(2)
    root.right = Tree(3)
    root.left.left = Tree(4)
    root.left.right = Tree(5)
    root.right.right = Tree(6)
    return root
Pre-Order, In-Order & Post-Order Traversals:
# Pre-order traversal: root -> left -> right
def pre_order_traverse(root):
    if root is None:
        return []
    return [root.val] + pre_order_traverse(root.left) + \
        pre_order_traverse(root.right)

# In-order traversal: left -> root -> right
def in_order_traverse(root):
    if root is None:
        return []
    return in_order_traverse(root.left) + [root.val] + \
        in_order_traverse(root.right)

# Post-order traversal: left -> right -> root
def post_order_traverse(root):
    if root is None:
        return []
    return post_order_traverse(root.left) + \
        post_order_traverse(root.right) + [root.val]
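Running the three traversals on the sample tree built above:
root = binary_tree()
print(pre_order_traverse(root))   # [1, 2, 4, 5, 3, 6]
print(in_order_traverse(root))    # [4, 2, 5, 1, 3, 6]
print(post_order_traverse(root))  # [4, 5, 2, 6, 3, 1]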
Time Complexity: O(n)
Space Complexity: O(h), where h is the height of the tree (O(log n) for a balanced tree, O(n) in the worst case)
Technique: Depth-First Search (DFS)
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. Starting from the root node, DFS explores each branch as far along as possible before backtracking. It uses a LIFO (Last-In, First-Out) approach, typically implemented using recursion or an explicit stack.
Applications of DFS:
- Pathfinding and Puzzle Solving (e.g., mazes).
- Finding Connected Components in a graph.
- Topological Sorting (in Directed Acyclic Graphs).
- Detecting Cycles in a graph.
- Solving problems with backtracking such as N-Queens, Sudoku, etc.
Problem Statement:
Traverse a graph using the DFS technique.
Algorithm
Recursive technique
def dfs_recursive(graph, node, visited=None):
    if visited is None:
        visited = set()
    # Mark the current node as visited
    visited.add(node)
    print(node, end=' ')
    # Explore the neighbors
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)

# Sample graph as an adjacency list
graph = {
    1: [2, 3],
    2: [4, 5],
    3: [],
    4: [],
    5: []
}
Using a stack
def dfs_stack(graph, start):
    # Keep track of visited nodes
    visited = set()
    # Use an explicit stack for DFS
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            print(node, end=' ')
            visited.add(node)
            # Push neighbors in reverse so they are visited in their original order
            for neighbor in reversed(graph[node]):
                if neighbor not in visited:
                    stack.append(neighbor)

# Sample graph as an adjacency list
graph = {
    1: [2, 3],
    2: [4, 5],
    3: [],
    4: [],
    5: []
}
Time Complexity: O(V+E); V = Vertices/nodes & E = Edges
Space Complexity: O(V)
Technique: Breadth-First Search (BFS)
Breadth-First Search (BFS) is a graph traversal algorithm that explores the nodes of a graph (or tree) level by level. Starting from the root node, BFS explores all its neighbours before moving on to their neighbours. This exploration continues until all nodes are visited. It uses a queue (FIFO) data structure to keep track of the nodes at each level.
Applications of BFS:
- Shortest Path in Unweighted Graphs: BFS guarantees to find the shortest path regarding the number of edges in an unweighted graph.
- Connected Components: BFS can help find connected components in a graph.
- Cycle Detection: BFS can be used to detect cycles in undirected graphs.
- Solving Puzzles: BFS is ideal for solving puzzles like mazes, where all possible moves are explored level by level.
- Web Crawlers: BFS explores the web level by level (i.e., site by site).
Problem Statement:
Traverse a graph using the BFS technique.
Algorithm:
from collections import deque

def bfs(graph, root):
    # Track visited nodes; use a queue for FIFO order
    visited = set()
    queue = deque([root])
    visited.add(root)
    while queue:
        # FIFO: take the oldest node in the queue
        node = queue.popleft()
        print(node, end=' ')
        # Explore the neighbors
        for neighbor in graph[node]:
            if neighbor not in visited:
                queue.append(neighbor)
                visited.add(neighbor)

graph = {
    1: [2, 3],
    2: [4, 5],
    3: [],
    4: [],
    5: []
}
print("BFS Traversal:", end=" ")
bfs(graph, 1)
BFS in a binary tree or Level-Order Traversal:
def bfs_binary_tree(root):
    if root is None:
        return
    queue = deque([root])
    while queue:
        # FIFO
        node = queue.popleft()
        print(node.val, end=' ')
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
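Reusing the binary_tree() helper from the Binary Tree Traversal section above:
bfs_binary_tree(binary_tree())  # 1 2 3 4 5 6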
Time Complexity: O(V+E); V = Vertices/nodes & E = Edges
Space Complexity: O(V)
Technique: Matrix Traversal
Matrix traversal refers to visiting or iterating over the elements in a 2D matrix (grid) in a specific order. A matrix is a 2D array where each element is identified by its row and column indices.
A DFS approach to traverse the matrix:
def dfs(matrix, i, j, visited):
    rows = len(matrix)
    cols = len(matrix[0])
    # Base case: out of bounds or already visited
    if i < 0 or i >= rows or j < 0 or j >= cols or visited[i][j]:
        return
    visited[i][j] = True
    print(matrix[i][j], end=' ')
    # Explore in all four directions: up, down, left, right
    dfs(matrix, i - 1, j, visited)  # up
    dfs(matrix, i + 1, j, visited)  # down
    dfs(matrix, i, j - 1, visited)  # left
    dfs(matrix, i, j + 1, visited)  # right

def dfs_traversal(matrix):
    rows = len(matrix)
    cols = len(matrix[0])
    visited = [[False for _ in range(cols)] for _ in range(rows)]
    # Traverse the whole matrix starting from (0, 0)
    dfs(matrix, 0, 0, visited)
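For example:
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]
dfs_traversal(matrix)  # 1 4 7 8 5 2 3 6 9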
Problem Statement:
The Flood Fill algorithm is similar to the “paint bucket” tool in graphics programs, where you start from a given pixel and fill all connected pixels with a new colour. The connection between pixels is typically based on the four neighbouring cells (up, down, left, and right) that share the same initial colour as the starting pixel.
The goal is to traverse all the connected cells (based on the same colour) and change them to the new colour.
Algorithm:
from collections import deque
def flood_fill_dfs(image, sr, sc, newColor):
    # DFS version
    original_color = image[sr][sc]
    # Base condition: the start pixel already has the new color
    if original_color == newColor:
        return image
    def dfs(i, j):
        # Base condition: out of bounds or not the original color
        if i < 0 or i >= len(image) or j < 0 or j >= len(image[0]) or \
                image[i][j] != original_color:
            return
        # Change the color of the current cell
        image[i][j] = newColor
        # Recurse in all four directions
        dfs(i - 1, j)  # Up
        dfs(i + 1, j)  # Down
        dfs(i, j - 1)  # Left
        dfs(i, j + 1)  # Right
    dfs(sr, sc)
    return image
def flood_fill_bfs(image, sr, sc, newColor):
    # BFS version
    original_color = image[sr][sc]
    # Base condition: the start pixel already has the new color
    if original_color == newColor:
        return image
    queue = deque([(sr, sc)])
    # Recolor the starting cell before exploring
    image[sr][sc] = newColor
    while queue:
        x, y = queue.popleft()
        # Explore all four directions
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            new_x, new_y = x + dx, y + dy
            # Check that the new coordinates are valid and still have the original color
            if 0 <= new_x < len(image) and 0 <= new_y < len(image[0]) and \
                    image[new_x][new_y] == original_color:
                # Change the color and enqueue the cell
                image[new_x][new_y] = newColor
                queue.append((new_x, new_y))
    return image
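For example, recoloring the region connected to cell (1, 1):
image = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1]
]
print(flood_fill_dfs(image, 1, 1, 2))  # [[2, 2, 2], [2, 2, 0], [2, 0, 1]]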
Time Complexity: O(m * n)
Space Complexity: O(m * n)
Technique: Dynamic Programming
Dynamic programming (DP) is an algorithmic technique for solving optimization problems. It involves breaking them down into simpler subproblems, solving each subproblem just once, and storing its solution to avoid recomputing it multiple times. DP is beneficial for problems with overlapping subproblems and optimal substructure.
Critical Concepts of Dynamic Programming:
- Overlapping Subproblems: The problem can be divided into subproblems reused multiple times.
- Optimal Substructure: The optimal solution of the original problem can be constructed from the optimal solutions of its subproblems.
- Memoization: Storing the results of expensive function calls and reusing them when the same inputs occur again. This is usually a top-down approach.
- Tabulation: Solving the problem iteratively and storing the results in a table (often a 2D array). This is a bottom-up approach.
Problem Statement:
Computing the n-th Fibonacci number using the top-down and bottom-up approaches.
Algorithm:
Top-Down Approach:
# Generic template: is_base_case, base_case_value, and get_subproblems are
# problem-specific placeholders to be filled in
def dp_top_down(problem_params):
    # Memoization table (dictionary or array) for already computed subproblem solutions
    memo = {}
    # Helper function to solve the subproblems recursively
    def solve(subproblem_params):
        # Base case: the subproblem is trivial or already known
        if is_base_case(subproblem_params):
            return base_case_value(subproblem_params)
        # If the result for this subproblem is already computed, return it
        if subproblem_params in memo:
            return memo[subproblem_params]
        # Break the subproblem into smaller subproblems and solve them
        result = float('inf')  # Initialize with an impossible/infinite value
        for smaller_subproblem in get_subproblems(subproblem_params):
            result = min(result, solve(smaller_subproblem))  # Adapt to the problem at hand
        # Store the computed result in the memo table
        memo[subproblem_params] = result
        return result
    # Call the recursive function with the initial problem parameters
    return solve(problem_params)
Fibonacci Sequence
def fibonacci_top_down(n):
    memo = {}
    def fib(n):
        if n <= 1:
            return n
        if n in memo:
            return memo[n]
        memo[n] = fib(n - 1) + fib(n - 2)
        return memo[n]
    return fib(n)

# Example usage
print(fibonacci_top_down(10))  # Output: 55
Bottom-Up Approach
# Generic template: base_case_value, get_possible_subproblems, and some_cost
# are problem-specific placeholders to be filled in
def dp_bottom_up(problem_size):
    # Table (array or 2D matrix) to store the results of subproblems
    dp_table = [0] * (problem_size + 1)  # Adjust size and initialization to the problem
    # Initialize the base case(s) in the table
    dp_table[0] = base_case_value
    # Fill the table by solving smaller subproblems first
    for i in range(1, problem_size + 1):
        dp_table[i] = float('inf')  # Initial impossible value
        for subproblem in get_possible_subproblems(i):
            dp_table[i] = min(dp_table[i], dp_table[subproblem] + some_cost)  # Adapt to the problem
    # The answer to the original problem is now in dp_table[problem_size]
    return dp_table[problem_size]
Fibonacci Sequence
def fibonacci_bottom_up(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# Example usage
print(fibonacci_bottom_up(10))  # Output: 55
Time Complexity: O(n)
Space Complexity: O(n)
Conclusion
This article covered some of the most widely used programming techniques, drawing on a variety of resources and experience. It aims to help you tackle a broad range of problems and prepare for interviews. Best of luck with your endeavours! Thank you for taking the time to read it. 🙌