An array is a collection of elements stored at contiguous memory locations. Each element can be accessed directly using its index (position number starting from 0).
Think of it like a row of numbered lockers: you can instantly open locker #5 without checking lockers #1 through #4.
Because elements are contiguous, accessing any index takes O(1) time: the computer calculates the exact memory address using base_address + (index × element_size).
| Operation | Time | Why? |
|---|---|---|
| Access by Index | O(1) | Direct memory calculation |
| Search (unsorted) | O(n) | Must check each element |
| Search (sorted) | O(log n) | Binary Search |
| Insert at end | O(1) amortized | Append directly (dynamic arrays occasionally resize) |
| Insert at beginning | O(n) | Shift all elements right |
| Delete | O(n) | Shift elements to fill gap |
Use two pointers (usually one at the start, one at the end) to solve problems in O(n) instead of O(n²). Works great on sorted arrays.
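A minimal sketch of the two-pointer pattern, finding a pair that sums to a target in a sorted array (the function name `pair_with_sum` is illustrative):

```python
def pair_with_sum(arr, target):
    """Return (i, j) with arr[i] + arr[j] == target, or None.

    Assumes arr is sorted in ascending order.
    """
    left, right = 0, len(arr) - 1
    while left < right:
        total = arr[left] + arr[right]
        if total == target:
            return (left, right)
        if total < target:
            left += 1    # sum too small: move the left pointer right
        else:
            right -= 1   # sum too large: move the right pointer left
    return None
```

Each pointer moves at most n steps in total, so the whole scan is O(n) rather than the O(n²) of checking every pair.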
Maintain a "window" of elements and slide it across the array. Avoids recalculating from scratch each time, turning O(n×k) into O(n).
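A sketch of the sliding-window idea for the classic "maximum sum of any k consecutive elements" problem (names here are illustrative):

```python
def max_window_sum(arr, k):
    """Maximum sum over any contiguous window of size k."""
    window = sum(arr[:k])   # compute the first window once: O(k)
    best = window
    for i in range(k, len(arr)):
        # Slide the window: add the new element, drop the oldest
        window += arr[i] - arr[i - k]
        best = max(best, window)
    return best
```

Each slide is O(1) work, so the whole pass is O(n) instead of recomputing each window's sum from scratch.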
A linked list is a linear data structure where each element (called a node) contains data and a pointer/reference to the next node. Unlike arrays, nodes can be stored anywhere in memory.
| Operation | Time | Note |
|---|---|---|
| Insert at Head | O(1) | Just change head pointer |
| Insert at Tail | O(n) | Must traverse to end |
| Delete a Node | O(n) | Must find the node first |
| Search | O(n) | Linear scan required |
| Access by Index | O(n) | No random access! |
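A minimal sketch of a singly linked list, assuming a bare `Node` class (names are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insert_at_head(head, data):
    # O(1): the new node simply points at the old head
    return Node(data, head)

def to_list(head):
    """Collect node values into a Python list (O(n) traversal)."""
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out
```

Note that reaching any index requires walking node by node from the head, which is why there is no O(1) random access.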
This is the most frequently asked linked list question in interviews. Use three pointers: prev, curr, and next.
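The three-pointer reversal can be sketched like this (with a minimal `Node` class included so the snippet stands alone):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def reverse(head):
    prev = None
    curr = head
    while curr:
        nxt = curr.next    # save the next node before breaking the link
        curr.next = prev   # flip the pointer backwards
        prev = curr        # advance prev
        curr = nxt         # advance curr
    return prev            # prev now points at the new head
```

One pass, O(n) time, O(1) extra space.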
A stack is like a stack of plates: you can only add or remove from the top. The last plate placed on top is the first one you pick up.
A queue is like a line at a store: the first person in line is the first one served. Elements enter from the rear and leave from the front.
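In Python, a list works as a stack (push/pop at the end), and `collections.deque` works as a queue (O(1) at both ends); a quick sketch:

```python
from collections import deque

# Stack: LIFO (last in, first out)
stack = []
stack.append("plate1")
stack.append("plate2")
top = stack.pop()        # removes "plate2", the most recent plate

# Queue: FIFO (first in, first out)
queue = deque()
queue.append("alice")    # enter at the rear
queue.append("bob")
first = queue.popleft()  # "alice" is served first
```

Avoid `list.pop(0)` for queues: it shifts every remaining element, making each dequeue O(n).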
Given a string containing '(', ')', '{', '}', '[', ']', determine if the input string is valid. Every open bracket must have a matching close bracket in the correct order.
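A stack solves this cleanly: push open brackets, and on each close bracket check that it matches the most recent open one. A sketch:

```python
def is_valid(s):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)            # remember the open bracket
        elif ch in pairs:
            # A close bracket must match the most recent open one
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                    # every open bracket was closed
```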
A tree where each node has at most two children: left and right. The topmost node is the root. Nodes with no children are called leaves.
Three ways to visit every node in a binary tree using depth-first search. Remember the mnemonic: the name tells you when you visit the root.
In-Order (Left → Root → Right): 1, 3, 6, 8, 10, 14
→ Gives sorted order for a BST! ⭐
Pre-Order (Root → Left → Right): 8, 3, 1, 6, 10, 14
→ Used to copy/serialize a tree
Post-Order (Left → Right → Root): 1, 6, 3, 14, 10, 8
→ Used to delete a tree
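The three orders can be sketched recursively on the same example tree (the minimal `TreeNode` class here is illustrative):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def in_order(node):
    if node is None:          # base case: empty subtree
        return []
    return in_order(node.left) + [node.val] + in_order(node.right)

def pre_order(node):
    if node is None:
        return []
    return [node.val] + pre_order(node.left) + pre_order(node.right)

def post_order(node):
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.val]

# The BST from the example: 8 at the root, 3 and 10 as children
root = TreeNode(8,
                TreeNode(3, TreeNode(1), TreeNode(6)),
                TreeNode(10, None, TreeNode(14)))
```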
Almost every tree problem follows: Base case (null node) → Recurse left → Recurse right → Combine results. Master this pattern!
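The pattern in one concrete instance, computing the height of a tree (sketch; `TreeNode` is a minimal illustrative class):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def max_depth(node):
    if node is None:                  # base case: empty tree has depth 0
        return 0
    left = max_depth(node.left)       # recurse left
    right = max_depth(node.right)     # recurse right
    return 1 + max(left, right)       # combine: this node adds one level
```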
| Algorithm | Best | Average | Worst | Space | Stable? |
|---|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | ✓ |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | ✗ |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | ✓ |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | ✓ |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | ✗ |
Small arrays (<50): Insertion Sort. General purpose: Merge Sort (stable) or Quick Sort (fast in practice). Most languages use Timsort (hybrid of merge + insertion).
The most important searching algorithm. Works on sorted arrays by repeatedly halving the search space. Time: O(log n), which finds any element in a billion-item array in ~30 steps!
Array: [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
Step 1: mid = arr[4] = 16 → 16 < 23 → search RIGHT half
Step 2: mid = arr[7] = 56 → 56 > 23 → search LEFT half
Step 3: mid = arr[5] = 23 → 23 == 23 → FOUND at index 5! ✓
Only 3 comparisons instead of up to 10 with a linear scan!
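The walkthrough above maps directly onto the standard iterative implementation (a sketch):

```python
def binary_search(arr, target):
    """Return the index of target in a sorted array, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            low = mid + 1     # target must be in the right half
        else:
            high = mid - 1    # target must be in the left half
    return -1
```

Each iteration halves the remaining range, giving the O(log n) bound.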
A hash map stores key-value pairs and provides O(1) average lookup. It uses a hash function to convert keys into array indices.
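Python's built-in `dict` is a hash map; a common use is counting frequencies in a single O(n) pass (the word list is just sample data):

```python
# Each lookup and insert is O(1) on average
counts = {}
for word in ["apple", "banana", "apple", "cherry", "apple"]:
    counts[word] = counts.get(word, 0) + 1
```

This pattern, "seen it before?" in O(1), is behind many interview solutions, such as Two Sum and duplicate detection.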
A graph consists of vertices (nodes) connected by edges. Graphs model networks, social connections, maps, and dependencies.
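A common representation is the adjacency list, and breadth-first search (BFS) explores it level by level using a queue; a sketch on a small assumed example graph:

```python
from collections import deque

def bfs(graph, start):
    """Return vertices reachable from start, in BFS visiting order."""
    order = []
    queue = deque([start])
    seen = {start}                 # avoid revisiting vertices
    while queue:
        v = queue.popleft()
        order.append(v)
        for neighbor in graph[v]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

# Adjacency list for a small undirected example graph
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}
```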
DP is about breaking a problem into overlapping subproblems and storing results to avoid recalculation. Two approaches: top-down (memoization) and bottom-up (tabulation).
Without DP (Naive Recursion):
fib(5) calls fib(4)+fib(3), fib(4) calls fib(3)+fib(2)...
fib(3) is calculated multiple times! → O(2ⁿ) 🐢
With DP (Memoization):
Store fib(3) result after first calculation. Reuse it! → O(n) ⚡
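The memoized version is a small change to the naive recursion: check a cache before recursing, and store each result after computing it (a sketch):

```python
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:                # reuse a stored result: O(1) lookup
        return memo[n]
    if n <= 1:                   # base cases: fib(0) = 0, fib(1) = 1
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Each value of n is computed exactly once, so the O(2ⁿ) blowup becomes O(n).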
Ask yourself: "Can I solve this using solutions to smaller versions of the same problem?" and "Are there overlapping subproblems?" If YES to both → use DP!
If n = 10⁶ (1 million), your algorithm needs to be O(n log n) or better to run in ~1 second. O(n²) would take ~10¹² operations, far too slow!
| Data Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| Linked List | O(n) | O(n) | O(1) | O(1) |
| Hash Map | O(1) | O(1) | O(1) | O(1) |
| BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n) |
| Stack / Queue | O(n) | O(n) | O(1) | O(1) |
You've completed the DSA Beginner's Notes! Keep practicing on LeetCode, and remember: consistency beats intensity.
"The only way to learn DSA is to solve problems. Every day."