Foundations · CS 101

Data Structures

A visual introduction to the two most foundational ways we organize data in memory — and the trade-off that underlies every choice between them.

A data structure is a way of arranging data in memory so that the operations you care about — accessing, inserting, finding, deleting — can be performed efficiently. Picking a structure is really picking which operations you want to be fast.

01

Array

A contiguous row of equal-sized cells. The address of any element is computable, so you jump straight to it — but making room in the middle forces everyone to shift.

access  ·  O(1)
insert  ·  O(n)
delete  ·  O(n)
contiguous memory

Access is instant because the computer can calculate address = base + i × size and leap directly to the cell. Insertion, by contrast, drags every later element one step right to clear a slot; that shuffle is the O(n) work. Deletion is the mirror image: close the gap by shifting everyone left.
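A minimal sketch of this mechanic in Python (the `FixedArray` class and its method names are illustrative, not from the article): access indexes directly, while insert and delete do the element-by-element shifting that makes them O(n).

```python
class FixedArray:
    """A fixed-capacity array sketch: contiguous, equal-sized cells."""

    def __init__(self, capacity):
        self.cells = [None] * capacity
        self.length = 0

    def access(self, i):
        # O(1): the runtime computes base + i * size and jumps there
        return self.cells[i]

    def insert(self, i, value):
        # O(n): shift cells[i..length-1] one step right to clear a slot
        for j in range(self.length, i, -1):
            self.cells[j] = self.cells[j - 1]
        self.cells[i] = value
        self.length += 1

    def delete(self, i):
        # O(n): close the gap by shifting everything after i one step left
        for j in range(i, self.length - 1):
            self.cells[j] = self.cells[j + 1]
        self.cells[self.length - 1] = None
        self.length -= 1


a = FixedArray(8)
for i, v in enumerate([10, 20, 30, 40]):
    a.insert(i, v)
a.insert(1, 15)   # shifts 20, 30, 40 one cell right
a.delete(3)       # removes 30, shifts 40 one cell left
```

Note that `access` never loops: its cost is the same whether the array holds four elements or four million, which is exactly the property the shifting operations give up.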

02

Linked List

Nodes scattered anywhere in memory, each holding a value and a pointer to the next. No formula can jump to index i — you have to walk. But inserting and removing cost almost nothing.

access  ·  O(n)
insert  ·  O(1)*
delete  ·  O(1)*
scattered memory

The contrast is everything. Find walks the chain — every step is a pointer dereference, and you pay O(n) to reach the end. But insertion and deletion touch only two pointers: no element moves in memory, the surrounding nodes just rewire. The asterisk on O(1) means "given you already have the pointer to the spot" — locating it is still O(n).
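The same contrast, sketched in Python (the `Node` class and helper names are illustrative): `find` walks the chain one dereference at a time, while `insert_after` and `delete_after` only rewire pointers, given you already hold a reference to the right node.

```python
class Node:
    """One link in the chain: a value plus a pointer to the next node."""

    def __init__(self, value):
        self.value = value
        self.next = None


def find(head, target):
    # O(n): walk the chain, one pointer dereference per step
    node = head
    while node is not None and node.value != target:
        node = node.next
    return node


def insert_after(node, value):
    # O(1) given the node: rewire two pointers, nothing moves in memory
    new = Node(value)
    new.next = node.next
    node.next = new


def delete_after(node):
    # O(1) given the predecessor: bypass the node being removed
    node.next = node.next.next


head = Node(10)
insert_after(head, 20)
insert_after(head.next, 30)        # chain: 10 -> 20 -> 30
insert_after(find(head, 10), 15)   # chain: 10 -> 15 -> 20 -> 30
delete_after(find(head, 15))       # chain: 10 -> 15 -> 30
```

The two `find` calls in the usage lines are the hidden O(n) cost the asterisk warns about: the rewiring itself is constant, but locating the spot is a walk.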

The trade-off, at a glance

Operation          What happens               Array        Linked List
Random access      get index i                O(1)         O(n)
Insert at middle   splice in a new element    O(n)         O(1)*
Delete at middle   remove an element          O(n)         O(1)*
Memory layout      where elements live        contiguous   scattered
Space overhead     per element                value only   value + pointer
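The trade-off is easy to feel in practice. A rough timing sketch (absolute numbers will vary by machine): repeatedly inserting at the front of a Python `list` shifts every element each time, array-style, while `collections.deque` rewires internal links, linked-structure-style. The function names here are illustrative.

```python
from collections import deque
from timeit import timeit


def front_inserts_list(n):
    xs = []
    for i in range(n):
        xs.insert(0, i)  # O(n) shift on every insert -> O(n^2) total


def front_inserts_deque(n):
    xs = deque()
    for i in range(n):
        xs.appendleft(i)  # O(1) pointer rewiring -> O(n) total


n = 20_000
print("list :", timeit(lambda: front_inserts_list(n), number=1))
print("deque:", timeit(lambda: front_inserts_deque(n), number=1))
```

At this size the gap is typically orders of magnitude, and it widens as n grows, since one curve is quadratic and the other linear.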