It wasn’t until I read a more thorough description of how data structures are stored in memory that I understood why a thing like a linked list can be helpful. This blog post is a brief survey of how data is written to and read from RAM.
When a computer runs code, it needs to keep track of specific things or values. These are known as variables and are stored temporarily in the computer’s Random Access Memory (RAM). RAM is different from the computer’s storage, which is where we keep files like mp3s, Word documents, applications, etc. RAM has much less room, but it can access its contents much more quickly, while storage can hold a great deal more with slower access speeds.
RAM itself is like a really big array where the indices are known as memory addresses (think of a set of shelves, billions of shelves). Each shelf stores one byte (8 bits), although some RAM designs store more than one byte per address. RAM is connected to the computer’s processor through a memory controller, which has a channel to each of the memory’s addresses. This direct connection is important; it’s how the processor can access any address in memory in constant time (O(1)).
Even though RAM is accessible very quickly and can jump to any “shelf” with relative ease, in practice the processor usually accesses several shelves that are near each other at a time. Because of this, the processor contains a small memory cache so that it can fetch a chunk of information even faster: when it asks for memory at a specific address, it also loads adjacent addresses for later.
Since 1 byte can only distinguish 256 different numbers, integers are usually stored in 4 bytes or 8 bytes (hence why RAM may hold more than 1 byte per address). This is why certain languages have different declarations for integers and “longs” depending on the magnitude of the number. For instance, when Gangnam Style surpassed 2³¹ - 1 views (the largest value a signed 32-bit integer can hold), YouTube was forced to change the data type used for video views from a 32-bit integer to a 64-bit integer.
In summary, fixed-width integers take up a predetermined (constant) amount of space, and because of this, operations on them take constant time. That convenience comes at the price of only being able to express 2^n distinct values, where n is the number of bits used to store them.
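To make that tradeoff concrete, here is a small sketch (Python is used purely for illustration; the original post names no language) of how many values n bits can represent:

```python
# With n bits we can represent 2**n distinct values.
# For signed integers (two's complement) the usable range is
# -2**(n-1) through 2**(n-1) - 1.
def value_count(bits):
    return 2 ** bits

def signed_range(bits):
    return (-2 ** (bits - 1), 2 ** (bits - 1) - 1)

print(value_count(8))     # a single byte: 256 distinct values
print(signed_range(32))   # the range Gangnam Style outgrew
```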
As we stated before, RAM is practically already an array. When we store an array, we can simply keep whatever data type we are storing in adjacent memory addresses, and the addresses act as our indices. But what if each element takes up more than 1 byte, like the integers we just talked about? Say we’re storing the cups of coffee we drank each day as integers in our array. If they are fixed-width integers (they probably will be), we can use constant-time operations on them and can find the address of any index with simple arithmetic: take the array’s starting address and add the index multiplied by the size of each element.
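That address arithmetic can be sketched like this (the starting address and element size here are hypothetical, just to show the formula):

```python
def element_address(start_address, index, element_size_bytes):
    # The i-th element lives at start + i * size: one multiply
    # and one add, i.e. O(1) no matter how large the index is.
    return start_address + index * element_size_bytes

# Suppose our coffee array of 8-byte integers starts at address 1000.
print(element_address(1000, 0, 8))  # 1000
print(element_address(1000, 3, 8))  # 1024
```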
It follows that looking up items in our array is done in constant time, just like operations on our integers. This is a very useful feature of arrays, but for it to work the array must satisfy two important criteria:
- Each index in the array stores the same number of bits
- The array is uninterrupted (its elements occupy contiguous addresses)
Arrays are famous for very efficient lookup times, provided that you have enough uninterrupted space in RAM.
Strings and Pointers
Alphabetical characters and strings are stored in RAM by converting each character into its corresponding number using a system like ASCII. So if we want to store the letter “A”, whose ASCII value is 65, we store 65 along with some information telling the processor to read it as a character rather than an integer.
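Python exposes this same character-to-number mapping through its built-in `ord` and `chr` functions, which makes the idea easy to see:

```python
# ord maps a character to its code point (ASCII value for ASCII text);
# chr goes the other way.
print(ord("A"))   # 65
print(chr(65))    # A
```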
Say that we wanted to store the names of cities we’ve visited. If we’re storing Tulsa (Oklahoma) as well as Truth or Consequences (New Mexico), each element in the array has to be big enough for the longest name. The element storing Tulsa will have a lot of wasted space, not to mention it may be hard to find enough contiguous storage if there are many cities with long names.
This is an example of where pointers can be useful. Instead of trying to find one large block of RAM to hold our array, we can put each string in whatever addresses can fit it and store only the address of its first character in our array. We are pointing to where that element’s content can be found. This allows our strings to be different lengths and lets us use whatever memory spaces are available. The downside is that since our strings may be scattered throughout RAM, they can’t be preloaded into the processor’s cache for even faster access. Even so, an array of pointers still has constant lookup time.
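A toy model of this (the addresses 500 and 9000 are made up, and a real pointer would not carry a length alongside it in quite this way) shows the array holding only starting addresses while the strings themselves sit far apart:

```python
# Model RAM as a mapping from address to one-byte value, and store each
# city name wherever there happens to be room. The "array" then holds
# only the starting address and length of each string.
ram = {}

def store_string(start, text):
    for offset, ch in enumerate(text):
        ram[start + offset] = ord(ch)  # one character per "shelf"
    return start

# Hypothetical free regions; note the strings are nowhere near each other.
cities = [
    (store_string(500, "Tulsa"), 5),
    (store_string(9000, "Truth or Consequences"), 21),
]

def read_string(start, length):
    return "".join(chr(ram[start + i]) for i in range(length))

print(read_string(*cities[1]))  # Truth or Consequences
```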
Here’s where things get really interesting (not that they weren’t interesting before).
The arrays we described earlier are known as static arrays. In languages like C and Java you have to define the length of these arrays when you create them so that the computer knows how much space to allocate for them. This is also why you can’t keep adding elements to an already full static array without getting an error. That restriction is a problem in many cases: we often want a collection whose size we don’t know before we start adding elements to it.
Upon initialization, a dynamic array is given a length determined by its input. Whenever additional elements require the array to grow, a new, bigger array (usually twice as big) is allocated somewhere in memory. The old array is copied element by element into the new one, and then its space is freed. After that we can finally add our new elements to the larger array.
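The grow-and-copy behavior described above can be sketched as a small class (a simplified model, not how any particular language actually implements it; Python’s own `list` hides all of this):

```python
class DynamicArray:
    """Minimal sketch of a dynamic array that doubles when full."""

    def __init__(self):
        self.capacity = 1
        self.length = 0
        self.slots = [None] * self.capacity  # the underlying "static" array

    def append(self, value):
        if self.length == self.capacity:
            # Full: allocate a new array twice as big and copy everything
            # over. This step is O(n), but it only happens occasionally.
            self.capacity *= 2
            new_slots = [None] * self.capacity
            for i in range(self.length):
                new_slots[i] = self.slots[i]
            self.slots = new_slots
        self.slots[self.length] = value
        self.length += 1

arr = DynamicArray()
for n in range(5):
    arr.append(n)
print(arr.length, arr.capacity)  # 5 8
```

Notice that after five appends the capacity has doubled from 1 to 8; most appends were O(1), and only the ones that triggered a resize paid the O(n) copy.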
This size flexibility comes with a disadvantage. The resize above has to iterate through every element, which takes O(n) time. In most cases appending to our array can still be done in constant time; the tradeoff with dynamic arrays is being able to declare an array of unspecified size in exchange for appends that occasionally take O(n) time in the worst case.
What if we wanted a data structure whose size we don’t have to define explicitly at declaration and that always has constant-time appends? This is where linked lists at last become useful.
Each node or element of a linked list is like a small array that stores the node’s value and a pointer to the next node. Since we’re using pointers again, the nodes do not have to sit in sequential memory addresses. A linked list keeps references to its head node and its tail node, so appending or prepending a node can always be done in constant time. If we want to look up or find a node in our list, however, we must walk through the nodes one by one, which takes O(n) time.
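A minimal sketch of such a list (illustrative Python, with the head and tail references the paragraph above describes):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # pointer to the next node, or None at the tail

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        # O(1): the tail reference lets us attach without walking the list.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def prepend(self, value):
        # O(1): just swap in a new head.
        node = Node(value)
        node.next = self.head
        self.head = node
        if self.tail is None:
            self.tail = node

    def find(self, value):
        # O(n): we have no choice but to follow the pointers one by one.
        current = self.head
        while current is not None:
            if current.value == value:
                return current
            current = current.next
        return None
```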
What we get with linked lists is constant-time appends and prepends, meaning we can add nodes to our hearts’ delight without worrying about inefficiency. What we lose is the ability to quickly find an element within the list, which we can easily do with dynamic arrays.
Looking up values in a data structure is very important and useful. This is probably why arrays (constant-time access) are more commonly used than linked lists (O(n) access).
Suppose that we wanted to keep track of, and be able to access, the number of times each character is used in the book “Building Java Programs” (I like this book). If we map each character to its ASCII value, then we could store the number of times “A” appears at address 65, and so on. This lets us see that an array really is a lookup table with two columns, index and value, where we just can’t change the index column. Hash tables adapt the array-like structure of RAM so that we can access values by a key other than an address.
Very briefly: to store a value under a key, we take the key we want (say the letter “A”), pass it through a hashing method (for instance, finding its ASCII value), and use the returned number as the address in memory where we store, and later access, the value for that key. This is an extremely simple hashing method; real hashing methods are much more complex.
If two different keys produce the same value through our hashing method, we call this a hash collision. Collisions can be handled in a few ways; one solution is to keep a linked list at that address and iterate through it until we find the right key. In the rare case that all of the keys hash to the same value (at that point, why not use a different hashing method?), we essentially have a single linked list and the lookup time degrades to O(n) again.
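Here is a sketch of that chaining idea (the character-sum hash and the bucket count of 128 are toy choices for illustration; real hash tables use far better hash functions):

```python
class HashTable:
    """Sketch of a hash table that resolves collisions by chaining."""

    def __init__(self, size=128):
        # Each address (bucket) holds a chain of [key, value] pairs.
        self.buckets = [[] for _ in range(size)]

    def _hash(self, key):
        # Toy hashing method: sum the character codes, wrapped to fit.
        return sum(ord(ch) for ch in key) % len(self.buckets)

    def set(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for pair in bucket:
            if pair[0] == key:       # key already present: overwrite
                pair[1] = value
                return
        bucket.append([key, value])  # new key (or collision): chain it

    def get(self, key):
        # Walk the chain at this address until we find the right key.
        for stored_key, value in self.buckets[self._hash(key)]:
            if stored_key == key:
                return value
        raise KeyError(key)

counts = HashTable()
counts.set("A", 1017)  # hypothetical count for 'A' in the book
print(counts.get("A"))  # 1017
```

Note that “ab” and “ba” collide under this hash (their character sums are equal), yet both keys remain retrievable because each one is found by walking the shared chain.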
The advantage of using a hash table is fast access to values by their keys. The disadvantages are that lookups slow down when there are hash collisions, and that while accessing a value by its key is efficient, the reverse is not: to find a key based on its value we have to iterate through every element in our hash table.