The Heart and Soul of Computer Science

Software engineering exists within two primary constraints:

• Process execution speed
• Memory available for allocation

Each application performs under these limitations. The problem emerges when a "user" is introduced into the equation.
Realizing that we are building with an end user (a human, like you and me) in mind is pivotal to success.

"Latency is a measure of time delay experienced in a system. It limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human-machine interaction, perceptible latency has a strong effect on user satisfaction and usability."
~ Wikipedia


For that same reason, the storage of data and the retrieval of that data are top of mind when engineering behavioral logic. This is where algorithms do the heavy lifting, in the following ways:
  1. Sorting data
  2. Searching data
  3. Storing data
These algorithms are integral to data structures. The heart and soul of data structures begins with the classic linked-list-versus-array tradeoffs:
  • It's easier to store data of different sizes in a linked list. An array assumes every element is exactly the same size.
  • It's easier for a linked list to grow organically. An array's size needs to be known ahead of time, or re-created when it needs to grow. (See the linked-list sketch under "Lists" below.)
  • Shuffling a linked list is just a matter of changing what points to what. Shuffling an array is more complicated and/or takes more memory.
  • As long as your iterations all happen in a "foreach" context, you don't lose any performance in iteration.
All data is collected, sorted, and acted upon based on application logic, and different domains pose different problems. The algorithm determines how an item is created and how a search will be conducted. The main takeaway here is to be efficient in ways that match an application's actual usage.
Time complexity is the usual gauge for the number and cost of an algorithm's computations: a linear scan of a million elements takes on the order of a million steps, while a binary search of the same (sorted) data takes about twenty.
The algorithm determines what data structure to use:
Hash tables: 
A combination of an array with small lists attached to each array element. A hashing function computes a value for each element. To insert, compute the hash, use it to find the right list in the array, then add the element into that list.

To look up an element, compute the hash value, find the list in the array, and walk through it. Getting the hashing function right, and experimenting with the size of the array, are important.
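
A minimal sketch in Python (the bucket count of 8 and the use of Python's built-in hash() in place of a custom hashing function are illustrative assumptions, not a tuned design):

# A toy hash table: an array of buckets, each bucket a small list.

class HashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def insert(self, key, value):
        # Compute the hash, use it to find a list in the array,
        # then add the element into that list.
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # key already present: overwrite
                return
        bucket.append((key, value))

    def lookup(self, key):
        # Compute the hash, find the list, and walk through it.
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        return None

table = HashTable()
table.insert("latency", 42)
print(table.lookup("latency"))  # 42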


Trees: 
A divide-and-conquer data structure with logarithmic performance. Variants include the binary tree, the B-tree, and self-balancing trees.
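
A minimal binary search tree sketch in Python (assuming unique integer keys and no rebalancing; B-trees and self-balancing variants such as AVL or red-black trees build extra logic on top of the same idea):

# A toy binary search tree: each comparison discards half of the
# remaining keys, giving roughly logarithmic search on balanced input.

class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicate keys are ignored

def contains(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6, 14]:
    root = insert(root, k)
print(contains(root, 6), contains(root, 7))  # True False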

Lists: 

Must be built dynamically. Cheap insertion at the beginning, and storage fits the data exactly. Good for small, dynamic sets of data. Usually done by adding a 'next' pointer to the structure.
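
A minimal singly linked list sketch in Python (objects stand in for C-style structs carrying the 'next' pointer; ListNode and push_front are illustrative names):

# A toy singly linked list: each node carries a 'next' pointer, so the
# list grows one node at a time and its storage fits the data exactly.

class ListNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, value):
    # Cheap insertion at the beginning: one allocation, one pointer change.
    return ListNode(value, head)

def iterate(head):
    # A "foreach"-style walk; per-element iteration cost matches an array's.
    while head is not None:
        yield head.value
        head = head.next

head = None
for v in [3, 2, 1]:
    head = push_front(head, v)
print(list(iterate(head)))  # [1, 2, 3]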

Arrays: 
Great for compile-time data. Contiguous blocks of memory. Good for data whose size is known, or relatively static. Can be built at compile time, or dynamically at run time.

Insertion is pricey, since it can involve moving many elements around, or even allocating new memory to fit the new element in. Lookup can be made very fast, however, since the array can be sorted and a binary search algorithm applied.
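
A minimal binary search sketch over a sorted Python list (the standard library's bisect module packages the same idea):

# Binary search on a sorted array: halve the search range on each
# comparison, so a million elements need only about twenty steps.

def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

data = [2, 5, 9, 14, 21, 34]
print(binary_search(data, 14))  # 3
print(binary_search(data, 4))   # -1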


Reducing the number of steps in retrieval is paramount. Optimizing the opportunity cost of each step is both an art and a science in software engineering.