# What Are Data Structures And Algorithms?

Computer science, data science, data structures, algorithms -- the lingo that comes with coding and building web applications can be complex and confusing! While some terms are specific to what you want to build, and some are related to the structure of programs instead of the actual mechanics that power them, knowing these major concepts is part of becoming an effective and trusted computer science expert.

Two of the most important terms for computer science enthusiasts to know are data structures and algorithms, often thought of as the building blocks of computer science. Both of these fundamental elements are needed to solve common CS problems and provide efficient and clear solutions. Perhaps the simplest way to think of them is that one provides the way to solve a problem, while the other involves how you organize and manage the data you are using to solve the problem.

So what are data structures and algorithms? What are the basics you should know as you continue to take coding courses or enroll in data science classes at your local school? This guide will walk through the essential elements of each, the differences and misconceptions between them, and next steps to continue learning more about these fundamentally important concepts.

## What Are Data Structures?

The basic definition of a data structure is a format for organizing, managing, and storing data, specifically to make it easier and more efficient to access or modify. It is made up of a collection of data values, functions or operations that can be applied to the data itself, and the relationships among data values.

In simpler terms, many computer science problems are based on elements within the data itself. This means that attempting to solve CS problems requires not only the data, but also methods for organizing and accessing that data. Once the data is better structured for manipulation, developers can construct operations to add, modify, or delete it.

Many data structures share a standard organization of data at the memory level, but each offers different functions and operations suited to the needs of a CS specialist. To make this clearer, let’s first explore some of the most common and widely used data structures:

• Linear Data Structure: Linked List, Stack, Queue, Array.
• Hierarchical Data Structures: Tree, Heap, Trie.
• Miscellaneous Data Structures: HashMap, Graph, Matrix.
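To make the first and third tiers concrete, here is a sketch of how a few of these structures surface in everyday Python. The list, deque, and dict shown are standard-library types; the variable names are purely illustrative:

```python
from collections import deque

# Array-like: a Python list supports indexed access and appends.
arr = [10, 20, 30]
arr.append(40)

# Stack: last-in, first-out -- append to push, pop to remove.
stack = []
stack.append("a")
stack.append("b")
top = stack.pop()          # removes "b", the most recent item

# Queue: first-in, first-out -- deque gives O(1) operations at both ends.
queue = deque()
queue.append("first")
queue.append("second")
front = queue.popleft()    # removes "first", the oldest item

# HashMap: a Python dict maps keys to values in (amortized) O(1) time.
scores = {"alice": 90, "bob": 85}
scores["carol"] = 95
```

Notice that the same underlying memory (a contiguous block of elements) backs both the list-as-array and the list-as-stack; what changes is the set of operations you allow yourself to use.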

Not only are there “tiers” of data structures, but there are also different options that provide different functions and methods of data organization and manipulation within each tier. While a CS beginner won’t know the difference between these right away, experienced CS professionals rely on specific data structures for specific needs depending on the problems they are attempting to solve.

Experts weigh other considerations when choosing a data structure. Memory allocation is perhaps the biggest: how much space and complexity will a specific operation require? Another significant concern is how easily the structure can be customized to the problem at hand, so that no time is wasted on an inefficient fit.

For intermediate computer scientists who are familiar with the concept of object-oriented programming, you can also think of data structures as similar to “classes” -- tools for collecting similar sets of data in one specific place. However, data structures additionally provide techniques for manipulating the data, beyond simply gathering or organizing it.
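That class analogy can be made literal. Here is a minimal stack written as a Python class -- a hypothetical example, not a standard-library type -- that both collects the data and bundles the operations for manipulating it:

```python
class Stack:
    """A minimal stack: gathers data AND provides operations on it."""

    def __init__(self):
        self._items = []

    def push(self, item):
        """Add an item to the top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the most recently pushed item."""
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        """Look at the top item without removing it (None if empty)."""
        return self._items[-1] if self._items else None

    def __len__(self):
        return len(self._items)


s = Stack()
s.push(1)
s.push(2)
```

The class gathers the data (like any container) but also encodes the discipline -- last-in, first-out -- which is exactly the extra layer a data structure adds beyond mere storage.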

## What Are Algorithms?

An algorithm is a finite sequence of well-defined, computer-implementable instructions. It can perform computations, and it can be designed to solve particular classes of problems. Algorithms are clear, direct specifications for performing calculations, as well as for enabling data processing, automated reasoning, and other tasks.

To again put this in layman’s terms, algorithms are a series of steps that are clearly defined and understandable by the computer in order to solve particular problems or classes of problems. Similar to data structures, there are also different classes of algorithms that are specifically designed for unique purposes. Here are some commonly used examples:

• Sorting Algorithms: Merge Sort, Quick Sort, Tim Sort, etc.
• Searching Algorithms: Linear Search, Binary Search.
• Shortest Path Algorithms: Dijkstra’s algorithm, Bellman-Ford algorithm.
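As a small illustrative sketch of the second category, here is binary search -- the classic example of an algorithm whose efficiency depends on how the data is structured (it only works on sorted data):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Halves the search range on every step, so it runs in O(log n)
    time -- but only on data that is already sorted.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target must be in the upper half
        else:
            high = mid - 1  # target must be in the lower half
    return -1


binary_search([1, 3, 5, 7, 9, 11], 7)   # → 3
```

Compare this with a linear search, which checks every element one by one: on a million sorted items, binary search needs about 20 comparisons where linear search may need a million.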

Experienced CS professionals often explain algorithms as the logic behind a particular program, not just the complete code or core of the program. Knowing which algorithms work best for different tasks isn’t just essential for getting a program to actually function, but also for streamlining the user experience and avoiding lag, bugs, and time-consuming functions when simpler and more elegant solutions would serve.

But how do you know if an algorithm is efficient or effective? There are two central methods for measuring the efficacy of algorithms:

### Space Complexity

Space complexity refers to the amount of memory an algorithm requires while it is being executed. It is especially important for multi-user systems, as well as for situations where limited memory is available. Beyond just being a hassle when a program runs slowly, an algorithm that consumes too much memory can undermine the speed of decision-making that draws professionals to data science in the first place.

How do you know how much space an algorithm requires? Put simply, an algorithm usually requires space for the following components:

• Instruction Space: This fixed space, which can vary depending on how many lines of code are in a particular program, is the space required to store the executable version of the program.
• Data Space: This is the space needed to store all the constants and variables.
• Environment Space: The space required to store environmental information needed to resume a suspended function.
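A quick way to build intuition for the data-space component is to compare two functions that compute the same kind of result but allocate very differently. This is a hypothetical sketch; the function names are just illustrative:

```python
def total_constant_space(numbers):
    """O(1) extra space: a single running accumulator, no matter the input size."""
    total = 0
    for n in numbers:
        total += n
    return total


def running_totals(numbers):
    """O(n) extra space: builds an entirely new list of partial sums."""
    totals = []
    total = 0
    for n in numbers:
        total += n
        totals.append(total)
    return totals
```

Both functions walk the same input once, so their time complexity is identical; only the second pays for a new list proportional to the input size -- the kind of trade-off that space-complexity analysis makes visible.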

### Time Complexity

The surface explanation of time complexity is simple: it refers to the amount of time a program requires to run to completion. For obvious reasons, algorithms that can complete their work in the shortest possible time are always preferred, both for the decreased load on memory and the quicker delivery of results.

Time complexity is most often estimated by tallying the number of elementary steps an algorithm performs to finish execution. Ever the realists, computer scientists usually calculate an algorithm’s worst-case time complexity -- the longest possible duration the algorithm could require -- because different kinds of data input can affect performance very differently.
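The step-tallying idea can be made concrete by instrumenting a linear search to count its comparisons. This is a small illustrative sketch (the counter is added purely for demonstration):

```python
def linear_search_steps(items, target):
    """Linear search that also reports the number of comparisons performed.

    The comparison count is a rough, machine-independent proxy for
    running time -- exactly what time-complexity analysis estimates.
    """
    steps = 0
    for i, item in enumerate(items):
        steps += 1
        if item == target:
            return i, steps
    return -1, steps


data = list(range(1, 101))

# Best case: the target is the very first element -- one comparison.
best = linear_search_steps(data, 1)

# Worst case: the target is last (or missing) -- every element is checked.
worst = linear_search_steps(data, 100)
```

The gap between best and worst case here is exactly why the worst case is the standard yardstick: an algorithm that is fast only on lucky inputs cannot be relied on.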

So why are these two concepts so important for you to know, even if your interests lie elsewhere in computer science or data science? The simple answer is that they are inseparable -- one provides the raw materials for computer science work, while the other provides the tools for “harvesting” those raw materials. Additionally, the plans you have for developing a robust Python or Ruby application depend largely on finding the most efficient logic and data structures to support the back-end of your application.

Need more reasons? Here are some of the best:

• Improving user experience on the front-end: Beyond simply making an app or program look cool, clean and seamless user experience is essential for positive word of mouth and adoption of your brand. Knowing the best methods of accessing data and returning that data on the front-end for users is key to keeping them coming back.
• Streamlining research and query-based data science: Especially true for research environments on strict time or budgetary deadlines, finding the fastest method to manipulate raw data and create informed and accurate reports to present to your clients or superiors is essential to job success in the field.
• You’ll have a better chance at getting a job: Many large companies now look for developers and programmers who don’t just know code, but are well-versed in the best methods of structuring data and algorithms to get the most out of that code (in simple terms, keeping the code clean). Which leads to the next point…
• You’ll make more money at your next job: Because developers with significant data structure and algorithm knowledge can handle more complex projects (or manage less experienced developers on those projects), the average starting salary for an experienced CS professional is higher than the usual salaries for beginning developers.
• You can make better business and operational decisions by manipulating your data effectively: Analytics, predictive technology, and trend research are becoming essential tools for businesses in a variety of economic sectors. The more you know the tricks and tools available to learn from your data to better guide decision-making, the better your own business or web application can operate!