Creating the Queue Data Structure in Golang

• Jason Ladd

Queues are a fundamental data structure used everywhere in programming. They're a linear structure built on top of arrays, which means they represent sequential data. Queues model situations in the real world where a sequence of things needs to follow "first in, first out", or FIFO, behavior. A common example is a line of people waiting to check out in a store. You definitely want people to be attended to in the order they entered the line, otherwise you'd have some angry customers. So if you were creating some type of online shopping experience, you'd use a queue to accomplish the digital version of people waiting in line. And yes, I'm aware that many speakers of non-American English call the real-world equivalent "queues" anyway.

Of course, we're gonna be using Go for this. So the first thing we're going to do is declare a struct to represent our queue. We're gonna need five properties on it: a capacity, a size, a front, a back, and a slice. The capacity will let us restrict the queue to a fixed size, the front will be the index of the element that's "first in line", the back will of course be the index of the element last in line, and we'll use our slice as the array of elements in the queue.

One important concept to understand is that even though we're going to use a slice to hold the data of the queue, we won't be shifting any elements around when some are added or removed. Think about the worst-case scenario: we remove the front element from a really long queue, which means we'd have to shift every remaining element in the slice forward by one spot, an O(n) operation. If we had to do that every time we removed the front element, it would be pretty inefficient. So instead, what we're gonna do is just let the "front" property be an index that we move forward by one whenever we remove, or "dequeue", an element. Doing it this way makes both adding and removing elements O(1) in terms of time complexity. So let's create the struct:


type Queue struct {
	Front    int
	Back     int
	Capacity uint
	Size     uint
	Slice    []any
	mu       sync.Mutex
}

We're also going to add a mutex to avoid race conditions when mutating a queue instance.

And we need a function that will let us instantiate a queue of our specified size:


func CreateQueue(capacity uint) *Queue {
	return &Queue{
		Front:    -1,
		Back:     -1,
		Capacity: capacity,
		Size:     0,
		Slice:    make([]any, capacity),
	}
}

The Methods

Now, let's add some methods to our queue. Since it has a capacity, we need to be able to check if it's full. We also need to check if it's empty before we remove something from it. Since we'll be using the size property to keep track of how many elements are in the queue at any point in time, all we need to do is check if it's equal to the capacity to know if the queue is full. And of course, if size is 0, the queue is empty. You might be wondering why we don't just check the index of the back to know if our queue is full, but you've gotta remember that the front index can be anywhere in the slice, since we only move it along the slice like a pointer. I'll go into more detail on how to handle the issues this causes later in the post.


func (qu *Queue) IsFull() bool {
	return qu.Size == qu.Capacity
}

func (qu *Queue) IsEmpty() bool {
	return qu.Size == 0
}

Then of course, we need to be able to add elements to the back of the queue and have a way for elements to leave the front of it. Let's first see how to add to the queue with a method we'll call "enqueue":


func (qu *Queue) Enqueue(data any) {
	qu.mu.Lock()
	defer qu.mu.Unlock()

	if qu.IsFull() {
		return
	}

	if qu.Front == -1 {
		qu.Front = 0
	}

	qu.Slice[qu.Back+1] = data
	qu.Size++
	qu.Back++
}

First off, since we're making mutations to the queue, we do the locking and unlocking with the mutex to keep it concurrency safe. Then we check if the queue is already full using the IsFull method we declared earlier. In our CreateQueue function, we initialize the front and back indices to -1 because they don't exist in the slice yet, so when we enqueue our first element, we know it will technically be both the front and the back: we set front to the 0th index and increment back by one (which also sets it to 0 for the first enqueue). Every enqueue after that leaves front where it is, writes the new element into the slot right after the current back, increments size by one, and increments back by one. Notice how we can keep adding elements to the back of the queue until it's full.

Now, we need a way of "dequeueing" or allowing elements to leave the queue. Like I mentioned earlier, actually removing an element from the slice would require shifting all elements after the one removed up by one which is O(n). We definitely don't want that, so we'll just be moving the front index up by one every time we dequeue. That method will look like this:


func (qu *Queue) Dequeue() any {
	qu.mu.Lock()
	defer qu.mu.Unlock()

	if qu.IsEmpty() {
		return math.MinInt // sentinel value so the caller can tell the dequeue failed
	}

	data := qu.Slice[qu.Front]
	qu.Front++
	qu.Size--

	return data
}

Again, we use the mutex for concurrency safety, then check if the queue is empty. To actually do the dequeueing, we get the element at the current front index, shift our front index "pointer" to the next element, then decrement the size. Finally, we return whatever that front element was. Now we've got a working queue! And we can add to it and remove from it in O(1) time, so everything is efficient... but wait... have you noticed something?

This queue will work fine if it's created, enqueued up to its capacity, dequeued, and that's it. But what would happen if we needed to constantly be enqueueing and dequeueing elements? We would very likely run into a situation where we'd try to enqueue elements at indices outside the bounds of our slice!


	qu.Slice[qu.Back+1] = data

This line right here is the culprit: in the enqueue method we check whether the queue is full, but just because size is small enough doesn't mean the index at Back+1 falls inside the bounds of our slice. Here's an example. Say we have a queue with a capacity of 10 elements like this:

_ _ _ _ _ _ _ _ _ _ 

and we enqueue a bunch of elements

a b c d e f g h i j 

then we dequeue up to the last couple of elements

_ _ _ _ _ _ _ _ i j 

Here, i is the front element, j is the back and size is only 2, but there aren't any spaces after j to store any more elements! So what should we do?

Circular Queues

The answer is, we need to be able to just wrap around back to the beginning of the slice and store elements there, like this:

k l m n o p _ _ i j 

This fixes our problem because we don't have to change anything about the way the front and the back of the queue works. i is still at the front and p is now the back. If we were to dequeue i, j, and k for example, l would just be the new front and p would still be the back:

_ l m n o p _ _ _ _ 

Making the queue 'circular' like this allows us to continuously add to it without needing to re-size the underlying slice as long as we keep the size within the queue's capacity. Let's look at what changes we'd need to make to accomplish this.


func (qu *Queue) Enqueue(elem any) {
	qu.mu.Lock()
	defer qu.mu.Unlock()

	if qu.IsFull() {
		return
	}

	if qu.Front == -1 {
		qu.Front = 0
	}

	qu.Back = (qu.Back + 1) % int(qu.Capacity)
	qu.Slice[qu.Back] = elem
	qu.Size++
}

All we have to do is use the modulo operator: take the index of the 'new' back element modulo the capacity, and we get its offset from the start of the slice. (Note that we advance Back before writing, so the very first enqueue wraps -1 + 1 to index 0, and since Capacity is a uint while the indices are ints, we convert it for the remainder.) Then we can do the same thing with the front, since we could dequeue past the end of the slice too.


func (qu *Queue) Dequeue() any {
	qu.mu.Lock()
	defer qu.mu.Unlock()

	if qu.IsEmpty() {
		return math.MinInt // sentinel value so the caller can tell the dequeue failed
	}

	elem := qu.Slice[qu.Front]
	qu.Front = (qu.Front + 1) % int(qu.Capacity)
	qu.Size--
	return elem
}

And that's basically all there is to implementing queues in Go!

The Whole Implementation


package main

import (
	"fmt"
	"math"
	"sync"
)

type Queue struct {
	Front    int
	Back     int
	Capacity uint
	Size     uint
	Slice    []any
	mu       sync.Mutex
}

func (qu *Queue) IsFull() bool {
	return qu.Size == qu.Capacity
}

func (qu *Queue) IsEmpty() bool {
	return qu.Size == 0
}

func CreateQueue(capacity uint) *Queue {
	return &Queue{
		Front:    -1,
		Back:     -1,
		Capacity: capacity,
		Size:     0,
		Slice:    make([]any, capacity),
	}
}

func (qu *Queue) Enqueue(elem any) {
	qu.mu.Lock()
	defer qu.mu.Unlock()

	if qu.IsFull() {
		return
	}

	if qu.Front == -1 {
		qu.Front = 0
	}

	qu.Back = (qu.Back + 1) % int(qu.Capacity)
	qu.Slice[qu.Back] = elem
	qu.Size++
}

func (qu *Queue) Dequeue() any {
	qu.mu.Lock()
	defer qu.mu.Unlock()

	if qu.IsEmpty() {
		return math.MinInt // sentinel value so the caller can tell the dequeue failed
	}

	elem := qu.Slice[qu.Front]
	qu.Front = (qu.Front + 1) % int(qu.Capacity)
	qu.Size--
	return elem
}

func (qu *Queue) Print() {
	for i := uint(0); i < qu.Size; i++ {
		// Wrap around the slice so a queue whose front has moved past
		// index 0 still prints in FIFO order.
		idx := (qu.Front + int(i)) % int(qu.Capacity)
		fmt.Printf("%v, ", qu.Slice[idx])
	}
	fmt.Print("\n")
}

Creating Stack Data Structures in Golang

So, basically a stack is a linear data structure that follows the principle of LIFO (Last in, First out). An easy way to think of this is having a stack of books on a desk. As you layer books on top of each other, you have to remove the ones at th...